Artificial Intelligence (AI) is reshaping the educational landscape, offering tools that personalize learning, automate routine tasks, and expand access to knowledge. As educators grapple with integrating these technologies, understanding their potential and pitfalls becomes crucial. This is where perspectives from experts like Dr. Torrey Trust, Professor of Learning Technology, can be invaluable. With years of research, teaching, and service in the field of educational technology, Dr. Trust provides a vital perspective on emerging technologies and how they influence learning.

Torrey Trust
Professor of Learning Technology in the Department of Teacher Education and Curriculum Studies in the College of Education at UMass Amherst
- Spotlighted in both 2020 and 2024 as one of the Top 30 Higher Ed IT Influencers to Follow.
- Among other honors, Dr. Trust received the 2023 University of Massachusetts Amherst Distinguished Teaching Award.
- Her work centers on the critical examination of the relationship between teaching, learning, and technology, and on how technology can enhance teacher and student learning.
What do you think are the major challenges AI poses in maintaining academic integrity, specifically concerning plagiarism?
I think the biggest challenge is determining where student thinking ends and where AI contribution begins. With the initial launch of ChatGPT, students might have asked it to write an entire draft for them, which is clearly plagiarism. But now, Grammarly, Copilot in Microsoft Office tools, and Gemini in Google Workspace apps can write text, autocomplete sentences, and rewrite entire paragraphs. Even the popular graphic design tool, Canva, has AI tools that can write text for posters, design presentations, and create visuals with simple prompting (e.g., “create a poster about the Psychology of Play”).
At the same time that we have technologies that blend AI- and human-generated content, faculty have no reliable way to prove that a student's work is AI-generated (AI text detectors are notoriously unreliable and especially problematic for non-native English speakers). This leads to a dilemma: should we police students' use of AI, or should we teach students to critically examine where their thinking and writing end and where AI takes over, and ask them to report on whether their use of AI actually aided their learning or subverted it? (See Marc Watkins’ AI-Assisted Learning Template, which helps students be more transparent about their AI use in assignments, homework, and class work.)
How do you differentiate between the legitimate use of AI as a learning tool and its misuse as a shortcut that undermines academic honesty?
I think that when a student uses AI to do their thinking, writing, communicating, reading, or other work for them (e.g., summarizing lecture videos), that undermines academic integrity, because the purpose of academics is to help students develop their ability to think critically, communicate clearly, and act as informed citizens in a democratic society. I also think that using AI just to look up information (the most commonly reported use of AI by higher education students) is a misuse, because GenAI tools are not designed to present factually accurate information; they are simply guessing machines that predict which words go together to make the most plausible-sounding response. I also think that using GenAI media generators to create images, video, or audio, instead of first looking for public domain or Creative Commons media that would actually give attribution to authors and artists, is a misuse.

However, I do think there are helpful ways to use GenAI tools, in particular when these tools can help students engage in low-stakes trial-and-error learning (like practicing a debate before doing the debate live in person) or simulate an experience (like a job interview), aid creative thinking (e.g., brainstorming ideas for innovative ways to do a class presentation), offer immediate feedback (e.g., students often have to wait weeks to get a grade and feedback from their instructor or TA), and even prompt students to engage in thinking (e.g., a choose-your-own-adventure interactive story in which students are prompted to determine the best path to take).
How should educators adapt their assessment methods to account for the increasing presence of AI tools?
I think educators should seek out a mix of assessment methods so students have multiple ways to showcase their learning. Exams are fine to use as assessments, but if students have only one to three high-stakes exams as a way to showcase their learning, they are more likely to turn to AI tools for assistance or even cheating. Banning technology for all exams is not the best solution either, as it can create more obstacles for disabled students and non-native English speakers. I think there needs to be a mix of exams and authentic assessments (e.g., hands-on projects), as well as several low-stakes ways students can demonstrate their learning (e.g., brief quizzes or exit tickets). It’s also important to discuss academic integrity with students: what do you allow when it comes to using AI, and what do you not allow? If students don’t know what you allow, they will be left guessing, and if they guess wrong it can lead to serious consequences like failing a class or losing a scholarship.
Dr. Trust’s strategies for faculty to prevent plagiarism in an age when AI is so accessible:
- Trust, T. (2023). Essential considerations for addressing the possibility of AI-driven cheating, Part 1. Faculty Focus.
- Trust, T. (2023). Essential considerations for addressing the possibility of AI-driven cheating, Part 2. Faculty Focus.
What measures can be taken to educate students about AI's appropriate and ethical use in their studies?
Every subject can incorporate the ethical issues of AI. Just ask AI! Seriously, if you’re teaching Nursing and want to incorporate the ethical issue of AI’s impact on the environment, ask a GenAI tool to come up with 10 ways to do this (here’s a suggestion from ChatGPT: “Have students analyze real-life scenarios where AI improved nursing care but involved high-energy usage. For example, AI in telemedicine during the COVID-19 pandemic. Discuss the trade-offs between healthcare advancements and environmental sustainability.”). Here’s a slide deck on AI ethics I created to help instructors think through these issues.
Based on your own teaching experience, have you encountered instances where students misused AI? How did you handle it?
Yes, a couple of times. Each time, I let the student know that I thought their work looked AI-generated. Usually it was because their tone changed from their previous writing, they wrote about things we didn’t cover in class, or the writing was too generic for a personal reflection. I told them that I would be reviewing their work more closely from then on, and that if I saw what I thought to be another instance of AI-generated work, they would have to meet with me and provide evidence that the work was their own. So far, it hasn’t escalated to that.
How do current plagiarism detection technologies fare when it comes to identifying content produced or assisted by AI tools?
They are terrible; never use them. Simple as that! Here’s a slide deck on the perils of AI Text Detectors.
What policies should institutions implement to address the ethical use of AI in academic work?
I think institutions need to create policies that are flexible enough to allow instructors to determine what works best for their own classes. But I also think every instructor needs their own policy and needs to explain it clearly to students; otherwise, it leads to a lot of guesswork. For example, if an instructor bans all AI in a class, but a student uses AI to summarize a reading to help them understand it better, is that academic dishonesty? Instructors need to make this clear. Some institutions offer four statements, ranging from no AI use to full AI use, that instructors can incorporate into their syllabi. I know that UMass is currently revising its academic integrity standards to reflect the age of AI. But I still think instructors should be clear and transparent about their perspective from the very first week of the semester.