Generative AI in the Classroom
Syllabus & Assignment Statements on Generative AI
The availability of generative AI tools is now so widespread that all of us, students, faculty, and staff alike, can hardly avoid them. The TLC supports faculty in providing clear, frequently referenced guidance to students in your classes regarding if, how, and when generative AI is appropriate to use.
A syllabus statement regarding generative AI is a necessary place to start. We also recommend providing specific guidance on each major assignment in your course.
Consider using a tool like the AI Policy Generator to help create these syllabus and assignment statements quickly and easily.
Detecting and Responding to Student AI Misuse
AI detectors rely on patterns rather than voice, style, specific content, or learning processes; they are unreliable at best and can fuel social inequities at worst. You will receive false positives and false negatives, and these errors are often biased both against writers facing the greatest challenges (including students with learning disabilities, students in need of additional support, and non-native speakers of English) and against the most fluent writers. (See AI Cheating Crisis: The Guardian.)
Instead of relying on these problematic tools, use this guide to help you prevent AI plagiarism and detect AI misuse in your classes.
Stage 1: PREVENTION & FOUNDATION
Establish a clear AI Syllabus Policy. Consider using this SUNY-created AI Policy Statement Generator to help.
- Point to your syllabus policy early and often. Explain the purpose of the policy with your students.
- Have an open conversation with students about their prior experiences with AI. What you consider “cheating” they may not, and vice versa.
- Provide examples of misuse of AI.
- Repeat, reiterate, and retrain students on your policy throughout the course and with every assignment.
For those with an open or restricted AI policy:
- Educate students on how to use general AI effectively with basic prompt engineering.
- Move from general AI, like ChatGPT, to narrow AI tools.
- Identify which tools are most effective for your assignment(s). Regularly and repeatedly live demo these tools.
- Offer practice opportunities in class with follow-up discussions on students’ AI experiences.
- Incorporate AI reflections into your course design; these can be simple prompts added to your original assignments or longer versions focusing solely on AI use.
Sample Reflection Prompt: “A.I. & Your Voice”
If you used any A.I. tools at any stage of this assignment—such as brainstorming, researching, organizing, drafting, or revising—tell the story of how they fit into your writing process. Make sure you reflect on the following:
- Identify the AI tools you used and the stage(s) of your writing process where you applied them.
- Describe how these tools supported you in areas where you faced challenges.
- Discuss any limitations or drawbacks you encountered while using the AI tools.
- Explain the steps you took to ensure your final work reflects your own thinking, decisions, and voice.
- If you chose not to use AI tools, describe your reasons and reflect on how that choice influenced your writing process.
- (Optional) Include the transcript of your AI interaction.
Stage 2: DETECTION AND RESPONSE
Rather than trying to police or “catch” students, aim to support ethical academic habits. The best deterrent is a classroom culture where students feel confident in their own voice and know where AI fits—and where it doesn’t.
- Point to your syllabus policy early and often.
- Ensure students understand AI misuse as “plagiarism” by pointing to the Academic Dishonesty Policy in the Geneseo Student Policy Handbook.
- Specific strategies for detection beyond tools like Turnitin or GPTZero can be found below.
Strategy 1: Establish baseline assignments
Within the first two weeks, assign a low-stakes, in-class assignment to establish each student’s current ability. This benchmark will make it easier to recognize significant shifts in tone, complexity, or phrasing, and thus possible AI misuse, in later work.
- Use the same rubric categories (such as tone, clarity, organization) later to detect gaps.
- Connect the assignment to course themes, but do not require content knowledge.
- Have students complete it unassisted, without access to AI, spell checkers, or grammar tools.
- Have students complete it live or in a timed setting.
Strategy 2: Look for common AI phrasing
As you become more familiar with AI misuse in your field, you’ll notice common phrases; these are often vague, broad, or “empty.” Build a list of these phrases that you share with your students, and discuss why they are not helpful in your context.
(Please note that these phrases appear both in AI output and in the writing of learners developing their own academic voice. Common, slightly odd phrases alone may not signal academic dishonesty; they can simply indicate a student trying to emulate a professional tone.)
Strategy 3: Visit the Works Cited/ References page and explore several sources.
Current MLA and APA citation styles include a URL or DOI (digital object identifier) for online sources. Do those links take you to a “real” source? If not, the student might be misusing AI.
- Look for paywalls when you click on a reference: it’s unlikely the student paid for the source.
- Click on the link and look at the URL. If it contains “chatgpt,” for instance, that is a red flag.
- Skim the project’s in-text citations and match them to the end citations. If some cited sources are never discussed in-text, the student might be misusing AI.
- Randomly choose a cited author and search for them on Google. AI will “hallucinate” credible-sounding names that don’t exist (e.g., “Dr. Lisa Hammond, Harvard University”) or make up plausible paper titles that an author never actually wrote.
- Check the dates! AI will often assign publication years in the current or previous year, dates that don’t match a source’s real-world availability.
Strategy 4: Hold follow-ups or conferences
AI is new, and we’re all adjusting to it. Students may need additional support and guidance to use AI ethically. If you notice red flags, follow up by keeping the tone supportive: the goal is to promote academic integrity, and instill good practices moving forward.
- Students who have written their own work can usually articulate their ideas, explain their reasoning, and walk you through their drafting choices.
- Students who relied heavily on AI tools might struggle to articulate the purpose of the assignment. Watch for hesitation or vague explanations when you ask about their thesis, structure, or evidence; an inability to recall how they developed certain ideas or chose specific words; uncertainty about cited sources or claims made in the paper; or an inability to restate basic parts of their work.
- Reiterate, repeat, and retrain the student in the ethical uses of AI in your course.
Strategy 5: Escalate the response
If a student has had a clear conversation about AI misuse, understands the expectations, and repeats the offense, then it’s appropriate to escalate the response while still following a fair and documented process.
The age of digital tools for plagiarism detection has ended. Think of this process as a trial in which you are “prosecuting” based on solid evidence gathered over time rather than on the detection tools of the past. Make notes after every interaction, and document everything:
- The initial incident and resolution (e.g., warning, revision)
- The writing conference or discussion about AI use
- Any written acknowledgment from the student (if applicable)
- The repeated offense (specific examples or red flags)
- All assignments, drafts, and any related communication
The specific disciplinary action is at the professor’s discretion. Consult the Dean of Students and the Student Code of Conduct for next steps.
This section was adapted in part from “Detecting AI Two-Pager” by Dr. Johnny Stein, Jamestown Community College.
Optimizing AI in Higher Education
For Educators
NEW RELEASE, December 2025.
“This guide both builds on and moves beyond the work of two previous guides, published in 2023 and 2024, addressing the ways in which AI has become increasingly integrated into teaching and learning in higher education and providing guidance to support faculty and administrators in navigating this continuing changing landscape.”
This edition focuses on three key areas that remain ethically and pedagogically challenging:
- policy development
- evaluation of AI tools
- creation and use of AI tutors
Access the current edition of AI in Action here.
Student Guide to Artificial Intelligence
For Learners
Elon University and AAC&U continue to update their Student Guide to Artificial Intelligence. Visit the link below to access the most current version (2025).
Originally published in the fall of 2024, this comprehensive guide for students offers a reader-friendly review of AI terminology, possible uses, and strategies for effective use of the technology. It takes a healthy perspective on the importance of fact-checking, ethical use, and building skills through traditional writing and revising strategies.
Shared with a Creative Commons license, this guide is recommended for all higher education students and can be shared broadly with your classes.
How Generative AI is Impacting Geneseo
The Geneseo Center for Digital Learning is curating a collection of lessons, articles, and blog posts showing the ever-expanding ways that Geneseo is responding to artificial intelligence. See what some of your colleagues are doing in their classroom practices here, and consider sharing your own work as part of the collection.


