I Stopped Students Using AI This Semester
The results—and the reactions—were startling.
On the first day of the Fall semester, I leveled with my students: if I gave them the usual take-home essay assignments, I knew a lot of them would use ChatGPT to complete them, regardless of my no-AI rule. And, as I told them, I have a hard time blaming them: the use of AI has become virtually undetectable, and they are under unique pressures to work quickly and do well relative to their peers. They typically need a great GPA and a slate of extracurriculars to compete in this economy. Using AI could be the rational thing to do, even if it is arguably immoral to do so against my explicit instructions. I am not someone who believes morality is always overriding in its importance, though, so that puts me back to square one as an instructor.
But I feel strongly, as I explained, that their AI use will prevent me from doing my job of helping them grow as thinkers and writers. I’m a philosophy professor, and the argumentative essays my students produce matter not at all for their own sake. The product matters only for the process, inasmuch as that process—writing—helps them develop argumentative skills and become better critical thinkers. And according to the results of a recent MIT study, the many, many students who now use AI to write their essays will learn depressingly little. Strikingly, even students who used a search engine learned less than those who wrote using their brain only. The “brain only” group also produced better work, retained more content, and enjoyed a greater sense of ownership over their work, among other positive outcomes. As the study’s authors summarize: “Over the course of four sessions, which took place over four months, the LLM group’s participants performed worse than their counterparts in the brain-only group at all levels: neural, linguistic, scoring.”
And so, as I told my students, I was going to do my best to prevent them from using AI by moving to all in-class writing assignments. Everything they wrote would be on paper, in the presence of me and my TAs. This is not because I don’t trust them but because I think they are rational. And I don’t know that I would have eschewed AI myself as a student, had taking blue book exams in college in the early 2000s not made me fall wildly in love with writing.
Here’s how it all worked, and here are the outcomes of what felt like a big gamble but a worthwhile experiment.