The College Essay Is Dead – Long Live the 'AI Viva'
A simple solution to the problem everyone is talking about on campus
Thirty years ago I was trained to write essays. The instruction began in my first week at Oxford, when I was asked to summarise what could be learned about Homer, in six or eight handwritten sides of A4, from Milman Parry's 1930s fieldwork on the oral poets of southern Yugoslavia. The expected conclusion? That no such person as Homer had ever existed.
The programme continued weekly until, by graduation, we had learned to conjure up a zinging argument on any topic, never mind what we actually knew about it. The concept had not changed greatly since the 17th century, when John Milton was firing off his Prolusiones – entertaining Latin speeches on such essential matters as whether day is better than night, and whether orators should use long sentences. Or short ones.
But now we have AI – and there has been much gnashing of teeth over the death of the college essay. If asked to write one today, students can just ask the AI for a first draft, before spending half an hour (or less) reading through it and adding a few small changes to make it their own. You or I may think this is cheating – but they may see it as a rational embrace of the latest technology to solve the problem at hand, in much the same way as previous generations embraced the internet, or the ink cartridge.
And yet the negative impact on learning is clear. When the chatbot writes the essay, the student learns little and remembers even less. Tragicomically, human professors must waste their days reading and grading the machine-generated guff. Some may already spend longer grading essays than the students take to write them.
So what is the solution? We could go back to handwritten exams, but the students would need to be re-introduced to the pen, and the papers would still need to be organised, invigilated and graded. This feels like a backward step. We could introduce a face-to-face assessment, but the logistics are prohibitive: a typical school would need to schedule tens of thousands of hours of faculty time, and the opportunities for bias would be legion. Ironically, you would be replacing the essay with something less objective, more complicated to organise, more expensive, and no doubt more hated.
It has even been put to me that you could try to build an AI grading bot to mark the essays. But what would be the point of that? The student pays one AI to dig the hole, while the professor pays another AI to fill it in again? That would be doubling down on the dissimulation, not the education.
So here’s another idea. Why not ask the AI to do the face-to-face assessment? Let’s call this an AI Viva, by analogy with the verbal (viva voce) thesis defence traditionally reserved for PhD students. Here is how I think an AI Viva could work: students continue to be taught in class, with as much interaction with the professor as possible; they can use AI as much as they like during the course, treating it as a personal teaching assistant (a role it already plays very well). But at the end of the course, the student must meet with an AI examiner – without notes or help – and show what they have learned.
This last part is the step which hasn’t happened yet, although there are some interesting early pilots. And no – I’m not imagining that the examiner is a humanoid robot; the picture (above) is just to get us thinking. In reality, the students would arrive at a test centre ready for an exam-style interview over chat. They would open up a restricted screen, to ensure they can’t access other apps, and then the AI would ask questions, probing the candidate for knowledge and understanding of the syllabus.
So each student’s AI Viva could be different, with the conversation going in unique directions, but the mark scheme would be the same for every candidate. The point is that the AI assesses each conversation objectively and provides a transcript, a grading rationale and immediate feedback. And in this format other traditional exam problems also drop away: no cheating, copying, plagiarism, favouritism or ‘having a bad day’ (since students could keep taking the AI Viva until their scores stop improving).
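To picture how a fixed mark scheme could be applied identically to every candidate's unique conversation, here is a deliberately toy sketch in Python. Everything in it is hypothetical: the criteria, the point values and the naive keyword matching stand in for what would really be a model judging understanding, and the names are mine, not any real product's. The point it illustrates is only the structure: questions may vary, but the scoring criteria and the per-criterion rationale do not.

```python
# Toy sketch of AI Viva grading (hypothetical, illustrative only).
# The mark scheme is fixed in advance and applied identically to every
# candidate's transcript; the conversation itself can differ freely.

MARK_SCHEME = {
    "oral-formulaic theory": 2,  # illustrative criteria and point values
    "milman parry": 1,
    "epithets": 1,
}

def grade_transcript(transcript: str) -> dict:
    """Score a viva transcript against the shared mark scheme and
    return the total, the maximum, and a per-criterion rationale."""
    text = transcript.lower()
    rationale = {}
    for criterion, points in MARK_SCHEME.items():
        # A real examiner would judge understanding, not keywords;
        # substring matching here just keeps the sketch runnable.
        rationale[criterion] = points if criterion in text else 0
    return {
        "score": sum(rationale.values()),
        "max": sum(MARK_SCHEME.values()),
        "rationale": rationale,
    }

answer = "Milman Parry showed that recurring epithets fit an oral tradition."
result = grade_transcript(answer)
```

Here `result["rationale"]` records exactly which criteria earned points, which is what would let the system hand back an immediate, itemised grading rationale rather than a bare mark.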
I think a system like this would also be welcomed by academics, freeing them from the chore of grading papers while still allowing them to see their students’ progress. It might also be fairer for the students and cheaper to deliver – expanding access to education via new models we have not yet imagined. But best of all, a working AI Viva would set the AI Tutor free to do its work, removing the need to police the use of AI on campus – a fool’s errand anyway – and allowing everyone to make the most of what it offers.
So yes, the classical, Miltonian College Essay is on its way to the dustbin of history – but good riddance, I say, since what replaces it will be so much better.



In most Italian universities, exams at the end of each course are oral and in person. (In schools, too, tests are mostly oral.) Student numbers can be huge, so the exam consists of a few random questions on the topics covered, and unlike in the UK, a student can repeat exams several times in the hope of getting a better grade. Compared with their UK counterparts, our Italian students were less good at writing but had excellent memories and the capacity to draw on them orally.
Hi Sam – really like this concept, and it’s something that’s been on my mind with some teaching work I have been doing and with my own kids at school. The AI Viva makes a lot of sense at the end of a course. I’m also wondering about what happens during the course. Of course you could say that is up to the AI tutor, but will that test progress in reasoning and synthesis in the same way that coursework and/or intermediate deliverables are meant to? Some people will self-manage when they know that an end-of-course Viva is coming, but many won’t. I’ve wondered about having students submit an AI chat thread as their work, with the assessment being around the quality of their reasoning in their prompts, and the value-adding contribution that the student’s prompts made to the overall answer (much as prorata.ai assesses the contribution that each source makes to an answer).