First known student caught using ChatGPT at UK university – here’s how they were exposed
Rule one: Don’t cite a leadership book about witches and wizards
A student has been caught using ChatGPT by their university after they referenced a leadership book about witches and wizards in their essay.
The student is the first known university student in the country to be caught using the AI chatbot to cheat in an assessment.
University documents, obtained by The Tab via a Freedom of Information request, show that, as well as citing a wizarding leadership book, the student also referenced a dissertation from 1952 and a journal article from 1957.
Rather unsurprisingly, the University of Bolton student failed the essay and was forced to resit the assessment after getting caught by the university’s standards and enhancement office.
The university outlined four ways it worked out that the student had used ChatGPT.
Tasked with writing an essay about leadership theories and their practices in the real world, the standards and enhancement office said the student’s “style of writing” differed between the theory and the practice sections of the essay.
The investigation also said: “A number of statements in section links and in the practice section do not make sense.”
As well as sections not making sense, the essay only referenced two journals which were available to students on Moodle – the university’s integrated system where students access all their course materials. There were no sources used from the unit’s reading list at all.
Instead, the student used what the investigating staff described as “obscure references”.
One such reference was a largely unknown PhD student’s dissertation written 71 years ago, Edwin Francis Harris’ Measuring industrial leadership and its implications for training supervisors, which he wrote whilst at Ohio State University in 1952.
A journal article from 1957, Howard Baumgartel’s Leadership style as a variable in research administration, was also cited as a reference in the essay.
Alongside the references from the 1950s was a reference to Aditya Simha’s book, Leadership Insights for Wizards and Witches: Exploring Effective Leadership Practices through Popular Culture.
The book, which came out in May last year and could be yours for just £20 from Amazon, is described as “outlining various leadership styles, theories, and concepts through the imaginative lens of J.K. Rowling’s magical world”.
The 152-page book “combines the immersive and enchanting context” of Harry Potter with the “scholarly discipline of leadership”. Expect illuminating sections on house elves, the personality of Gilderoy Lockhart and Voldemort’s horcruxes.
ChatGPT’s knowledge is limited to events before September 2021, so it would not be able to reference Simha’s Harry Potter book. However, as of March, OpenAI’s far more advanced GPT-4 has been available to students willing to pay $20 (£16) a month to subscribe to ChatGPT Plus.
The more advanced chatbot is able to cite more recent information and would be able to recall Simha’s Leadership Insights for Wizards and Witches.
The University of Bolton’s standards and enhancement office investigation found the student had bought the essay from an essay-writer and concluded the “essay-writer had used ChatGPT in places”.
There are certainly parallels between this student’s essay and The Tab’s own experiment using ChatGPT to submit a university essay to a Russell Group university.
That essay also struggled with referencing and stood out to the lecturer as “fishy” because it failed to include in-text referencing and some of the references were considered too old.
The professor admitted lecturers don’t check every student’s references and so “if you had sneaked in some which seemed plausible”, he would likely not have flagged the essay as suspicious and would have been close to awarding the work a 2:1.
In response to The Tab’s Freedom of Information request, the University of Bolton said it has so far concluded two serious academic misconduct investigations into suspected use of ChatGPT among students, although admitted: “We are aware there are a number of cases which are currently being investigated and the paperwork has not yet reached us.”
In relation to the other investigation, the university found the plagiarism detection software Turnitin had “wrongly flagged” the student’s work, suggesting it was AI-generated in places.
The piece was in fact all the student’s original work and so the investigation was dismissed.
The case of mistaken identity comes as universities continue to question the reliability of Turnitin’s AI detection software.
Turnitin, used by more than 10,000 higher education institutions around the world, including almost every university in the UK, works by producing a similarity report: it compares the submitted assessment against Turnitin’s database, which is made up of billions of web pages, academic journals, and every previous essay submitted through Turnitin.
This similarity report then shows universities to what extent the phrases and sentences used by the student have previously appeared elsewhere. The problem AI chatbots such as ChatGPT pose is that they provide unique answers to the individual user each time.
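To illustrate the idea behind a similarity report (this is a simplified sketch, not Turnitin’s actual algorithm), a submission can be compared against a known source by measuring how many overlapping word sequences, or n-grams, the two texts share. The texts and function names below are invented for the example.

```python
# Illustrative sketch of n-gram similarity matching, the rough idea
# behind plagiarism similarity reports (NOT Turnitin's real algorithm).

def ngrams(text, n=3):
    """Return the set of word n-grams (tuples of n consecutive words)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(submission, source, n=3):
    """Fraction of the submission's n-grams that also appear in the source."""
    sub = ngrams(submission, n)
    src = ngrams(source, n)
    return len(sub & src) / len(sub) if sub else 0.0

original = "leadership styles and theories applied in the workplace today"
copied = "leadership styles and theories applied in a modern office"
print(round(similarity(copied, original), 2))  # prints 0.57
```

A freshly generated ChatGPT answer shares few long word sequences with any stored document, so a score like this stays low – which is exactly why similarity reports struggle with AI-written text.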
At the start of April, Turnitin attempted to address those concerns by launching bespoke detection software which claimed it could show how much of an essay was produced with AI.
The company said it had been working on the software for two years, boasted it could identify AI-generated text with 98 per cent confidence and even sent out a press release saying the “switch has been flipped” on AI detection.
After the software wrongly identified a University of Bolton student as using AI in places, a spokesperson for Turnitin provided The Tab with a more reserved statement about the software’s capabilities.
“Indicators flagging the potential presence of AI-generated text should never be used as the sole basis for assuming misconduct. They should be used to initiate further discussion and, at times, narrow the focus of inquiry between an educator and their student around the submission.
“Turnitin’s AI writing detection capability does not make a determination of misconduct, rather it provides data for the educators to make an informed decision based on their academic and institutional policies, knowledge of their students and the parameters of the assessment. In all cases, the final decision on whether misconduct has occurred rests with the reviewer/educator who knows their student and their work best.
“It is also important to note that AI writing detection focuses on identifying the statistical signatures of AI writing tools beyond just ChatGPT, including some translators and paraphrasing tools that could be permissible under the parameters of the assessment. In those cases, the discretion of the reviewer/educator and their policies on the use of AI tools is critical to the discussion.”