Revealed: These are the universities where the most students are using ChatGPT to cheat
Almost 400 students have been investigated so far
Only one window away on your laptop, the ability to use ChatGPT to quickly sharpen up your essay and tweak a few paragraphs is theoretically very easy. But actually committing the cardinal sin and cheating in your exams is another thing entirely.
The Tab sent more than 130 freedom of information requests to every university in the country to figure out just how many students have taken the plunge and used ChatGPT to cheat.
The results, from the 115 unis that got back to us, show more than 40 per cent of universities have investigated their students for using ChatGPT or a similar AI bot in an assessed piece of work.
Students at the University of Kent have turned to AI bots the most to help them get through their essays and exams.
The Canterbury-based uni has investigated 47 students for suspected use of ChatGPT since the software was released at the start of December.
So far 22 Kent students have been found guilty after being reported to a plagiarism panel. However, there are no signs of the cheating stopping any time soon: 12 investigations are still ongoing.
The uni has been quick not to lay the blame at the door of its students. A spokesperson praised the uni’s own “AI guidance and training” which has “enabled us to identify early misuse of the technology”.
Just behind the University of Kent, Birkbeck, University of London has investigated 41 of its students but has so far upheld fewer than five investigations.
However the uni says this is because ChatGPT is still “new technology”, adding “most of these investigations are still open”.
Leeds Beckett and De Montfort make up the unis with the third and fourth highest number of investigations into AI chatbot cheating.
Both unis are opening new investigations against their students at a rate of approximately one a week.
The UK universities where most students are cheating using ChatGPT:
A spokesperson for the University of Kent said: “We are pleased that our AI guidance and training for staff has enabled them to identify early misuse of the technology. Alongside guidance for staff, we have provided our students with training webinars and guidelines on how to use AI in their studies, including a reminder that presenting AI-generated text or images as their own work constitutes a form of plagiarism, which in itself is a form of academic misconduct.
“Our guidelines include how to use AI responsibly and ethically; when it is appropriate to use AI; when it is not appropriate to use AI; and when it is not recommended to use AI. We will continue to review and monitor the use of AI in education and assessment and update our academic misconduct policies and guidance for students accordingly. We will also update our staff guidance as the technology develops.”
A Leeds Beckett University spokesperson said: “Like other universities, we have been dealing with the rapidly developing situation regarding generative AI tools. Our approach has been to advise students of the risks where unattributed sources are used to complete assignments, and to reiterate good academic practice. We have developed guidance for staff and students, which will remain under review as the AI tools, themselves, become better understood.”