A quarter of Russell Group unis are investigating students for using ChatGPT in assessments

York has already found five students guilty


When Olivia is writing an essay, rather than getting her flatmate to check over it or sending it to her parents, she turns to ChatGPT.

“If I’d written a paragraph for example, I would use it to improve a sentence I’d already written,” the second year University of Manchester student said. “I’ve definitely used it for making something sound better or taking certain sentences from it.”

University campuses have been rocked by the introduction of the free chatbot. In March, The Tab revealed students and staff visited ChatGPT’s website more than a million times in the first two months since it launched.

Today, figures obtained by The Tab show a quarter of Russell Group universities have launched investigations into the suspected use of AI chatbots by their students.

Responses to freedom of information requests sent to 23 of the 24 Russell Group institutions reveal that six prestigious universities have investigated their students, and at least three have so far found students guilty of the academic offence.

The University of York wrote to students in March to make it clear that “use of generative AI in current assessments could constitute academic misconduct”.

However, since December 2022, it has investigated 20 students for the suspected use of generative AI in coursework, essays and exams, and has so far upheld five of those investigations. Many others are likely still ongoing.

The university said the figures show “our guidance has been effective in helping identify, monitor, and review cases that staff and students are concerned about in relation to this technology”.

A spokesperson added: “We also think it’s important, however, to harness advances in artificial intelligence in order to explore ways in which it can be used appropriately to make learning experiences more rewarding and relevant.”

While the university may want to develop “appropriate use”, it has not held back in disciplining students who have been caught using AI chatbots to plagiarise in their assessments.

While one student was given a mark of 29 (the university pass mark is 40), another was dealt a bigger blow to their overall average grade when they were given a mark of zero.

One student was also told they had to attend an academic integrity tutorial.

Another university which has chosen to make an example of ChatGPT cheaters is Glasgow.

The university has so far investigated four students and upheld all four investigations. It said decisions are made on the balance of probabilities, taking into account further evidence submitted by staff or an admission from the student.

One student was given a “Grade H” by the university – the lowest possible grade. The university describes a piece of work which has received this grade as having “no convincing evidence of attainment of ILOs (intended learning outcomes)”. To rub salt into the wound, the student was given no opportunity to resit the assessment.

Dylan is in her final year of study at the University of Glasgow. She admitted: “I definitely know a lot of people using it. Most of the time everyone’s tried it at least once or twice for various things.”

However, she said the “stigma” around cheating at the top Scottish university meant: “I don’t actually know the extent to which people are using it to cheat.”

Newcastle and Leeds also revealed they have launched cases against students for using AI chatbots; however, both unis said the total number of investigations, and of those upheld, is fewer than five. To protect the anonymity of those students, they would not reveal the precise figures.

In London, both LSE and UCL have acted on their suspicions. The London School of Economics would not reveal its precise figure either, saying only that the number was fewer than five.

UCL, meanwhile, explained that while it has investigated students, it has yet to uphold any investigation because “it could not be determined whether AI was used on a balance of probabilities basis”.

Universities are struggling to prove that a student has used ChatGPT in an assessment, with existing AI detection software failing to provide conclusive enough evidence. Numerous Russell Group universities have expressed doubts over Turnitin’s AI detection software, which was launched at the start of April and promised to “flip the switch”.

Dr Andres Guadamuz, a reader in intellectual property law at the University of Sussex, told The Tab: “I can’t afford for it to be wrong as a marker, I don’t feel confident in accusing someone or giving someone a mark which potentially can influence someone’s life.”

His thoughts were echoed by Dr Richard Harvey, a professor of computer science at the University of East Anglia. He explained: “When you are doing plagiarism cases, you’ve always got to balance the cost of a false accusation against the catching of everyone.

“Once you start thinking about it deeply you realise you are going to have to let some people get away with it in order to not destroy a number of people who are innocent just as the legal system does.”

Olivia and Dylan’s names have been changed to protect their identities.
