Huge surge in AI misconduct cases recorded at Edinburgh University

AI use now accounts for a third of academic misconduct cases at the university

The number of Edinburgh University students being ‘caught’ for AI use tripled between 2023/4 and 2024/5.

This follows general trends around the country, including over 342 cases of AI-related breaches being upheld at Abertay University in 2024/5.

A Freedom of Information (FOI) request submitted by The Tab Edinburgh reveals that in 2022/3 only 19 of 515 academic misconduct cases involved the use of AI.

In 2023/4 this figure rose to 78 of 817 academic misconduct cases recorded under the AI sub-category.

By 2024/5, 245 of 795 proven cases involved AI use, meaning AI's share of misconduct cases rose by around 21 percentage points in a single year.

A comparison of the 2022/3 and 2024/5 data shows a jump from under four per cent of academic misconduct cases involving AI to over 30 per cent.

The university told The Tab Edinburgh: “The information may not represent all academic misconduct cases involving AI.

“This is because the use of the AI sub-category has become more consistent over time across the university.”

They added: “Some incidents may instead have been recorded under one of the main categories only, such as cheating or plagiarism.”

The university started recording information on academic misconduct cases involving AI in 2022/3, presumably prompted by the launch of free AI tools such as ChatGPT and Gemini.

Only around 1.6 per cent of all Edinburgh University students were involved in a proven case of academic misconduct in 2024/5. Despite this, the data shows a steady increase in the number of cases over the last few years.

Furthermore, the overall number of proven academic misconduct cases has jumped by 280 since 2022/3, despite falling slightly in 2024/5.

Academic misconduct is defined by the university as: “Assessment offences, including making use of unfair means in any university assessment or assisting a student to make use of such unfair means.”

The Code of Student Conduct sets out procedures by which the university deals with allegations of unacceptable behaviour, and serious cases of academic misconduct can be handled under it.

The university has guidance for students and staff on the use of AI to ensure everyone understands how to use AI tools in accordance with good academic and working practices.

They revised their guidance for students on the use of generative AI tools in March 2026.

The guidelines set out acceptable and unacceptable uses of generative and agentic AI in students’ studies.

The university says it recognises that developing skills in the responsible use of AI is important and will likely be significant for students’ future careers.

The guidelines say that presenting AI outputs as your own work, using an AI translator to convert your work into English, and submitting an assessment which includes unacknowledged AI-generated text are all unacceptable.

Using unacknowledged AI-generated images, audio, video, mathematical reasoning or computer code is also prohibited.

Alongside its prohibited uses, the university has listed ways students can use AI to support their studies.

These include brainstorming ideas, checking grammar, overcoming ‘writer’s block’, organising or summarising information and re-formatting sentences.

They also said these guidelines may vary from course to course and advise students to check their course-level guidance.

The university has warned of risks that may come with using generative AI, including ‘cognitive offloading’ – essentially losing the ability to develop your own critical and analytical skills.

They also warn that generative AI models can be inaccurate and biased, and encourage students to use the university’s own generative AI platform, ELM.

ELM was launched in early 2025 and has been marketed as an AI innovation platform “built by the education sector, for the education sector”.

Additionally, the university has advised students to cite AI when it has been used, to avoid any potential for academic misconduct.