Cambridge University approves new rules for educational use of AI

Cambridge agreed to the new AI rules on 4th July, along with 23 other Russell Group universities

The University of Cambridge has signed up to a set of “guiding principles” for the use of AI software in higher education, alongside 23 other Russell Group universities.

The guidelines, published on 4th July, seek to “shape institution and course-level work” and encourage “ethical and responsible use of generative AI.”

According to the principles, adapting to AI is “no different” from the way universities have continually shifted their teaching and assessment practices in response to “new research, technological developments and workforce needs.”

However, Cambridge, with the backing of the principles, will maintain its commitment to “academic rigour” and will work alongside other Russell Group universities to ensure the technology is applied appropriately as it evolves.

“It’s in everyone’s interests that AI choices in education are taken on the basis of clearly understood values,” Chief Executive of the Russell Group, Dr Tim Bradshaw, said in the statement.

This announcement arrives shortly after the UK Government opened a consultation on the use of AI in English education.

It also addresses concerns previously raised at the university about AI’s place in higher education and assessment, given the risk of cheating.

A few months ago, the Department of Modern and Medieval Languages and Linguistics warned students that the use of AI platforms such as ChatGPT is considered a form of “academic misconduct” and could lead to sanctions under the University’s disciplinary procedures.

In the email, sent to students in May, deputy faculty manager Jemma Jones wrote that, while the department understood chatbots like ChatGPT were “new tools being used across the world”, it urged students to be “very wary.”

She also described the accuracy of their content as “questionable” and urged students to be mindful of the ethical concerns the technology raises regarding the protection of privacy.

A screenshot of an email sent to students on 5th May

These concerns are reflected in the recently published principles, which discuss “privacy and data concerns”, the “potential for bias” and “ethics codes”, among other issues.

Although the technology “has not been banned”, students must still “be the authors of their own work.” If this is not the case, the university has stated that students risk being investigated for academic misconduct.

A month ago, The Tab revealed that the University of Cambridge had not yet investigated a single student for using ChatGPT or similar generative AI to cheat in exams.

This is despite a Tab survey of 540 Cambridge students finding that almost half (49 per cent) admitted to using the AI chatbot to help complete work for their degree.

Prior to the Russell Group statement, Cambridge’s Pro-Vice-Chancellor for Education, Prof Bhaskar Vira, shared a similar view when he told Varsity that bans on the technology are not “sensible.”

He stated that we ought to recognise AI as “a new tool that is available”, but that we must also “adapt our learning, teaching and examination processes so that we can continue to have integrity” when applying the software to education.

“Adaptation” and “integrity” were just two of many themes discussed in the Russell Group’s statement.

Featured image credit: Logan Green