Concerns as neurodivergent students unfairly accused of AI usage in Uni of York assignments

Students say there needs to be more awareness of how these policies affect different groups

Concerns are growing among students at the University of York that neurodivergent learners are being unfairly accused of using AI in their work.

As universities crack down on tools like ChatGPT, new detection methods and stricter policies are being introduced. But some students say these systems don’t take into account how differently people naturally write.

Detection methods may flag certain writing styles as “suspicious”, and students say the experience of being accused can be stressful and damaging.

Why some students are being flagged

AI detection, whether formal software or informal marking concerns, often focuses on patterns in writing such as sentence structure, vocabulary and consistency of tone.

These tools and judgements can sometimes flag work as suspicious if it appears overly structured, unusually formal, or inconsistent in style.

However, many students say this can overlap with how some neurodivergent people write.

One student, Billie*, explained how he felt his writing style may have contributed to the concern, as he often edits his work heavily to make it sound more academic.

Differences in language use, tone shifts, or attempts to sound more academic can all be misinterpreted as signs of AI use, even when the work is entirely original.

Concerns over how writing is judged

Some students say the possibility of being flagged has changed how they approach assignments.

Billie said the experience left him feeling anxious about submitting future work, as he became more aware of how his writing might be perceived.

Another student told Nouse that concerns around AI detection have made them more aware, and sometimes more anxious, about how their work might be perceived.

There is also a sense that certain writing styles are seen as safer than others, even if they don’t reflect how students would naturally express their ideas.

For neurodivergent students in particular, this can create additional pressure when completing coursework.

A wider rise in AI concerns

The issue comes as universities across the UK report a rise in AI-related academic misconduct cases.

Institutions are still developing how they handle AI use in assessments, with policies evolving as technology becomes more widely used.

While universities emphasise the importance of academic integrity, students say there needs to be more awareness of how these policies affect different groups.

A spokesperson for the University of York said investigations into academic misconduct are not based on AI detection tools alone.

They said: “Investigations are a human-led process, which involve looking at multiple factors, including a discussion with the student about their research and writing process.”

The spokesperson also acknowledged the limitations of detection software itself.

“Current AI detection tools are not considered to be fully reliable and the university wouldn’t use a report from one as the only piece of evidence in an investigation into academic misconduct,” the spokesperson said.

Concerns about fairness

As AI continues to shape how students work and how universities assess that work, the question remains of how to balance academic integrity with recognising different ways of thinking, writing and learning.

There are growing calls for universities to involve neurodivergent and disabled students when developing assessment policies, to make sure they are fair and inclusive.