University of Stirling student used artificial intelligence to stalk McDonald’s employee
‘I didn’t do anything which I needed to apologise for. I only sent her two messages’
A student from the University of Stirling used artificial intelligence to stalk a McDonald’s employee.
The 27-year-old Master's student from Rutherglen, Lanarkshire, has been found guilty of stalking Caitlin Smith, using artificial intelligence to assist his behaviour.
Farhan Ali, who studies business management and is also a Just Eat driver, tracked the victim between 3rd February and 23rd February 2024, leveraging modern communication tools and even AI-generated messages to pursue her.
Glasgow Sheriff Court heard how Ali initially approached Caitlin late at night in the restaurant’s car park, asking for her contact details whilst she was on shift. After being ignored and blocked by Caitlin on social media platforms such as Snapchat and Instagram, he persisted by visiting the McDonald’s location and ordering a milkshake from her.
Despite apologising via Instagram, Ali claimed he didn’t believe he had done anything wrong. Remarkably, he told the court that the apology was suggested by ChatGPT, which he used after inputting details of his interactions with the victim. Ali’s use of AI in this context raises ethical concerns regarding the influence and misuse of emerging technologies in personal and legal situations.
According to the Daily Mail, CCTV footage presented during the trial showed Ali approaching Caitlin at 11pm whilst she was emptying bins outside the restaurant. In her testimony, the victim expressed fear, admitting that the defendant had told her he had been “watching her at work.”
Ali, however, disputed her account, claiming she smiled at him and later added him on Snapchat. He testified that he sent a polite "Thank you" message before following up with: "I would like to take you out for a coffee sometime, no pressure, just let me know if you are up for it." Both messages went unread.
Prosecutor Redmond Harris confronted Ali during his cross-examination, suggesting that he had attempted to “engineer” interactions with Caitlin, despite her clear disinterest. The defendant replied: “I cannot comment on that, I was working that day and just wanted to buy a milkshake for myself,” while maintaining that he did not intend to make any further contact.
The court learned that after being blocked on Snapchat and Instagram, Ali looked up Caitlin again and sent a third message, which read: “Hey, I hope you are well, I understand that my messages may make you feel uncomfortable.” He proceeded to ask for a meeting once more.
During his defence, Ali explained that he used ChatGPT to draft the third message. He admitted to entering the specifics of his situation into the AI chatbot, which suggested an apology. He said: “I didn’t do anything which I needed to apologise for. I only sent her two messages prior to this and this was the third message.”
This prompted Mr Harris to ask: "You did apologise, are you saying that ChatGPT told you to?"
He replied in his testimony: “That was the message suggested by ChatGPT, and I thought it would be better if I do this.”
Whilst AI-generated content is typically neutral in itself, the court questioned Ali's use of the technology in what was deemed an invasive act towards the victim.
The trial also heard allegations that he followed Caitlin in his car, which he denied, citing insurance issues that prevented him from driving at the time.
Sheriff Anthony Deutsch deferred sentencing until next month, pending background reports.