NUS prof lashes out at class for using ChatGPT in AI video: Did you fall for it too?
"AI (artificial intelligence) videos are scary."
Aubrey Lim, who works in the tourism industry, does not remember the account behind the first AI video she 'fell' for, but its contents are etched in her mind.
The video showed a cat at a generic train station in Japan, coaxed out of the station by a guard—catnip for cat lovers like herself. She didn't think twice about it, until she navigated to the comments and saw several users flagging it as AI-generated.
Upon realising what she had just watched was AI, the 38-year-old immediately indicated on Instagram that she was 'Not Interested' in seeing such content.
"These videos are getting way too realistic," Ms Lim lamented.
There's been much discussion over the malicious use of AI to create deepfakes. In some cases, the technology has been used to spread misinformation: Senior Minister Lee Hsien Loong, Prime Minister Lawrence Wong and former President Halimah Yacob have all been deepfaked in previous years.
Just last November, the police investigated deepfake nude photos of female students at the Singapore Sports School, created and shared by other students.
Less discussed is the proliferation of AI-generated videos depicting hyper-realistic, everyday scenarios, such as the cat videos Ms Lim watched.
"What's the point of doing this?" asked Ms Lim.
From cute cats to rage bait
While videos of cats boarding trains or gathering in a circle at a Yishun void deck may seem innocuous—wholesome, humorous, and cute—others are designed to provoke stronger reactions.

Recently, an AI-generated video purportedly showed a National University of Singapore professor ranting at a class for using ChatGPT.
A similar video posted by TikTok user @Attap.Kia, which "depicts" a professor lashing out at his students for relying on ChatGPT for their assignments, has garnered at least 6.3 million views, 214,600 likes, 8,356 comments and 22,200 saves.
The video, which was labelled as AI-generated by the account, evoked many emotional responses.
Some commenters voiced support for the lecturer, while others offered advice on how students can use AI to assist them in their schoolwork without over-relying on it. Notably, an overwhelming number of comments were written in Malay, suggesting the video had made waves beyond our shores.
"I did not expect the professor video to go viral, especially not in Malaysia," Attap Kia, who wished only to be known by their TikTok handle, told Stomp.
Yet, the content creator was not particularly surprised by the huge reactions. "When topics touch on issues close to people's hearts, it's natural for them to engage passionately," Attap Kia explained.
Billy Heng, 31, who works in video creation and social media, affirmed this view.
In October, Mr Heng posted a video about getting rage baited by an AI clip showing a woman ranting that most hawker centres are "selling the same food". In his video, he explained how he was angered by the woman's "bad take" and had wanted to "correct" it.

He soon realised that the account, @The HDB Life, had marked it as AI-generated content.
Many AI content creators clearly label their videos. Some even retain the watermark from the AI software used, making it obvious the footage isn't real.
When asked why he and others continue to 'fall prey' to such videos, Mr Heng told Stomp: "People keep falling for them as the tag at the bottom of the video is quite hard to notice when mindlessly scrolling, coupled with the realism in visuals and voice tonalities."
Beyond where the label is located on one's screen, Dr Elmie Nekmat, Associate Professor at the National University of Singapore, also suggested that the term 'AI-generated' itself is "ambiguous".
Dr Elmie explained that the term can mean anything from AI-editing and AI-stylising to fully AI-created and AI-posted content. This ambiguity often leads viewers to ignore the label rather than pausing to understand which parts were AI-produced.
Both Mr Heng and Ms Lim shared that they are now more cautious when engaging with AI content. Mr Heng goes further, saying he would "never" engage with such content, as interacting with a robot serves "no purpose" to his "existence".

Just for fun?
Stomp reached out to several AI content creators, including The HDB Life, @elonmuskgives and @daily_mrt_. Attap Kia was the only creator who responded.
"I've always wanted to explore content creation, and AI tools have made that much more accessible," shared Attap Kia.
The content creator sees their videos as a way to "move beyond realism into short fictional stories or micro-dramas" and professes "(an interest) in using these tools to tell stories that reflect life in Singapore and the issues people care about".
To Attap Kia, these videos are satirical storytelling meant to entertain, with humour forming "the core" of their work.
"While social commentary naturally arises from the scenarios I create, I leave much of that discussion to the audience and social media engagement."
Attap Kia admitted that people sometimes mistake these videos for reality, but insists all content is clearly tagged as AI-generated.
"In some cases, I exaggerate elements or add captions to highlight that the content is inspired by real issues but not actual footage," Attap Kia said.
Mr Heng remains baffled by why creators would "create such nonsense".
"I cannot wrap my head around why someone would willingly choose to generate 5 to 10 videos a day for essentially no use to society," the 31-year-old said.
Where the law draws the line
AI videos like Attap Kia's may not appear to break the law, but that doesn't absolve creators of legal responsibility.
Zhen Guang Lam, lawyer at Clyde & Co, told Stomp: "If a video makes it look like a real person said or did something they never did, that could amount to defamation."
The lawyer added that the Defamation Act, Copyright Act and Protection from Online Falsehoods and Manipulation Act, as well as voluntary frameworks such as the Model AI Governance Framework, help manage AI content risks.
That said, gaps "arguably" exist.
Mr Lam pointed out there's no comprehensive legislation specifically targeting synthetic media or deepfakes, and questions on liability continue to arise when AI content is harmful.
In addition, governance frameworks provide guidance and aren't strictly legally binding, leaving enforceability and accountability "potentially uncertain".
The lawyer shared some tips for creators who produce AI-generated satire or commentary:
- Don't present false facts about real people
- Be careful with copyrighted material—get permission or use originals
- Don't make it look like someone endorses your work without consent
- Avoid harassment or doxxing
- Exercise extra caution during elections
- Label content as AI-generated, keep creative records, and follow privacy rules if using real people's data
Mr Lam warned that these measures don't eliminate all legal risks, but demonstrate "responsible intent" and help "narrow exposure" under current laws.
Staying safe in the age of AI content
Mr Heng acknowledged AI videos as a form of "creative expression" but feels the line is crossed when videos "send certain harmful messages that are not true", such as those inflaming racial tensions or impersonating officials.
While he isn't a strong advocate for flagging content, he believes social media platforms should regulate AI content so it doesn't "run wild".
Attap Kia agreed: "I support stronger labelling of AI-generated content to promote transparency and help audiences understand what they are watching."
However, Attap Kia cautioned that prominent watermarks may add "clutter" to videos, which can "interfere" with visual appeal and storytelling.
Omar Dapul, chief executive officer of Deepfaic, a Singapore-based company that develops tools for businesses to detect AI threats, says that AI has advanced to the point where some videos can't be detected by deepfake detection platforms, including Deepfaic's own.
"For motivated actors with enough resources, they can definitely generate these sorts of videos already at scale," said Mr Dapul.
According to him, it's "now much harder" to tell what's real and what's fake. He affirmed the need for regulations that penalise malicious AI use, citing Denmark's move to grant people copyright over their own likenesses as an example of how authorities can clamp down on deepfakes.
That said, when it comes to daily social media consumption, he advocates for individuals to adopt a "critical mindset" as their "best defence".
Social media users should understand that "the primary goal of social media is engagement", not necessarily truth. They should be mindful of what they consume and aware of their cognitive biases.
As Dr Elmie shared, "People are predisposed to biases... that tend to make them pay more attention to negative, threatening, emotional as well as strange and unconventional information."
"Content can thus be created using AI to exploit such biases to provoke a sense of fear, injustice, disgust, and outrage in viewers in more extreme ways."
Ultimately, he feels that such AI videos are here to stay, because the tools are cheap or free, social platforms reward high-arousal content, and creators now rely on AI to produce material quickly.
"It will not only grow, but might even become a core feature of the online media environment that all of us must be prepared to adapt to as fast and safely as possible."

