
OpenAI's Sora makes disinformation extremely easy to create with realistic AI clips

The app, called Sora, requires just a text prompt to create almost any footage a user can dream up



Tiffany Hsu, Stuart A Thompson & Steven Lee Myers | NYT
 
In its first three days, users of a new app from OpenAI deployed artificial intelligence to create strikingly realistic videos of ballot fraud, immigration arrests, protests, crimes and attacks on city streets — none of which took place. 
The app, called Sora, requires just a text prompt to create almost any footage a user can dream up. Users can also upload images of themselves, allowing their likeness and voice to be incorporated into imaginary scenes.
Sora and similar tools, such as Google’s Veo 3, could become increasingly fertile breeding grounds for disinformation and abuse, experts said. While worries about AI’s ability to enable misleading content and outright fabrications have risen steadily in recent years, Sora’s advances underscore just how much easier such content is to produce, and how much more convincing it is.
 
OpenAI has said it released the app after extensive safety testing, and experts noted that the company had made an effort to include guardrails. 
In tests by The New York Times, the app refused to generate imagery of famous people who had not given their permission and declined prompts that asked for graphic violence. It also denied some prompts asking for political content. 
The safeguards, however, were not foolproof. 
Sora, which is currently accessible only through an invitation from an existing user, does not require users to verify their accounts — meaning they may be able to sign up with a name and profile image that is not theirs. The app will generate content involving children without issue, as well as content featuring long-dead public figures such as the Rev. Dr. Martin Luther King Jr. and Michael Jackson.
The app would not produce videos of President Trump or other world leaders. But when asked to create a political rally with attendees wearing “blue and holding signs about rights and freedoms,” Sora produced a video featuring the unmistakable voice of former President Barack Obama.
Until recently, videos were reasonably reliable as evidence of actual events, even after it became easy to edit photographs and text in realistic ways. Sora’s high-quality video, however, raises the risk that viewers will lose all trust in what they see, experts said. Sora videos feature a moving watermark identifying them as AI creations, but experts said such marks could be edited out with some effort. 
“It was somewhat hard to fake, and now that final bastion is dying,” said Lucas Hansen, a founder of CivAI, a nonprofit that studies the abilities and dangers of artificial intelligence. “There is almost no digital content that can be used to prove that anything in particular happened.” 
Such an effect is known as the liar’s dividend: the prospect that increasingly high-caliber AI videos will allow people to dismiss authentic content as fake.
Imagery presented in a fast-moving scroll, as it is on Sora, is conducive to quick impressions but not rigorous fact-checking, experts said. They said the app was capable of generating videos that could spread propaganda and present sham evidence that lent credence to conspiracy theories, implicated innocent people in crimes or inflamed volatile situations.
Although the app refused to create images of violence, it willingly depicted convenience store robberies and home intrusions captured on doorbell cameras. Fake and outdated footage has circulated on social media in all recent wars, but the app raises the prospect that such content could be tailor-made and delivered by perceptive algorithms to receptive audiences. 
 
 
“Now I’m getting really, really great videos that reinforce my beliefs, even though they’re false, but you’re never going to see them because they were never delivered to you,” said Kristian J. Hammond, a professor who runs the Center for Advancing Safety of Machine Intelligence at Northwestern University. “The whole notion of separated, balkanized realities, we already have, but this just amplifies it.” 
Hany Farid, a professor of computer science at the University of California, Berkeley, said Sora was “part of a continuum” that had only accelerated since Google unveiled its Veo 3 video generator in May.
Even he, an expert whose company is devoted to spotting fabricated images, now struggles at first glance to distinguish real from fake, Dr. Farid said. 
“A year ago, more or less, when I would look at it, I would know, and then I would run my analysis to confirm my visual analysis,” he said. “And I could do that because I look at these things all day long and I sort of knew where the artifacts were. I can’t do that anymore.”
 
©2025 The New York Times News Service


First Published: Oct 03 2025 | 10:24 PM IST
