Science & Tech

Sora - all the warnings from experts about new video AI generator


Experts have issued warnings about OpenAI’s video generator Sora, suggesting it has the potential to pose a serious security risk.

Artificial intelligence is one of the most sophisticated and arguably unsettling technologies to emerge in recent years, with capabilities that could put jobs and even university degrees at risk.

Sora is a text-to-video tool designed by AI company OpenAI that allows users to type in a text prompt and have video generated by the technology.

However, experts have warned that its realism is a security concern, because it could be used to create convincing deepfakes (manipulated videos in which a person appears to say or do things they never did) that can easily deceive people.

Sora can generate videos up to 60 seconds long based solely on a user’s prompt, which can be text alone or text combined with an image.

Sora’s output can still contain obvious mistakes, such as a cat with three front legs, but experts warn the technology is driving us closer to a world in which it is difficult to tell real videos from fake ones.

Arvind Narayanan of Princeton University cautioned that, in the long run, “we will need to find other ways to adapt as a society”.

As another expert pointed out, the prevalence of AI-generated content online puts an additional burden on people to distinguish fact from fiction.

Tony Elkins, a writer at Poynter and a founding member of the News Product Alliance, said: “Now we have to ask what’s real. We have to do that for photos. We have to do that for text. We’re gonna have to do it for videos. And it creates so much responsibility on the consumer that was never there before.”

On its website, OpenAI has addressed the issue of safety regarding Sora.

The company website says: “We’ll be taking several important safety steps ahead of making Sora available in OpenAI’s products. We are working with red teamers — domain experts in areas like misinformation, hateful content and bias — who will be adversarially testing the model.”

It also says it plans to use some of the same safety features already in place for its image generator, DALL·E, which ensure text prompts don’t break rules against “extreme violence, sexual content, hateful imagery, celebrity likeness, or the IP of others”.

However, some in the industry have pointed out that AI is trained on human-generated data, meaning it will likely reflect the same biases human beings have.

“There are biases in society and those biases will be reflected in these systems,” Kristian Hammond, a professor of computer science at Northwestern University, said.
