This viral AI image of an 'explosion' near the Pentagon never happened

The dangers of artificial intelligence (AI) spreading harmful misinformation were made clear once again on Monday, when local fire service officials were forced to confirm there had been no 'explosion' at the Pentagon in Virginia, despite an AI-generated image suggesting otherwise.

The main image, shared by several fake accounts bearing blue tick 'verification', depicts black smoke billowing close to the Pentagon building, while another shows a distant view of smoke beside the US defence department headquarters.

Stating that the reports are unfounded, the Arlington Fire and Emergency Medical Services Twitter account wrote: “[The Pentagon Force Protection Agency] and the ACFD [Arlington County Fire Department] are aware of a social media report circulating online about an explosion near the Pentagon.

“There is NO explosion or incident taking place at or near the Pentagon reservation, and there is no immediate danger or hazards to the public.”

Fortunately, while paid-for blue tick accounts have been promised greater promotion on Twitter, a search for 'Pentagon' on the social media network brings up a string of tweets from 'unverified' accounts debunking the AI image.

Stocks reportedly dipped following the fake image, and it isn't the first time that's happened, either. The pharmaceutical company Eli Lilly saw its share price suddenly drop last year after an imposter account tweeted "insulin is free".

And if tricksters aren’t targeting the Pentagon with AI-generated imagery, then they’re using the software to create pictures of the Pope in a puffer jacket and former US President Donald Trump being arrested.

In fact, it was only last month that Alexandria Ocasio-Cortez, the New York representative, warned of the "major potential harm" fake AI images could cause.

“Jokes aside, this is setting the stage for major potential harm when a natural disaster hits and no one knows what agencies, reporters, or outlets are real.

“Not long ago we had major flash floods. We had to mobilize trusted info fast to save lives. Today just made that harder,” she said.

It seems we’re there already…
