The dangers of artificial intelligence (AI) spreading harmful misinformation were made clear once again on Monday, when local fire service officials were forced to confirm there had not been an ‘explosion’ at the Pentagon in Virginia – despite an AI-generated image circulating online that appeared to show one.
The main image, shared by several fake accounts with paid-for blue tick ‘verification’, depicts black smoke billowing close to the Pentagon building, while a second shows smoke next to the US defence department headquarters from a distance.
Stating that the reports were unfounded, the Arlington Fire and Emergency Medical Services Twitter account wrote: “[The Pentagon Force Protection Agency] and the ACFD [Arlington County Fire Department] are aware of a social media report circulating online about an explosion near the Pentagon.
“There is NO explosion or incident taking place at or near the Pentagon reservation, and there is no immediate danger or hazards to the public.”
Fortunately, while paid-for blue tick accounts have been promised greater promotion on Twitter, a search for ‘Pentagon’ on the social media network brings up a string of tweets from ‘unverified’ accounts debunking the AI image:
“Prime example of the dangers in the pay-to-verify system: This account, which tweeted a (very likely AI-generated) photo of a (fake) story about an explosion at the Pentagon, looks at first glance like a legit Bloomberg news feed.” — Andy Campbell

“Confident that this picture claiming to show an ‘explosion near the pentagon’ is AI generated. Check out the frontage of the building, and the way the fence melds into the crowd barriers. There's also no other images, videos or people posting as first hand witnesses.” — Nick Waters

“This morning, an AI generated image of an explosion at the US Pentagon surfaced. With multiple news sources reporting it as real, the S&P 500 fell 30 points in minutes. This resulted in a $500 billion market cap swing on a fake image. It then rebounded once the image was…” — The Kobeissi Letter

“So @republic aired a ‘Live & Breaking’ news of Pentagon explosion image. They even invited Prof. Madhav Nalapat ‘strategic expert’ to discuss about the explosion. BWT, It was an AI generated image.” — Mohammed Zubair

“This morning blue check accounts claimed large explosions at the Pentagon. ... then the White House. Russian state media amplified the faked Pentagon image from their gold check account. The images look AI generated, as folks identified. 1/” — John Scott-Railton

“I'm sure if there was a large explosion near the White House, ‘Altcoin Gordon’ would be the first person to tell you about it. This is an AI-generated fake, much like the one earlier about the Pentagon.” — Shayan Sardarizadeh

“A fake Bloomberg account with a Verified Blue Check on Twitter posted an apparently AI-generated picture of an explosion at the Pentagon this morning. There was no explosion at the Pentagon, but the stock market still dipped. https://t.co/RCWKQwjgU1” — Ben Collins
Stocks reportedly tanked following the fake image, and it isn’t the first time that’s happened, either. The pharmaceutical company Eli Lilly saw its share price drop suddenly last year after an imposter account tweeted “insulin is free”.
And if tricksters aren’t targeting the Pentagon with AI-generated imagery, then they’re using the software to create pictures of the Pope in a puffer jacket and former US President Donald Trump being arrested.
In fact, it was only last month that Alexandria Ocasio-Cortez, the New York representative, warned of “major potential harm” at the hands of fake AI images.
“Jokes aside, this is setting the stage for major potential harm when a natural disaster hits and no one knows what agencies, reporters, or outlets are real.
“Not long ago we had major flash floods. We had to mobilize trusted info fast to save lives. Today just made that harder,” she said.
It seems we’re there already…