Podcast Episode
AI-Generated Fakes Overwhelm Social Media During Iran War
March 15, 2026
A flood of AI-generated images and videos depicting fabricated scenes from the Iran war has swept across social media platforms, with researchers identifying over one hundred and ten unique fake posts amassing millions of views. Experts say the synthetic content serves as a strategic propaganda tool, while platforms struggle to moderate the unprecedented volume.
A New Era of Wartime Disinformation
The war in Iran has become the first major military conflict to face an industrial-scale onslaught of AI-generated misinformation. In just the first two weeks of fighting, researchers identified more than one hundred and ten unique AI-fabricated posts across X, TikTok, and Facebook, collectively racking up millions of views.

The fakes range from videos showing missiles striking Tel Aviv to fabricated satellite imagery of destroyed American military bases and images of captured soldiers, none of which depict real events. The volume far exceeds anything seen during previous conflicts, including the early days of the war in Ukraine.
Strategic Propaganda Tool
Experts say much of the content promotes pro-Iranian narratives, depicting exaggerated destruction in Gulf states and Israel to suggest the conflict is more costly for the United States and its allies than it actually is. Analysts at the Brookings Institution have described the AI fakes as a deliberate tool of war that Tehran is actively exploiting.

Platforms Under Pressure
Social media companies are struggling to respond. An investigation found that Grok, the AI chatbot on X, has not only failed to detect fakes but has actively generated its own synthetic war imagery when asked to verify content. X announced that creators posting unlabelled AI-generated conflict videos will face a ninety-day suspension from its revenue-sharing programme. Meanwhile, Meta's Oversight Board called current labelling methods insufficient for handling the scale and speed of AI-generated misinformation during crises.

Detection Challenges
Fact-checkers have identified common markers including garbled text, impossibly symmetrical blast patterns, and gibberish coordinates on satellite images. Tools like SynthID and Hive Moderation have helped flag content, but experts warn that generation technology has reached a point where most people cannot distinguish fakes from reality.

Published March 15, 2026 at 2:12am