The image is stark: dozens of freshly dug graves, aligned in neat rows, with more marked out for immediate excavation. This devastating scene, captured in the cemetery of Minab, Iran, depicts the preparations for burying over 100 young girls—a brutal testament to the civilian toll of the US-Israeli war on Iran. A powerful, authentic image that underscores immense human suffering.
So, when you ask leading AI chatbots like Google's Gemini or X's Grok about this image, what do they say? Prepare for a chilling encounter with "AI slop"—a deluge of hallucinated facts, nonsensical analysis, and outright fabrications that are increasingly engulfing global news, particularly coverage of major conflicts.
AI Chatbots Spreading Dangerous Misinformation About Global Conflicts
Ask Gemini about the Minab graves, and it confidently asserts the photograph is from the 2023 earthquake in Turkey, taken more than 2,000 km away. Its "specific aerial perspective became one of the most widely shared images of the disaster," Gemini states, illustrating "the sheer scale of the loss." And it is completely wrong.
X's Grok offers a different, equally false narrative. It assures users the image is a 2021 stock photo from a COVID mass burial site in Jakarta, Indonesia. Not Minab, not war, just a "stock photo." Both AIs present their findings with an air of absolute certainty, even providing what they claim are sources. But follow those digital breadcrumbs, and you hit dead ends—non-existent reports or irrelevant links. For all their algorithmic swagger, these AIs are simply, fundamentally incorrect.
Why This AI-Generated "Slop" Matters for Reliable News
The Minab image, verified through satellite imagery and cross-referenced with dozens of other angles and video footage, is undeniably authentic. It shows no signs of tampering. Yet, the rapid-fire "fact-checks" from Gemini and Grok are not isolated incidents; they represent a growing wave of AI-generated misinformation that is actively undermining real investigative work and sowing doubt where truth is paramount.
Shayan Sardarizadeh, a senior journalist with the BBC Verify team, reports a dramatic shift: whereas early misinformation during conflicts like the Gaza and Ukraine wars often involved old or repurposed video game footage, "nearly half, if not more, of all the viral falsehoods that we now track and debunk are generative AI." This isn't just about embarrassing errors; it's about a systematic assault on verifiable information.
Consider these chilling examples:
- A photo, presented by the Tehran Times as satellite imagery of a destroyed US radar in Qatar, was exposed as an AI fake. Its giveaway? Cars in identical positions to an old Google Earth image.
- Widely circulated images of Khamenei's body supposedly pulled from rubble featured telling "duplicate limbs" among rescuers.
- Grok inaccurately claimed video footage of fires in Tehran was from Los Angeles in 2017.
- AI "analysis" was cited to misidentify a missile falling near a Minab school, despite munitions experts and fragments at the scene confirming it as a US Tomahawk.
The Alarming Rise of Generative AI in Misinformation Campaigns
The surge in AI-generated "slop" is driven by two key factors. The first is the ease with which anyone can now create hyper-realistic fake videos or photos. The second, and more consequential, is that people increasingly rely on AI to summarize news and answer questions rather than seeking out original sources. Google's AI summaries and Grok, widely rolled out in mid-2024, have become ubiquitous: 65% of people now regularly see AI summaries, and the number of people using generative AI for information has doubled in the past year.
This reliance is a ticking time bomb. A 2025 international study found that roughly half of all AI-generated summaries contained at least one significant sourcing or accuracy issue. For some tools, like Google's popular Gemini interface, that figure soared to a staggering 76%.
Why AI Hallucinates: The Truth About LLMs and "Truth Boxes"
The core problem lies in how Large Language Models (LLMs) like Grok, ChatGPT, and Gemini actually function. They are, at their basic level, probabilistic language models. They construct sentences by predicting the most likely next word, creating highly convincing, authoritative-sounding text. However, this process does not involve genuine analysis or an understanding of truth.
"AI is perceived as an omniscient entity with access to everything, but without emotions," explains Tal Hagin, an open-source intelligence analyst. "What you are using is actually a very advanced probability machine, not a truth box."
When challenged on the Minab photograph, Gemini initially offered another incorrect location and year (Gaza, November 2023). Pressed again, it revised its answer to Tehran during the Covid pandemic. When explicitly told the photo was from Iran in 2025, it changed its story yet again, citing an earthquake in southern Iran. This string of confident but utterly false revisions highlights a critical flaw: AI doesn't learn from correction in the human sense; it merely recalculates probabilities to generate a new plausible-sounding lie.
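To make that mechanic concrete, here is a deliberately tiny, hypothetical sketch of next-word prediction in Python. It is not the code behind Gemini or Grok, and the probability table, the `NEXT_WORD_PROBS` name, and the helper functions are all invented for illustration; real systems use neural networks trained on vast corpora. But it captures the "probability machine" Hagin describes: each word is chosen because it is statistically likely to follow the previous ones, with no step that checks whether the resulting sentence is true.

```python
import random

# A toy "language model": invented probabilities for the next word given the
# last few words, standing in for the statistics a real model learns from text.
# Nothing in this table encodes whether a continuation is factually correct.
NEXT_WORD_PROBS = {
    ("the", "image", "shows"): {"a": 0.6, "an": 0.4},
    ("image", "shows", "a"): {"mass": 0.4, "2023": 0.35, "stock": 0.25},
    ("shows", "a", "mass"): {"burial": 0.7, "grave": 0.3},
}

def sample_next(context, rng):
    """Pick the next word in proportion to its estimated probability.

    The choice depends only on how often words followed this context in the
    (here, invented) training data, never on whether the claim is true.
    """
    dist = NEXT_WORD_PROBS.get(tuple(context[-3:]), {"<end>": 1.0})
    words, weights = zip(*dist.items())
    return rng.choices(words, weights=weights, k=1)[0]

def generate(prompt, max_words=6, seed=1):
    """Extend the prompt one sampled word at a time until the chain runs out."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(max_words):
        nxt = sample_next(words, rng)
        if nxt == "<end>":
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the image shows"))
# The printed sentence reads fluently and confidently, but it is assembled
# purely from word co-occurrence probabilities, not from any knowledge of
# what the image actually depicts.
```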
The Real-World Impact: Eroding Trust and Denying Atrocities
For open-source investigators like Chris Osieck, who researches bombings that cause civilian casualties, this tidal wave of AI-generated falsehoods means precious time is wasted debunking digital fakes rather than documenting real-world atrocities. "That time should be devoted to what matters most: reporting on the impact this brutal war has on the people caught in the crossfire," he says.
Even more alarmingly, this "AI slop" risks blurring the lines between truth and fiction to such an extent that genuine evidence of human rights abuses or war crimes might be dismissed as fake. Sardarizadeh warns, "As the technology continues to get better, it could muddy the waters so much that videos and images of real atrocities get dismissed as fake or AI." He's already witnessed this phenomenon in the conflicts in Gaza and Ukraine.
The platforms themselves offer little solace. X and Google declined to comment for this story, and their AI services quietly note in their small print that they may produce inaccurate results. This disclaimer, however, does little to mitigate the profound human cost.
Imagine losing a child in a conflict, and then witnessing AI being used online to claim that the event never happened. "That is not just an obstacle for investigators. It is also deeply disrespectful to the loved ones who are grieving," Osieck emphasizes. In an age where information is power, AI's dangerous delusions are not just an inconvenience—they are a threat to truth, accountability, and ultimately, human dignity.