The internet is full of videos and images that look like news, but were actually made or altered with powerful AI. Some clips show missiles hitting an airport in Tel Aviv. Others show US soldiers captured at gunpoint by Iranian forces. None of those scenes actually happened, yet they have been shared widely.

Fake visual content spreads fast; corrections do not

Even when experts debunk these clips, the corrections do not always slow the spread. Fact-checkers say new falsified images and videos appear faster than verification teams can handle them. Meanwhile, genuine photos are sometimes falsely accused of manipulation, which makes people doubt real reporting about a real war.

One recent example: a contested Tehran crowd photo

A news organization had to publicly defend one of its photographs after an outside group suggested the image showed signs of "copy-paste duplication." The outlet responded that the picture was taken by a journalist on March 9, 2026, and that the critics' analysis was flawed. It also reiterated that it does not use AI to create or alter images presented as documenting real events.

The critics replied that their point was not that the photo was fabricated, but that some regimes have incentives to manipulate visuals and that news organizations must verify carefully. Responsible newsrooms do that verification, and they are sometimes criticized for the time it takes to reach a conclusion.

Three practical rules for not being fooled

I spoke with David Clinch, a media consultant and a founding partner of Storyful, an early verification shop. He offered three simple rules for anyone trying to judge images and videos online.

1. Don’t trust anyone online, including yourself

We all bring biases and wishes to what we see. That makes it easy to share something because we want it to be true. Pause before you hit share. If you are rushed, you are more likely to amplify a false story.

2. Find real experts

Some people build their careers around verifying visual claims; follow and trust those experts. For example, a BBC analyst who researches fabricated clips recently identified a viral video purporting to show a strike on a Tel Aviv tower as AI-generated, noting that the blast looked fake and that no such incident had been documented.

Also beware pseudo-experts: not every account or chatbot that sounds confident is reliable. Look for people or teams with a track record and transparent methods.

3. Don’t let one image stand for the whole story

Even an accurate photo rarely tells the full story; context matters. Ask: who took the image, when, and where? What other reporting confirms the scene? If you cannot answer those questions, it is safer not to treat a single clip as proof of a larger narrative.

Yes, this is inconvenient, and yes, it matters

Clinch was frank: expecting ordinary people to do some verification is unfair, but it is part of being a responsible consumer of news today. Slow down, check expertise, and demand context. Doing that reduces the chances you will spread an AI-made fake or help confuse others about a serious conflict.

Quick checklist:
  • Pause before sharing.
  • Look for verification from named experts or verification teams.
  • Ask for context beyond the single image or clip.

In short: assume nothing at first, trust credible verifiers, and always look for context. If you want to help, slow down before you share.