AI Detection

Is this photo real or AI-generated?
EXIF data tells the truth

AI image generators like Midjourney, DALL-E and Stable Diffusion can produce incredibly realistic photos. But there is one thing they almost never fake: EXIF metadata. Every real camera writes a detailed technical fingerprint into every photo file. AI images almost never contain this data, and when they do, it rarely adds up correctly.

📷

Real photo: what you see in EXIF

  • Camera make and model (e.g. Canon EOS R5)
  • Lens model and focal length
  • Shutter speed, aperture, ISO
  • Exact date and time of capture
  • GPS coordinates (if enabled)
  • Color space, white balance, orientation
  • Serial numbers, firmware version
🤖

AI image: what EXIF typically shows

  • No EXIF data at all (most common)
  • Only basic file info (dimensions, color space)
  • Generic software tag like "Adobe Photoshop"
  • No camera, no lens, no exposure settings
  • Suspiciously round numbers if data is present
  • Creation date matches today, not when "shot"
  • No GPS, even for outdoor scenes

Why EXIF data is the best first check

When a camera takes a photo, it automatically embeds a large amount of technical metadata directly into the image file. This happens at the hardware level: the camera's processor writes it without any human input. The data includes the exact camera model and serial number, the lens, every exposure setting, the precise timestamp, and often GPS coordinates.

AI image generators don't have a camera, a lens, or a shutter. They generate pixels mathematically from a text prompt. There is no hardware writing EXIF data, so the result is almost always a blank EXIF record, or at most a few basic file properties that image editing software adds afterward.

This makes EXIF the fastest and most reliable first check when you're unsure whether an image is genuine or artificially generated.
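For the technically curious, this "is there any EXIF at all?" check is simple to sketch. In a JPEG file, camera EXIF lives in an APP1 marker segment that begins with the bytes Exif\0\0. A minimal stdlib-only Python sketch (the function name is ours, not part of TS EXIF Reader):

```python
import struct

def has_exif_segment(data: bytes) -> bool:
    """Return True if a JPEG byte stream contains an EXIF APP1 segment.

    Real camera JPEGs almost always carry one; AI-generated files
    usually do not.
    """
    if data[:2] != b"\xff\xd8":               # no SOI marker: not a JPEG
        return False
    pos = 2
    while pos + 4 <= len(data):
        if data[pos] != 0xFF:                 # corrupt stream, stop scanning
            break
        marker = data[pos + 1]
        if marker in (0xD9, 0xDA):            # end of image / start of scan
            break
        length = struct.unpack(">H", data[pos + 2:pos + 4])[0]
        if marker == 0xE1 and data[pos + 4:pos + 10] == b"Exif\x00\x00":
            return True                       # APP1 segment with EXIF header
        pos += 2 + length                     # skip to the next marker
    return False
```

A real tool would go on to decode the TIFF structure inside that segment; here we only test for its presence, which is exactly the first-pass signal described above.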

Quick tip: Open the TS EXIF Reader, right-click on a suspicious image, and choose "Show EXIF data". If you see a camera model, lens, shutter speed, and ISO, the image is very likely real. If the EXIF is empty or only shows file dimensions, treat it with skepticism.

The six EXIF signals that reveal an AI image

Not all AI images are the same, and some people try to fake authenticity by editing EXIF data afterward. Here are the six signals to look for, roughly in order of reliability.

🚨
1. No EXIF data at all

The strongest signal. If an image has no camera data, no lens, no shutter speed, and no timestamp, it was almost certainly not taken with a real camera. Screenshots and social media crops also lack EXIF, though, so context matters.

🔢
2. Round or impossible exposure values

Real photographers use values like 1/640s at f/4.0, ISO 800. Faked EXIF often shows suspiciously round numbers like 1/100s at f/8.0, ISO 100, or values that don't make sense for the scene (ISO 100 in a dark forest at night).
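That intuition can be written down as a toy heuristic. The value sets below are our own illustrative picks, not an official list:

```python
# Toy heuristic for signal 2: flag exposure triplets where every value
# is a "textbook" round number. Real captures usually mix in at least
# one odd value (1/640s, ISO 640, f/6.3, ...).
TEXTBOOK_SHUTTERS = {1/30, 1/60, 1/100, 1/125, 1/250, 1/500, 1/1000}
TEXTBOOK_APERTURES = {1.4, 2.0, 2.8, 4.0, 5.6, 8.0, 11.0, 16.0}
TEXTBOOK_ISOS = {100, 200, 400, 800, 1600, 3200}

def looks_too_round(shutter_s: float, aperture: float, iso: int) -> bool:
    """True only if shutter, aperture AND ISO are all textbook values."""
    shutter_hit = any(abs(shutter_s - s) < 1e-9 for s in TEXTBOOK_SHUTTERS)
    return shutter_hit and aperture in TEXTBOOK_APERTURES and iso in TEXTBOOK_ISOS
```

Note that the realistic triplet from the paragraph above passes because a single odd value (1/640s) is enough to clear it, while 1/100s at f/8.0, ISO 100 trips all three sets.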

📅
3. Creation date matches today

AI generators output a fresh file. Even if someone adds fake EXIF, the file creation timestamp in the metadata often reveals when it was really made. A "wildlife photo from 2019" with a file date from last week deserves scrutiny.
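The comparison is easy to sketch. EXIF really does store the capture moment in the fixed format YYYY:MM:DD HH:MM:SS (colons in the date part, unlike ISO 8601); the helper name and the one-year threshold below are ours:

```python
from datetime import datetime, timedelta

def capture_vs_file_gap(exif_datetime: str, file_written: datetime) -> timedelta:
    """How much later the file was written than its claimed capture time.

    exif_datetime uses the EXIF date format 'YYYY:MM:DD HH:MM:SS'.
    """
    captured = datetime.strptime(exif_datetime, "%Y:%m:%d %H:%M:%S")
    return file_written - captured

# A "wildlife photo from 2019" whose file was freshly written:
gap = capture_vs_file_gap("2019:05:12 06:41:03", datetime(2026, 2, 1, 9, 0, 0))
needs_scrutiny = gap > timedelta(days=365)   # illustrative threshold
```

In practice you would pass the file's own timestamp, e.g. `datetime.fromtimestamp(os.path.getmtime(path))`, as the second argument. A large gap alone is not proof (archived photos get copied around), but combined with thin EXIF it deserves scrutiny.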

🖥️
4. Software tag says "Photoshop" or "Lightroom"

Many AI images are run through editing software before sharing, which writes its own software tag. Seeing "Adobe Photoshop 25.0" instead of a camera model doesn't prove AI, but combined with missing exposure data it is a clear warning sign.

📍
5. No GPS data for outdoor scenes

Modern smartphones and many cameras with GPS enabled record coordinates automatically. An outdoor landscape "photographed on location" without any GPS data is suspicious, especially if the rest of the EXIF is thin.

📐
6. Unusual or perfect image dimensions

Real cameras produce specific sensor dimensions (e.g. 6240×4160 for a Canon R5). AI generators tend to output images in round sizes like 1024×1024, 1920×1080 or 2048×1536. A suspicious resolution combined with missing EXIF is a strong signal.
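A rough sketch of that dimension check, with a hand-picked set of common generator output sizes (illustrative, not exhaustive; real tooling would use a proper camera sensor database):

```python
# A few sizes that common generators emit by default (our own list).
COMMON_AI_SIZES = {(512, 512), (1024, 1024), (1920, 1080), (2048, 1536)}

def dimensions_look_generated(width: int, height: int) -> bool:
    """Weak signal: exact match with a common generator size, or a
    power-of-two square (512, 1024, 2048, ...)."""
    power_of_two_square = width == height and width & (width - 1) == 0
    return (width, height) in COMMON_AI_SIZES or power_of_two_square
```

A real Canon R5 frame (6240×4160) passes cleanly, while a 1024×1024 or 4096×4096 file raises the flag. On its own this proves nothing (people crop images to squares), which is why it sits last in the list of signals.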

What EXIF cannot tell you

EXIF is a powerful first indicator, but it has limits. A few important caveats:

EXIF can be stripped. Almost all social media platforms (Instagram, Facebook, Twitter/X, WhatsApp) automatically remove EXIF data when you upload a photo. This means a genuine photo shared via WhatsApp will arrive with no EXIF, not because it's AI, but because the platform stripped it. Always check the original file if possible.

EXIF can be faked. It is technically possible to add EXIF data to an AI-generated image using editing software. A determined bad actor can insert a plausible camera model, lens and exposure values. This is why EXIF should be one signal among several, not the only check.

Screenshots and screen recordings never contain camera EXIF. If someone screenshots a real photo and shares that, the EXIF will be gone. The result is not an AI image, but from an EXIF perspective it is indistinguishable from one.

Good to know: Some AI generators are starting to embed metadata in a different format called C2PA (Content Credentials). This is a newer standard designed specifically to declare whether an image was AI-generated. The TS EXIF Reader currently reads EXIF. C2PA support is on the roadmap.

EXIF as part of a broader check

For a reliable verdict, combine EXIF analysis with a few other quick checks:

Quick AI detection checklist

  • Check the EXIF: is there a camera model, lens, shutter speed, and ISO?
  • Do the exposure values make sense for the scene?
  • Does the file creation date match the claimed capture date?
  • Are the dimensions typical for a real camera sensor, or a round generator size?
  • Does a reverse image search turn up the original source?
  • Does the image pass a careful visual check?
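As a rough sketch, the individual signals from this article can be folded into a single additive score. The field names and weights below are hypothetical; map the names onto whatever your EXIF reader emits:

```python
def suspicion_score(exif: dict) -> int:
    """Rough additive score: higher means more likely AI-generated.

    Weights are illustrative, not calibrated.
    """
    if not exif:
        return 5                          # no EXIF at all: strongest signal
    score = 0
    if "CameraModel" not in exif:
        score += 2                        # pixels without a camera
    if "LensModel" not in exif:
        score += 1
    if "ExposureTime" not in exif:
        score += 1                        # no shutter speed recorded
    if "GPSLatitude" not in exif:
        score += 1                        # weak: GPS may simply be disabled
    if str(exif.get("Software", "")).startswith("Adobe"):
        score += 1                        # editor tag instead of a camera
    return score
```

A complete camera record scores 0; a file whose only metadata is an "Adobe Photoshop" software tag scores well above the empty-EXIF baseline.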

A practical example: wildlife photography

As a wildlife photographer myself, I see AI-generated animal images shared as real photographs more and more often. A "perfect" eagle portrait with no motion blur, impossibly sharp feather detail, and a flattering pose looks stunning, but something feels off.

The EXIF check is always my first step. A genuine wildlife photo of an eagle in flight should show a shutter speed of at least 1/1000s, a telephoto lens of 400mm or longer, a high ISO (the light is never perfect in the field), and a timestamp at dawn or dusk when birds are active. If I see none of that (no camera, no lens, no exposure data), I'm immediately suspicious.
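Those criteria fit in a tiny predicate (thresholds copied from the paragraph above; purely illustrative):

```python
def plausible_birds_in_flight(shutter_s: float, focal_mm: int, iso: int) -> bool:
    """Sanity check for a claimed birds-in-flight photo: fast shutter,
    long telephoto, field-realistic ISO.

    'At least 1/1000s' means the exposure lasts 0.001s or LESS,
    hence the <= comparison on the duration.
    """
    return shutter_s <= 1 / 1000 and focal_mm >= 400 and iso >= 800
```

A typical real capture (1/1600s, 500mm, ISO 3200) passes; EXIF claiming 1/100s on a 50mm lens at ISO 100 for the same scene would not.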

This is exactly why I built the AI detection context into TS EXIF Reader. It's not just about satisfying curiosity about camera settings. In 2026, being able to quickly verify whether an image is genuine is a genuinely useful skill for photographers, journalists, and anyone who cares about what's real.

Check it yourself, for free

Install TS EXIF Reader and right-click any suspicious image. The EXIF data tells you the truth in seconds.

Install TS EXIF Reader, free

Frequently asked questions

Can AI images ever have EXIF data?

Yes, in two ways. Some AI platforms are beginning to embed C2PA Content Credentials, a metadata standard that declares the image as AI-generated. More commonly, someone can manually add fake EXIF to an AI image using tools like ExifTool or Photoshop. This is why missing EXIF is a strong signal, but present EXIF is not an absolute guarantee of authenticity. Always check whether the values make sense for the scene.

Does it work on social media photos?

Unfortunately most social platforms strip EXIF when you upload. This means if you right-click a photo on Instagram or X and check the EXIF, it will almost always be empty, even for genuine photos. For a meaningful check, you need the original file. The From Page tab of TS EXIF Reader works best on photography websites, news sites and portfolio pages that serve original files.

What if the EXIF looks real but the image still seems off?

Trust your instincts. EXIF can be faked. If the exposure values seem implausible for the scene, the dimensions are atypical for the supposed camera, or the file date doesn't match the "capture date" in the EXIF, those are red flags. Combine EXIF analysis with a visual check and a reverse image search for the most reliable verdict.

Is this only relevant for photography?

No. Journalists, fact-checkers, insurance investigators, lawyers, and anyone dealing with user-submitted images can benefit from EXIF analysis. A photo submitted as evidence should have consistent, verifiable EXIF. Missing or inconsistent metadata is worth investigating.