ArtFrame
Forensic media detection, responsibly built

See what the camera can no longer prove.

ArtFrame inspects images, video, and audio for traces of synthetic generation — compression forensics, frequency signatures, sensor noise, temporal drift — then shows you why it thinks what it thinks.

No model black box
Every output watermarked
You see every signal
SAMPLE · 256×256 · scan readout:
EXIF: absent · ELA μ: 1.24 · FFT HF/MF: 0.38 · noise σ²: 6.81 · smooth: 67%
Verdict: 80% · AI likely
Error level analysis · Frequency domain · Sensor noise residual · EXIF audit · Temporal drift · Spectral prosody · Texture variance
01 — The method

Forensics first. Confidence scores second.

Most detectors hand you a single number and hope you trust it. ArtFrame runs an ensemble of classical forensic signals, techniques established in the image-forensics literature over the past two decades, and shows you each one, with its reasoning, before computing a verdict.

01
Step 1

Upload

Drop an image, a clip, a voice memo. Nothing is shared publicly. Every upload is bound to your account with an audit trail.

02
Step 2

Inspect

ArtFrame runs six parallel forensic signals on images, a per-frame ensemble plus temporal drift on video, and spectral profile checks on audio.

03
Step 3

Interpret

You get a verdict, a confidence score, and — crucially — the individual signal scores with plain-language reasons for each.

02 — What we look at

Six signals for images.
One honest verdict.

Each signal produces an independent 0–100 "AI-ness" score. The final verdict is a weighted ensemble — and you see every component, not just the answer.
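The weighted ensemble described above can be sketched as a simple weighted average. The signal names and weights here are illustrative placeholders, not ArtFrame's actual coefficients:

```python
def ensemble_verdict(scores, weights):
    """Combine per-signal 0-100 'AI-ness' scores into one weighted verdict.

    scores  -- dict of signal name -> 0-100 score
    weights -- dict of signal name -> relative weight (illustrative values)
    """
    total_weight = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total_weight


# Example: three of the six signals, with made-up weights.
scores = {"exif": 65, "ela": 80, "fft": 50}
weights = {"exif": 1.0, "ela": 2.0, "fft": 1.0}
verdict = ensemble_verdict(scores, weights)  # 68.75
```

Because every component score is surfaced alongside the verdict, a user can see, for instance, that a high ELA score is what pulled the ensemble up.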

01

Metadata / EXIF audit

Real cameras leave behind make, model, capture time, and exposure data. Absent metadata is a soft signal; a Software tag naming an AI tool is a hard one.
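The hard-versus-soft distinction can be sketched as a scoring rule over an EXIF tag dictionary. The tool names and the score thresholds here are assumptions for illustration, not ArtFrame's tuned values:

```python
AI_SOFTWARE_TAGS = ("stable diffusion", "midjourney", "dall-e", "firefly")


def exif_signal(exif: dict) -> int:
    """Map an EXIF tag dict to a 0-100 'AI-ness' score (hypothetical thresholds)."""
    software = str(exif.get("Software", "")).lower()
    if any(tag in software for tag in AI_SOFTWARE_TAGS):
        return 95  # hard signal: an AI tool wrote the Software tag
    if not exif.get("Make") and not exif.get("Model"):
        return 65  # soft signal: no camera provenance at all
    return 15      # camera fields present: consistent with a real capture
```

In practice the dict would come from a real reader such as Pillow's `Image.getexif()`; the scoring logic is the same either way.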

02

Error Level Analysis

We re-encode the image at a fixed JPEG quality and diff it against the original. A flat, uniform error map suggests the pixels never lived through a real capture-and-compress pipeline.
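A minimal ELA sketch, assuming Pillow and NumPy are available; the quality setting of 90 is an arbitrary choice for illustration:

```python
import io

import numpy as np
from PIL import Image, ImageChops


def ela_mean_error(img: Image.Image, quality: int = 90) -> float:
    """Re-encode at a fixed JPEG quality and return the mean absolute
    pixel difference. A near-zero, uniform error map is the suspicious case."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    diff = ImageChops.difference(img.convert("RGB"), resaved)
    return float(np.asarray(diff, dtype=np.float64).mean())
```

A full implementation would inspect the spatial error map, not just its mean, since locally inconsistent error levels are also evidence of splicing.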

03

Frequency-domain profile

2D FFT over a normalized crop. Generative models often over-smooth high frequencies or leave periodic spectral peaks.
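One way to summarize the spectrum is the share of energy outside a central low-frequency disc; over-smoothed generative output drives this ratio down. The cutoff fraction below is an assumed threshold, not ArtFrame's tuned value:

```python
import numpy as np


def high_freq_ratio(gray: np.ndarray, cutoff: float = 0.25) -> float:
    """Share of 2-D spectral energy outside a central low-frequency disc.

    cutoff is expressed as a fraction of the smaller image dimension."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray.astype(float))))
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    low = spec[radius <= cutoff * min(h, w)].sum()
    return float(1.0 - low / spec.sum())
```

A flat patch concentrates all energy at DC and scores near zero; natural texture and sensor noise spread energy outward and raise the ratio.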

04

Sensor noise residual

Natural photographs carry photon-shot noise with a characteristic variance. Synthetic images are often too clean to have come from a real sensor.
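A crude noise-variance estimate can come from adjacent-pixel differences, under the assumption that the scene is locally smooth so the differences are dominated by noise. This is a sketch of the idea, not ArtFrame's estimator:

```python
import numpy as np


def noise_sigma2(gray: np.ndarray) -> float:
    """Rough noise-variance estimate from horizontal pixel differences.

    For i.i.d. noise, Var(a - b) = 2 * sigma^2, hence the division by two.
    Assumes locally smooth image content."""
    d = np.diff(gray.astype(float), axis=1)
    return float(d.var() / 2.0)
```

An estimate far below what the claimed camera and ISO would produce is the "too clean" signal described above.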

05

Block texture variance

Diffusion outputs often contain large contiguous regions of ultra-smooth texture. We count them.
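Counting smooth regions can be sketched as tiling the image and flagging tiles whose variance falls below a cutoff. The block size and variance threshold here are assumptions for illustration:

```python
import numpy as np


def smooth_block_pct(gray: np.ndarray, block: int = 8,
                     var_thresh: float = 2.0) -> float:
    """Percentage of block x block tiles whose variance is below var_thresh."""
    h, w = gray.shape
    smooth = total = 0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            total += 1
            if gray[y:y + block, x:x + block].var() < var_thresh:
                smooth += 1
    return 100.0 * smooth / total
```

This is the figure behind the "smooth %" readout in the sample scan above.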

06

File header inspection

PNG without compression history, missing markers, or explicit AI strings (stable-diffusion, midjourney) in the first 4KB.
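The explicit-string check is the simplest of the six: scan the first 4 KB of the raw file for known AI-tool markers. The marker list below is a partial, illustrative one:

```python
AI_MARKERS = (b"stable-diffusion", b"midjourney", b"dall-e")


def header_ai_strings(raw: bytes) -> list:
    """Return any known AI-tool strings found in the first 4 KB of a file."""
    head = raw[:4096].lower()
    return [m.decode() for m in AI_MARKERS if m in head]
```

A hit here is a hard signal, like an AI-labeled Software tag; an empty result proves nothing on its own, which is why it feeds the ensemble rather than deciding the verdict.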

03 — The line we hold

A lab, not a weapon.

The Transformation Lab is deliberately limited. You get stylization — sketch, oil, watercolor, cyberpunk, vintage, duotone, mosaic, pixel — and nothing that imitates a specific person.

  • No identity transfer. No face-swap. No voice cloning.
  • Every output carries a diagonal watermark and a bottom-right badge. The JPEG comment records style, user ID, and timestamp.
  • Daily quota of 10 transformations per account. Every request is audit-logged.
  • Before anything runs, you confirm the media is yours and you accept the AI-generated label on the output.
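The provenance record embedded in each output's JPEG comment might look like the sketch below. The field names and JSON encoding are assumptions; the source only states that the comment records style, user ID, and timestamp:

```python
import json
import time


def provenance_comment(style: str, user_id: str) -> str:
    """Build a JSON payload for the output's JPEG comment (hypothetical schema)."""
    return json.dumps({
        "style": style,
        "user_id": user_id,
        "timestamp": int(time.time()),
        "label": "AI-generated",
    })
```

Embedding the record in the file itself means the AI-generated label travels with the output even if the visible watermark is cropped away.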

Start with one image.
See what it's hiding.