ArtFrame inspects images, video, and audio for traces of synthetic generation — compression forensics, frequency signatures, sensor noise, temporal drift — then shows you why it thinks what it thinks.
Most detectors hand you a single number and hope you trust it. ArtFrame runs an ensemble of classical forensic signals, drawn from two decades of image-forensics literature, and shows you each one, with its reasoning, before computing a verdict.
Drop an image, a clip, a voice memo. Nothing is shared publicly. Every upload is bound to your account with an audit trail.
ArtFrame runs six parallel forensic signals on images, a per-frame ensemble plus temporal drift on video, and spectral profile checks on audio.
You get a verdict, a confidence score, and — crucially — the individual signal scores with plain-language reasons for each.
Each signal produces an independent 0–100 "AI-ness" score. The final verdict is a weighted ensemble — and you see every component, not just the answer.
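The combination step can be sketched as a plain weighted average over the per-signal scores. The signal names and weights below are illustrative placeholders, not ArtFrame's actual configuration:

```python
def ensemble_verdict(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-signal 0-100 AI-ness scores."""
    total = sum(weights.values())
    return sum(scores[name] * w for name, w in weights.items()) / total

# Illustrative signals and weights only.
scores = {"metadata": 80.0, "ela": 55.0, "spectrum": 70.0}
weights = {"metadata": 1.0, "ela": 2.0, "spectrum": 2.0}
verdict = ensemble_verdict(scores, weights)  # 66.0
```

Because every component score is surfaced alongside the verdict, a user can see when one loud signal (say, an AI-labeled Software tag) is carrying the whole result.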
Real cameras leave behind EXIF metadata: make and model, capture time, exposure data. Its absence is a soft signal; an AI-labeled Software tag is a hard one.
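The presence half of that check can be sketched without any imaging library by walking JPEG segment markers and looking for an APP1 block carrying EXIF. This is a simplified illustration, not ArtFrame's parser:

```python
def has_exif_app1(data: bytes) -> bool:
    """Walk JPEG segments looking for an APP1 block carrying EXIF."""
    i = 2  # skip the SOI marker (0xFFD8)
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        seglen = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        if seglen < 2:
            break  # malformed segment length; stop rather than loop
        i += 2 + seglen
    return False
```

A real implementation would go on to decode the TIFF directory inside the APP1 payload and read out the Make, Model, and Software tags.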
Error-level analysis: re-encode the image at a fixed JPEG quality and diff it against the original. Flat error maps suggest the pixels never lived through a real camera sensor.
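That re-encode-and-diff step can be sketched with Pillow in a few lines; the fixed quality of 90 is an illustrative choice, not a documented ArtFrame parameter:

```python
from io import BytesIO

import numpy as np
from PIL import Image, ImageChops

def ela_map(img: Image.Image, quality: int = 90) -> np.ndarray:
    """Re-encode at a fixed JPEG quality and return the per-pixel difference."""
    rgb = img.convert("RGB")
    buf = BytesIO()
    rgb.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf)
    return np.asarray(ImageChops.difference(rgb, recompressed))
```

A map that is near-flat everywhere hints the pixels are synthetic or heavily reprocessed; a real photo usually shows structured error around edges and texture.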
2D FFT over a normalized crop. Generative models often over-smooth high frequencies or leave periodic spectral peaks.
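One way to turn that spectrum into a score is to measure how much energy sits in the high-frequency band of a centered 2D FFT. The radius cutoff below is an arbitrary illustrative choice:

```python
import numpy as np

def spectral_peakiness(gray: np.ndarray) -> float:
    """Fraction of spectral magnitude beyond a fixed radius from DC."""
    f = np.fft.fftshift(np.fft.fft2(gray - gray.mean()))
    mag = np.abs(f)
    h, w = mag.shape
    y, x = np.ogrid[:h, :w]
    r = np.hypot(y - h // 2, x - w // 2)
    total = mag.sum()
    if total == 0.0:
        return 0.0  # constant input has no spectral energy after mean removal
    return float(mag[r > min(h, w) / 4].sum() / total)
```

An over-smoothed generator output scores low here; periodic spectral peaks from upsampling artifacts show up as outlier energy at fixed radii, which a fuller check would locate explicitly.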
Natural photographs carry photon shot noise with a characteristic variance that scales with brightness. Synthetic images are often too clean to have come from a real sensor.
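One standard way to estimate that noise level is Immerkær's fast method: apply a Laplacian-difference kernel and rescale the mean absolute response. This is a sketch of the general technique, not necessarily the estimator ArtFrame uses:

```python
import numpy as np

def noise_sigma(gray: np.ndarray) -> float:
    """Immerkaer-style noise estimate from the [[1,-2,1],[-2,4,-2],[1,-2,1]] kernel."""
    g = gray.astype(float)
    h, w = g.shape
    lap = (4 * g[1:-1, 1:-1]
           - 2 * (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:])
           + g[:-2, :-2] + g[:-2, 2:] + g[2:, :-2] + g[2:, 2:])
    # The kernel response on i.i.d. noise has std 6*sigma; sqrt(pi/2)
    # converts the mean absolute value back to sigma.
    return float(np.sqrt(np.pi / 2) * np.abs(lap).sum() / (6 * (h - 2) * (w - 2)))
```

An estimate far below what the exposure settings would predict is a hint the image never saw photons.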
Diffusion outputs produce large contiguous regions of ultra-smooth texture. We count them.
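A crude version of that counter tiles the image into fixed blocks and flags the ones whose pixel standard deviation falls below a threshold. Block size and threshold here are illustrative, not ArtFrame's tuned values:

```python
import numpy as np

def smooth_block_fraction(gray: np.ndarray, block: int = 8, thresh: float = 1.0) -> float:
    """Fraction of non-overlapping blocks whose pixel std falls below thresh."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block  # crop to a whole number of blocks
    tiles = gray[:h, :w].reshape(h // block, block, w // block, block)
    stds = tiles.std(axis=(1, 3))
    return float((stds < thresh).mean())
```

A fuller check would also merge adjacent flagged blocks into connected regions, since the tell is large *contiguous* ultra-smooth areas rather than scattered flat blocks.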
A PNG with no compression history, missing format markers, or explicit AI strings (stable-diffusion, midjourney) in the first 4 KB of the file.
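The string part of that check is a plain case-insensitive substring search over the file head. The marker list below is just the pair named above, not an exhaustive one:

```python
AI_STRINGS = (b"stable-diffusion", b"midjourney")

def ai_strings_in_head(data: bytes, head_bytes: int = 4096) -> list[str]:
    """Return any known AI-tool strings found in the first 4 KB of the file."""
    head = data[:head_bytes].lower()
    return [s.decode() for s in AI_STRINGS if s in head]
```

Generators that embed their prompt or model name in PNG text chunks light this up immediately; a stripped file simply returns an empty list, which is why this signal is combined with the structural checks rather than trusted alone.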
The Transformation Lab is deliberately limited. You get stylization — sketch, oil, watercolor, cyberpunk, vintage, duotone, mosaic, pixel — and nothing that imitates a specific person.