How Krites Works

A multi-signal forensic approach to deepfake detection with transparent, interpretable analysis. Unlike black-box solutions, we show you exactly how and why media was flagged.

The Detection Pipeline

1. Upload: submit an image, video, or audio file
2. Process: extract frames and audio tracks
3. Analyze: run three detection signals
4. Report: generate a forensic report
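Conceptually, the four stages compose like the sketch below. Every function and class name here is a hypothetical placeholder to show the flow, not the actual Krites API:

```python
from dataclasses import dataclass

@dataclass
class ForensicReport:
    visual_score: float   # each score in [0, 1], higher = more suspicious
    audio_score: float
    lipsync_score: float

def run_pipeline(media_path: str) -> ForensicReport:
    # 1. Upload / 2. Process: split the file into frames and an audio track.
    frames, audio = extract_frames_and_audio(media_path)
    # 3. Analyze: run the three detection signals independently.
    return ForensicReport(
        visual_score=visual_signal(frames),
        audio_score=audio_signal(audio),
        lipsync_score=lipsync_signal(frames, audio),
    )  # 4. Report: these scores feed the rendered forensic report.

# Placeholder stubs so the sketch runs; real extractors and models
# would replace these.
def extract_frames_and_audio(path): return [], None
def visual_signal(frames): return 0.0
def audio_signal(audio): return 0.0
def lipsync_signal(frames, audio): return 0.0
```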

Detection Signals

Visual Analysis

EfficientNet-B7 (Google)

State-of-the-art image classifier that detects facial artifacts, lighting inconsistencies, and unnatural textures.

  • Face boundary artifacts
  • Lighting inconsistencies
  • GAN texture patterns
  • Blending anomalies
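A minimal sketch of how a single frame could be scored with an EfficientNet-B7 backbone, assuming torchvision and a fine-tuned binary head; the checkpoint path "krites_visual.pt" is an illustrative assumption, not the shipped Krites model:

```python
import torch
from PIL import Image
from torchvision import models, transforms

# EfficientNet-B7 backbone with its 1000-class head swapped for a
# single real/fake logit. The weights file below is hypothetical.
model = models.efficientnet_b7(weights=None)
model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, 1)
model.load_state_dict(torch.load("krites_visual.pt", map_location="cpu"))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(600),                 # B7's native input resolution
    transforms.CenterCrop(600),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def fake_probability(frame: Image.Image) -> float:
    """Return the classifier's probability that a frame is manipulated."""
    x = preprocess(frame).unsqueeze(0)
    with torch.no_grad():
        logit = model(x)
    return torch.sigmoid(logit).item()      # near 1.0 = likely fake
```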

Audio Analysis

Wav2Vec 2.0 (Meta)

Self-supervised speech model that identifies synthetic audio patterns and voice cloning artifacts.

  • Spectral anomalies
  • Synthetic breathing
  • Unnatural prosody
  • Missing micro-variations
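A minimal sketch of real-vs-synthetic audio scoring on a Wav2Vec 2.0 backbone, using the Hugging Face transformers API. Loading the public base checkpoint with a fresh two-label head is an assumption for illustration; in practice that head would be fine-tuned on real and cloned speech:

```python
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification

# Public base checkpoint; the two-label classification head is freshly
# initialized here and stands in for a fine-tuned anti-spoofing head.
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForSequenceClassification.from_pretrained(
    "facebook/wav2vec2-base-960h", num_labels=2
)
model.eval()

def synthetic_probability(path: str) -> float:
    """Return the model's probability that an audio clip is synthetic."""
    waveform, sr = torchaudio.load(path)
    waveform = torchaudio.functional.resample(waveform, sr, 16_000)
    inputs = extractor(waveform.mean(dim=0).numpy(),   # downmix to mono
                       sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()  # index 1 = synthetic
```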

Lip-Sync Analysis

AV-HuBERT (Meta)

Audio-visual speech model that measures synchronization between lip movements and audio.

  • A/V sync offset
  • Phoneme mismatch
  • Temporal inconsistency
  • Mouth movement artifacts
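AV-HuBERT itself is a research model, so as a simplified stand-in for the sync-offset measurement, the SyncNet-style sketch below slides per-frame audio embeddings against lip-region embeddings (both assumed to come from the audio-visual model at a shared frame rate) and reports the best-matching offset. A large offset or a low peak score suggests the audio does not match the mouth movements:

```python
import numpy as np

def sync_offset(audio_emb: np.ndarray, lip_emb: np.ndarray,
                max_shift: int = 15) -> tuple[int, float]:
    """Find the frame shift that maximizes mean cosine similarity
    between audio and lip embeddings (both of shape [T, D])."""
    def score(a: np.ndarray, v: np.ndarray) -> float:
        a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-8)
        v = v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-8)
        return float(np.mean(np.sum(a * v, axis=1)))

    best_offset, best_score = 0, -1.0
    for shift in range(-max_shift, max_shift + 1):
        # Trim both streams so they overlap at this candidate offset.
        if shift >= 0:
            a, v = audio_emb[shift:], lip_emb[:len(lip_emb) - shift]
        else:
            a, v = audio_emb[:shift], lip_emb[-shift:]
        n = min(len(a), len(v))
        if n == 0:
            continue
        s = score(a[:n], v[:n])
        if s > best_score:
            best_offset, best_score = shift, s
    return best_offset, best_score
```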

Why Multi-Signal Detection?

Single-Model Limitations

  • Fails on unfamiliar deepfake methods
  • Easily fooled by post-processing
  • No explanation for its detections
  • High false positive rates

Multi-Signal Advantages

  • Robust against new techniques
  • Cross-validation between signals
  • Detailed, interpretable explanations
  • Higher accuracy and confidence
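A toy illustration of how the three signals could be fused while keeping the per-signal breakdown that makes the report interpretable; the weights and flagging threshold are made up for the example, not Krites' calibrated values:

```python
# Each detector contributes a score in [0, 1]; the signals
# cross-validate each other through a weighted combination.
WEIGHTS = {"visual": 0.4, "audio": 0.3, "lipsync": 0.3}  # illustrative
THRESHOLD = 0.5                                          # illustrative

def fuse(scores: dict[str, float]) -> dict:
    combined = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    return {
        "combined_score": round(combined, 3),
        "flagged": combined > THRESHOLD,
        # Per-signal scores are kept so the report can explain *why*.
        "breakdown": scores,
    }

print(fuse({"visual": 0.91, "audio": 0.22, "lipsync": 0.78}))
```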

Ready to Try It?

Upload your media and get transparent analysis in seconds.

Analyze Media