Deepfake Audio Detection

Wav2Vec 2.0 fine-tuned on ASVspoof 2019 LA • Cross-dataset evaluated on ASVspoof 2021 LA & WaveFake

Deep Learning Audio Forensics

Is this voice real?

Modern AI can clone any voice from just a few seconds of audio. This detector uses Wav2Vec 2.0 to tell synthetic speech apart from authentic recordings, achieving a 0.69% Equal Error Rate (EER) on the ASVspoof 2019 LA benchmark.
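For context on the benchmark figure: the Equal Error Rate is the operating point where the rate of spoofed audio accepted as genuine equals the rate of genuine audio rejected as spoofed. Below is a minimal sketch of how EER can be computed from a detector's scores, assuming higher scores mean "more likely genuine"; the function name and threshold-sweep approach are illustrative, not taken from this project's codebase.

```python
import numpy as np

def equal_error_rate(genuine_scores, spoof_scores):
    """Compute the Equal Error Rate (EER): the decision threshold at
    which the false-acceptance rate (spoof scored as genuine) equals
    the false-rejection rate (genuine scored as spoof).

    Assumes higher scores mean "more likely genuine".
    """
    genuine = np.asarray(genuine_scores, dtype=float)
    spoof = np.asarray(spoof_scores, dtype=float)

    # Sweep every observed score as a candidate decision threshold.
    thresholds = np.sort(np.concatenate([genuine, spoof]))
    far = np.array([(spoof >= t).mean() for t in thresholds])   # false acceptance
    frr = np.array([(genuine < t).mean() for t in thresholds])  # false rejection

    # The EER is where the two error curves cross; approximate it with
    # the midpoint at the threshold minimising their gap.
    idx = np.argmin(np.abs(far - frr))
    return (far[idx] + frr[idx]) / 2
```

A 0.69% EER therefore means that, at the balanced threshold, fewer than 1 in 100 trials are misclassified in either direction.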

Why this matters
Voice deepfakes are already in the wild
📞
Phone scams

Voice clones are increasingly used to impersonate family members in "emergency call" scams. Reported cases have surged since 2022, with losses running into millions of dollars annually.

📰
Misinformation

Fabricated political speeches, fake celebrity endorsements, and false statements attributed to public figures have circulated widely on social media platforms in recent election cycles.

⚖️
Trust in evidence

Courts increasingly have to determine whether audio recordings submitted as evidence are authentic. The same challenge applies to investigative journalism and historical archive verification.

Try the detector
Upload your own audio, record from your microphone, or pick an example to get started.