PulseAugur

Crowdsourcing struggles to reliably detect audiovisual deepfakes

A new research paper explores the effectiveness of crowdsourcing for detecting audiovisual deepfakes. The study found that while crowd workers are generally good at identifying authentic videos, they frequently miss manipulated content and struggle to accurately pinpoint the type or timestamps of manipulation. Aggregating judgments can improve the detection of authenticity, but it does not fully address missed manipulations or the difficulty of identifying specific manipulation types, especially for combined audio-video deepfakes.
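The aggregation idea mentioned above can be illustrated with a minimal majority-vote sketch. This is an assumption about the general technique, not the paper's actual aggregation method; the function name and label values are hypothetical.

```python
from collections import Counter

def aggregate_votes(labels):
    """Majority-vote aggregation of per-worker judgments for one video.

    labels: list of worker labels, e.g. ["real", "fake", "real"].
    Returns the most common label (ties resolved by first insertion order).
    """
    return Counter(labels).most_common(1)[0][0]

# A video most workers judge authentic is recovered as authentic...
print(aggregate_votes(["real", "real", "fake"]))  # -> real
# ...but when most workers miss a manipulation, the aggregate inherits
# that error, which is the limitation the study highlights.
print(aggregate_votes(["real", "fake", "real", "real"]))  # -> real
```

This illustrates why aggregation helps on authentic videos (where individual workers are mostly correct) yet cannot fix systematically missed manipulations.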

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Highlights limitations in current crowdsourcing methods for detecting sophisticated audiovisual manipulations.

RANK_REASON Academic paper on crowdsourced deepfake detection.

Read on arXiv cs.AI →

COVERAGE [1]

  1. arXiv cs.AI TIER_1 · Stefano Mizzaro

    Beyond Seeing Is Believing: On Crowdsourced Detection of Audiovisual Deepfakes

    Deepfakes are increasingly realistic and easy to produce, raising concerns about the reliability of human judgments in misinformation settings. We study audiovisual deepfake detection by measuring how consistently crowd workers distinguish authentic from manipulated videos and, w…