PulseAugur

New framework tackles disguise makeup attacks on facial recognition systems

Researchers have developed a novel framework to detect disguise makeup presentation attacks, which are particularly challenging for facial recognition systems. The proposed method uses a two-phase approach: first, a style-invariant full-face model extracts attention scores; second, a patch-based analysis performs localized discrimination. The framework was evaluated on a newly constructed dataset and demonstrated strong generalization, outperforming previous methods.
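The two-phase pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the "full-face model" and per-patch discriminator here are hypothetical stand-ins (simple statistics in place of trained networks), and the grid size, top-k selection, and threshold are assumed parameters.

```python
import numpy as np

def full_face_attention(face, grid=4):
    """Phase 1 stand-in: split the face into a grid of patches and
    return one softmax-normalized attention score per patch.
    (The paper uses a style-invariant full-face model; patch variance
    is only a saliency proxy for this sketch.)"""
    h, w = face.shape
    ph, pw = h // grid, w // grid
    patches = [face[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
               for i in range(grid) for j in range(grid)]
    raw = np.array([p.var() for p in patches])
    exp = np.exp(raw - raw.max())           # numerically stable softmax
    return patches, exp / exp.sum()

def patch_discriminator(patch):
    """Phase 2 stand-in: map a patch to an attack likelihood in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(patch.mean() - 0.5)))

def detect(face, top_k=4, threshold=0.5):
    """Score only the most-attended patches, then aggregate."""
    patches, attn = full_face_attention(face)
    top = np.argsort(attn)[-top_k:]         # indices of top-k attention
    score = float(np.mean([patch_discriminator(patches[i]) for i in top]))
    return score, score > threshold

rng = np.random.default_rng(0)
face = rng.random((32, 32))                 # placeholder grayscale face
score, is_attack = detect(face)
```

The design point this sketch captures is the attention-guided localization: rather than classifying the whole face, phase 1 narrows the decision to the patches most likely to carry makeup artifacts, and phase 2 discriminates only there.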

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Improves the robustness of facial recognition systems against sophisticated cosmetic-based spoofing attacks.

RANK_REASON This is a research paper detailing a new framework for a specific AI safety problem.


COVERAGE [1]

  1. arXiv cs.CV (Tier 1) · Fateme Taraghi, Atefe Aghaei, Mohsen Ebrahimi Moghaddam

    Generalized Disguise Makeup Presentation Attack Detection Using an Attention-Guided Patch-Based Framework

    arXiv:2604.26025v1 · Abstract: Despite significant advances in facial recognition systems, they remain vulnerable to face presentation attacks. Among them, disguise makeup attacks are particularly challenging, as they use advanced cosmetics, prosthetic components…