PulseAugur
research · [7 sources]

New research explores adversarial attacks and critic-based VLA models for autonomous driving safety

Researchers are developing methods to create adversarial patches that can fool vision-language models (VLMs) used in autonomous driving. When physically applied, these patches can cause perception systems to miss pedestrians or misinterpret road conditions. Studies report high transferability between VLM architectures: an attack optimized against one model often remains effective against others, posing a significant safety risk.
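The transferability idea can be illustrated with a toy sketch. Everything below is invented for illustration (linear "detectors" standing in for VLMs, a made-up input, an arbitrary attack budget); it is not any of the papers' actual methods. The sketch crafts a bounded perturbation against a surrogate model and checks that it also lowers the score of a related victim model it was never optimized for:

```python
import numpy as np

# Toy illustration of adversarial-patch transferability: craft a bounded
# perturbation against a surrogate linear "detector" and check that it
# also lowers the score of a related victim model it never saw.
# All models, dimensions, and budgets here are invented for the sketch.

rng = np.random.default_rng(0)
d = 16                       # flattened "image" size (hypothetical)
eps = 0.1                    # L-infinity budget on the patch

x = rng.uniform(0.2, 0.8, d)             # clean input
w_surrogate = rng.uniform(-1.0, 1.0, d)  # model the attacker can query
# The victim shares structure with the surrogate (correlated weights),
# mimicking architecturally related backbones.
w_victim = 0.8 * w_surrogate + 0.2 * rng.uniform(-0.1, 0.1, d)

def score(w, x):
    """Detection score: higher means 'object present'."""
    return float(w @ x)

# One-step FGSM-style patch: move against the surrogate's gradient.
patch = -eps * np.sign(w_surrogate)
x_adv = x + patch

surrogate_drop = score(w_surrogate, x) - score(w_surrogate, x_adv)
victim_drop = score(w_victim, x) - score(w_victim, x_adv)
print(f"surrogate score drop: {surrogate_drop:.3f}")
print(f"victim score drop (transfer): {victim_drop:.3f}")
```

Because the victim's weights are correlated with the surrogate's, the patch lowers both scores even though only one model was attacked directly, which is the core of the cross-architecture transfer risk the papers study at far larger scale.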

Summary written by gemini-2.5-flash-lite from 7 sources.

IMPACT New research highlights significant vulnerabilities in autonomous driving perception systems, potentially requiring new defense mechanisms against adversarial attacks.

RANK_REASON The cluster contains multiple arXiv papers detailing research on adversarial attacks against vision-language models for autonomous driving.

Read on arXiv cs.CV →

COVERAGE [7]

  1. Hugging Face Daily Papers TIER_1

    Judge, Then Drive: A Critic-Centric Vision Language Action Framework for Autonomous Driving

    Recent advances in vision language action (VLA) models have shown remarkable potential for autonomous driving by directly mapping multimodal inputs to control signals. However, previous VLA-based methods have not explicitly exploited the critic capability of VLAs to refine drivin…

  2. arXiv cs.CV TIER_1 · Lijin Yang, Jianing Huang, Zhongzhan Huang, Shu Liu, Hao Yang

    Judge, Then Drive: A Critic-Centric Vision Language Action Framework for Autonomous Driving

    arXiv:2604.27366v1 Announce Type: new Abstract: Recent advances in vision language action (VLA) models have shown remarkable potential for autonomous driving by directly mapping multimodal inputs to control signals. However, previous VLA-based methods have not explicitly exploite…
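The "judge, then drive" idea can be sketched abstractly: a planner proposes several candidate actions, a critic scores each, and the highest-scoring one is executed. The candidates, the critic, and its scoring terms below are all made-up stand-ins, not the paper's model:

```python
# Hypothetical critic-centric selection loop: propose candidate
# trajectories, let a critic judge each, then drive the best-judged one.

# Candidate trajectories as (lateral_offset_m, speed_mps) pairs -- invented.
candidates = [(-1.0, 12.0), (0.0, 15.0), (0.5, 20.0)]

def critic(traj):
    """Toy critic: prefer staying lane-centered at a moderate speed."""
    lateral, speed = traj
    centering_penalty = abs(lateral)          # drift from lane center
    speed_penalty = abs(speed - 15.0) / 10.0  # deviation from target speed
    return -(centering_penalty + speed_penalty)

# Judge, then drive: rank every candidate before committing to one.
best = max(candidates, key=critic)
print("chosen trajectory:", best)  # -> chosen trajectory: (0.0, 15.0)
```

The point of the pattern is that the action actually executed is filtered through an explicit evaluation step, rather than being the raw output of the policy.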

  3. arXiv cs.CV TIER_1 · David Fernandez, Pedram MohajerAnsari, Amir Salarpour, Mert D. Pese

    Understanding Adversarial Transferability in Vision-Language Models for Autonomous Driving: A Cross-Architecture Analysis

    arXiv:2604.27414v1 Announce Type: new Abstract: Vision-language models (VLMs) are increasingly used in autonomous driving because they combine visual perception with language-based reasoning, supporting more interpretable decision-making, yet their robustness to physical adversar…

  4. arXiv cs.CV TIER_1 · Mert D. Pese

    Understanding Adversarial Transferability in Vision-Language Models for Autonomous Driving: A Cross-Architecture Analysis

    Vision-language models (VLMs) are increasingly used in autonomous driving because they combine visual perception with language-based reasoning, supporting more interpretable decision-making, yet their robustness to physical adversarial attacks, especially whether such attacks tra…

  5. arXiv cs.CV TIER_1 · Zihui Zhu, Ziqi Zhou, Yichen Wang, Lulu Xue, Minghui Li, Shengshan Hu

    Transferable Physical-World Adversarial Patches Against Object Detection in Autonomous Driving

    arXiv:2604.23105v1 Announce Type: new Abstract: Deep learning drives major advances in autonomous driving (AD), where object detectors are central to perception. However, adversarial attacks pose significant threats to the reliability and safety of these systems, with physical ad…
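Physical patches must keep working as viewpoint and lighting change. A standard way to model this, Expectation Over Transformation (assumed here as background, not taken from the paper above), is to optimize the patch against the average loss over random transforms. A linear toy version, with the detector, input, and transform set all invented for the sketch:

```python
import numpy as np

# Toy Expectation-Over-Transformation (EOT) flavor: craft a perturbation
# that lowers a linear detector's score *on average* over spatial shifts,
# mimicking how a physical patch must survive many viewpoints.
# The detector, input, and budget are invented for this sketch.

rng = np.random.default_rng(1)
d = 32
eps = 0.05
x = rng.uniform(0.2, 0.8, d)       # clean "image"
w = rng.uniform(-1.0, 1.0, d)      # linear detection score: w @ x
shifts = [0, 1, 2, 3]              # transforms: circular spatial shifts

def expected_score(p):
    """Average detector score over all modeled transforms of x + p."""
    return float(np.mean([w @ np.roll(x + p, s) for s in shifts]))

# The gradient of the expected score w.r.t. p is the average of the
# back-shifted weights; take one FGSM-style step against it.
g = np.mean([np.roll(w, -s) for s in shifts], axis=0)
patch = -eps * np.sign(g)

drop = expected_score(np.zeros(d)) - expected_score(patch)
print(f"average score drop over transforms: {drop:.3f}")
```

Averaging over transforms during optimization is what distinguishes a patch that only works from one camera pose from one that degrades detection across the whole modeled viewing distribution.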

  6. arXiv cs.CV TIER_1 · Shihui Yan, Ziqi Zhou, Yufei Song, Yifan Hu, Minghui Li, Shengshan Hu

    Transferable Physical-World Adversarial Patches Against Pedestrian Detection Models

    arXiv:2604.22552v1 Announce Type: new Abstract: Physical adversarial patch attacks critically threaten pedestrian detection, causing surveillance and autonomous driving systems to miss pedestrians and creating severe safety risks. Despite their effectiveness in controlled setting…

  7. arXiv cs.CV TIER_1 · Shengshan Hu

    Transferable Physical-World Adversarial Patches Against Pedestrian Detection Models

    Physical adversarial patch attacks critically threaten pedestrian detection, causing surveillance and autonomous driving systems to miss pedestrians and creating severe safety risks. Despite their effectiveness in controlled settings, existing physical attacks face two major limi…