Researchers have developed a novel framework called JackZebra that can hijack autonomous vehicles by subtly altering their routes over extended periods. Unlike previous attacks that cause immediate safety failures, this method gradually steers the vehicle toward an attacker-chosen destination without triggering obvious errors. The setup uses a physically plausible attacker vehicle equipped with a display and camera: adversarial patches shown on the display are perceived by the victim vehicle and translated into steering deviations, successfully diverting victim vehicles in both simulated and real-world tests.
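The core idea behind such a long-horizon attack, as described above, is that perturbations too small to trip any single anomaly check can still compound into a large route diversion. The following is a minimal toy sketch of that accumulation effect only, not the paper's actual method; the `simulate` function, the per-step heading bias, and all parameter values are illustrative assumptions.

```python
import math

def simulate(steps, bias_deg, speed=1.0):
    """Toy kinematic model (assumed, not from the paper): the vehicle
    intends to drive straight along +x, but each step an attacker nudges
    its heading by a tiny bias_deg that stays below an obvious-error
    threshold."""
    x = y = 0.0
    heading = 0.0  # radians; the intended route is straight along +x
    for _ in range(steps):
        heading += math.radians(bias_deg)  # imperceptible per-step nudge
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    return x, y

straight = simulate(steps=500, bias_deg=0.0)   # unattacked baseline
attacked = simulate(steps=500, bias_deg=0.1)   # 0.1 degree per step

# Each individual nudge is tiny, yet the cumulative lateral
# offset from the intended route grows large over the horizon.
offset = abs(attacked[1] - straight[1])
```

Running this toy shows a lateral offset of well over 100 distance units after 500 steps, even though no single step changes the heading by more than a tenth of a degree, which is the long-horizon property that makes this attack class hard to catch with instantaneous checks.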
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Demonstrates a new class of long-horizon attacks against autonomous systems, necessitating more robust safety and security measures.
RANK_REASON Academic paper detailing a new adversarial attack framework on autonomous vehicles.