Researchers have developed SGAP-Gaze, a novel network for driver gaze estimation that integrates both facial and surrounding-scene information. A Transformer-based attention mechanism fuses features from driver faces and traffic scenes into a more comprehensive gaze-intent representation. On the Urban Driving-Face Scene Gaze (UD-FSG) dataset, the model achieved a significant reduction in mean pixel error compared to existing methods, demonstrating improved accuracy in real-world driving scenarios.
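The paper's fusion step is not detailed in the summary, but the general idea of combining face and scene features with Transformer-style attention can be sketched as cross-attention, where face features query scene features. This is a minimal, hypothetical illustration (the function name, dimensions, and concatenation step are assumptions, not the SGAP-Gaze architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(face_feats, scene_feats):
    """Fuse face features (queries) with scene features (keys/values)
    via scaled dot-product attention, then concatenate the result
    back onto the face features. Shapes: (Nf, d) and (Ns, d)."""
    d = face_feats.shape[-1]
    scores = face_feats @ scene_feats.T / np.sqrt(d)   # (Nf, Ns) attention logits
    attn = softmax(scores, axis=-1)                    # each row sums to 1
    scene_context = attn @ scene_feats                 # (Nf, d) scene summary per face token
    return np.concatenate([face_feats, scene_context], axis=-1)  # (Nf, 2d)

rng = np.random.default_rng(0)
face = rng.standard_normal((4, 64))    # e.g. 4 face tokens, 64-dim
scene = rng.standard_normal((16, 64))  # e.g. 16 scene tokens, 64-dim
fused = cross_attention_fuse(face, scene)
print(fused.shape)  # (4, 128)
```

The fused vector would then feed a regression head that outputs the gaze point; the actual model presumably uses learned projections and multi-head attention rather than this raw single-head form.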
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Improves driver gaze estimation accuracy by integrating scene context, potentially enhancing driver monitoring systems.
RANK_REASON This is a research paper describing a new model and dataset for driver gaze estimation.