Researchers have developed a new gesture recognition system for smart eyewear that fuses data from low-resolution Time-of-Flight and infrared thermal sensors. The approach is designed to be lightweight and privacy-preserving, avoiding the power and latency costs of traditional camera-based methods. A compact Convolutional Neural Network processes the fused sensor data directly on a microcontroller, achieving 92.3% accuracy and a 0.93 F1-score on a dataset of seven static gestures. The system is optimized for resource-constrained wearables, requiring few parameters and low power for millisecond-level inference.
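As a rough illustration of the kind of pipeline described (early fusion of a low-resolution depth map and a thermal map, fed to a tiny CNN), here is a minimal NumPy sketch. The 8x8 grid sizes, filter counts, and weights are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

# Hypothetical sensor resolutions (the paper's real grid sizes may differ):
# an 8x8 Time-of-Flight depth map and an 8x8 IR thermal map,
# stacked channel-wise for early fusion.
rng = np.random.default_rng(0)
tof = rng.random((8, 8))      # normalized depth values
thermal = rng.random((8, 8))  # normalized temperature values
x = np.stack([tof, thermal])  # shape (2, 8, 8)

def conv2d_relu(x, w, b):
    """Valid 2-D convolution + ReLU: x (C,H,W), w (F,C,k,k), b (F,)."""
    f, c, k, _ = w.shape
    h, wd = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((f, h, wd))
    for i in range(h):
        for j in range(wd):
            patch = x[:, i:i+k, j:j+k]
            out[:, i, j] = np.tensordot(w, patch, axes=([1, 2, 3], [0, 1, 2])) + b
    return np.maximum(out, 0.0)

# Tiny illustrative weights: 4 filters of 3x3, then a linear head to 7 gestures.
w1 = rng.standard_normal((4, 2, 3, 3)) * 0.1
b1 = np.zeros(4)
w2 = rng.standard_normal((7, 4)) * 0.1
b2 = np.zeros(7)

feat = conv2d_relu(x, w1, b1).mean(axis=(1, 2))  # global average pooling -> (4,)
logits = w2 @ feat + b2                          # scores for 7 static gestures
pred = int(np.argmax(logits))
```

The tiny input and parameter count hint at why such a model fits on a microcontroller: the whole forward pass is a handful of small convolutions and one matrix-vector product.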
Summary written by gemini-2.5-flash-lite from 1 source.
IMPACT Enables more power-efficient and private gesture control for AR wearables.
RANK_REASON Publication of an academic paper on a novel method for gesture recognition.