Google Research has introduced SensorLM and LSM-2, two new families of foundation models designed to interpret data from wearable sensors. SensorLM connects multimodal sensor signals to natural language, enabling a deeper understanding of health and activity by generating descriptive text from raw sensor data. LSM-2 uses a novel self-supervised learning approach called Adaptive and Inherited Masking (AIM) to learn effectively from incomplete or fragmented sensor data, a common limitation of real-world wearable recordings. The models were trained on extensive datasets, including nearly 2.5 million person-days of de-identified data from over 100,000 individuals, and set new benchmarks in sensor data understanding and activity recognition.
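The sources summarized here do not include reference code, but the AIM idea lends itself to a rough illustration: real gaps in a sensor stream are treated as an "inherited" mask, and additional observed samples are masked "adaptively" until a target mask ratio is reached. The following minimal NumPy sketch shows that bookkeeping under the assumption that missing samples arrive as NaNs; the function name, parameters, and mask-budget logic are illustrative, not taken from Google's released models.

```python
import numpy as np

def aim_style_masking(x, target_mask_ratio=0.8, rng=None):
    """Illustrative AIM-style masking for a 1-D sensor series.

    Missing samples (NaNs) form the "inherited" mask; extra observed
    samples are masked "adaptively" until the combined mask reaches
    target_mask_ratio. In a masked-autoencoder setup, reconstruction
    loss would be computed only on the adaptive positions, where
    ground truth actually exists.
    """
    rng = rng or np.random.default_rng()
    inherited = np.isnan(x)                 # real-world missingness
    n_needed = int(target_mask_ratio * x.size) - int(inherited.sum())

    adaptive = np.zeros_like(inherited)
    observed_idx = np.flatnonzero(~inherited)
    if n_needed > 0:
        # Randomly mask observed samples to top up the mask budget.
        chosen = rng.choice(observed_idx,
                            size=min(n_needed, observed_idx.size),
                            replace=False)
        adaptive[chosen] = True

    combined = inherited | adaptive         # hidden from the encoder
    return combined, adaptive               # loss targets: adaptive only

# Example: a short series with two real gaps (NaNs).
x = np.array([0.1, np.nan, 0.3, 0.5, np.nan, 0.2, 0.4, 0.6])
combined, adaptive = aim_style_masking(x, target_mask_ratio=0.75)
```

In this sketch the loss is restricted to adaptively masked positions that were actually observed, so the model is never scored on inventing data that was never recorded; the inherited gaps still count toward the mask budget, which is how fragmented recordings remain usable for pretraining.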
Summary written by gemini-2.5-flash-lite from 2 sources.
Rank reason: The cluster describes new foundation models and novel self-supervised learning techniques presented in research papers from Google AI.