PulseAugur

New HARMES dataset combines motion, environmental, and audio data for activity recognition

Researchers have introduced HARMES, a new multi-modal dataset for wearable human activity recognition. The dataset combines motion, environmental, and audio sensing from wrist-worn devices, totaling over 80 hours of recordings from 20 participants performing household activities. HARMES is designed to improve recognition of activities of daily living, which individual modalities often struggle to disambiguate, and is significantly larger than previous datasets of its kind.
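
Multi-modal HAR datasets like this are typically consumed by windowing each sensor stream and fusing per-window features before classification. The sketch below illustrates that general late-fusion pattern on synthetic data; the array shapes, channel counts, window sizes, and labels are assumptions for illustration and do not reflect the actual HARMES release format.

```python
# Hypothetical late-fusion sketch for multi-modal wearable HAR
# (motion, environmental, audio). All shapes and labels are made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def window_features(stream: np.ndarray, win: int, hop: int) -> np.ndarray:
    """Slice a (time, channels) stream into windows and compute simple
    per-channel mean/std features for each window."""
    feats = []
    for s in range(0, len(stream) - win + 1, hop):
        w = stream[s:s + win]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
    return np.asarray(feats)

# Synthetic stand-ins for three wrist-worn modalities:
# 3-axis accelerometer, a few environmental channels, an audio envelope.
n = 60_000
imu = rng.normal(size=(n, 3))
env = rng.normal(size=(n, 4))
audio = rng.normal(size=(n, 1))

win, hop = 200, 100
X = np.hstack([
    window_features(imu, win, hop),
    window_features(env, win, hop),
    window_features(audio, win, hop),
])  # late fusion: concatenate per-modality window features
y = rng.integers(0, 5, size=len(X))  # placeholder activity labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy (random labels, ~chance): {clf.score(X_te, y_te):.2f}")
```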

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT Provides a large, multi-modal dataset to advance research in wearable human activity recognition.

RANK_REASON Publication of a new dataset on arXiv.

Read on arXiv cs.LG →

COVERAGE [2]

  1. arXiv cs.LG TIER_1 · Robin Burchard, Pascal-André Brückner, Marius Bock, Juergen Gall, Kristof Van Laerhoven ·

    HARMES: A Multi-Modal Dataset for Wearable Human Activity Recognition with Motion, Environmental Sensing and Sound

    arXiv:2605.02596v1 · Abstract: With each sensing modality exhibiting inherent strengths and limitations, multi-modal approaches for wearable Human Activity Recognition (HAR) are becoming increasingly relevant -- particularly for recognizing Activities of Daily Li…

  2. arXiv cs.LG TIER_1 · Kristof Van Laerhoven ·

    HARMES: A Multi-Modal Dataset for Wearable Human Activity Recognition with Motion, Environmental Sensing and Sound

    With each sensing modality exhibiting inherent strengths and limitations, multi-modal approaches for wearable Human Activity Recognition (HAR) are becoming increasingly relevant -- particularly for recognizing Activities of Daily Living (ADLs), where individual modalities often p…