Researchers have introduced SignVerse-2M, a large-scale dataset of two million video clips covering more than 25 sign languages. The dataset is pose-native: raw videos are converted into 2D pose sequences with DWPose, so models train on pose geometry rather than appearance. This is intended to improve the robustness of sign language recognition and generation models and make them better suited to real-world, open-scenario applications.
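To make the "pose-native" idea concrete, the sketch below shows what a pose-sequence clip might look like downstream of a DWPose-style extractor and how it could be normalized so models see relative pose geometry rather than absolute pixel coordinates. The array shapes, the keypoint count, and the `normalize_pose_sequence` helper are illustrative assumptions, not the dataset's or DWPose's actual API.

```python
import numpy as np

def normalize_pose_sequence(poses: np.ndarray) -> np.ndarray:
    """Center and scale a (T, K, 2) keypoint sequence.

    T = frames, K = keypoints, last dim = (x, y) image coordinates.
    Centering removes the signer's position in frame; scaling removes
    camera distance, leaving appearance-free pose geometry.
    """
    center = poses.mean(axis=(0, 1), keepdims=True)  # per-clip centroid
    centered = poses - center
    scale = np.abs(centered).max() or 1.0            # avoid divide-by-zero
    return centered / scale

# Hypothetical 3-second clip at 30 fps: 90 frames, 133 whole-body
# keypoints (a DWPose-style count), each an (x, y) pixel coordinate.
clip = np.random.default_rng(0).uniform(0, 512, size=(90, 133, 2))
norm = normalize_pose_sequence(clip)  # values now roughly in [-1, 1]
```

A recognition or generation model would consume such normalized sequences directly, which is what makes the representation robust to lighting, clothing, and background variation.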
Summary written by gemini-2.5-flash-lite from 2 sources.
IMPACT Provides a new, pose-native dataset to advance research in sign language recognition and generation models.
RANK_REASON New academic paper introducing a large-scale dataset for sign language research.