PulseAugur

Large Multimodal Models

PulseAugur coverage of Large Multimodal Models — every cluster mentioning Large Multimodal Models across labs, papers, and developer communities, ranked by signal.

Total · 30d: 11 (11 over 90d)
Releases · 30d: 0 (0 over 90d)
Papers · 30d: 11 (11 over 90d)
RECENT · 3 TOTAL
  1. TOOL · CL_30558

    New FIKA-Bench tests AI knowledge acquisition beyond visual recognition

    Researchers have introduced FIKA-Bench, a benchmark designed to evaluate whether AI systems can acquire knowledge about unfamiliar objects, moving beyond simple visual recognition. The benchmark consists of 31…

  2. RESEARCH · CL_27969

    New benchmarks reveal major gaps in multimodal context learning for LLMs

    Two new benchmarks, MMCL-Bench and Personal-VCL-Bench, have been introduced to evaluate the multimodal context learning capabilities of large language models. MMCL-Bench focuses on learning from visual rules, procedures…

  3. TOOL · CL_28006

    New method enhances LMM spatial reasoning with generated viewpoints

    Researchers have introduced a new paradigm called Thinking with Novel Views (TwNV) to enhance the spatial reasoning capabilities of Large Multimodal Models (LMMs). This approach integrates generative novel-view synthesis…