PulseAugur
ENTITY LVLM

PulseAugur coverage of LVLM — every cluster mentioning LVLM across labs, papers, and developer communities, ranked by signal.

Total · 30d: 16 (16 over 90d)
Releases · 30d: 0 (0 over 90d)
Papers · 30d: 16 (16 over 90d)
TIER MIX · 90D
RECENT · PAGE 1/1 · 7 TOTAL
  1. RESEARCH · CL_18313 ·

    New benchmark evaluates copyright unlearning in Large Vision-Language Models

    Researchers have developed CoVUBench, a new benchmark designed to evaluate the effectiveness of machine unlearning techniques for large vision-language models (LVLMs). This benchmark addresses the challenge of LVLMs mem…

  2. TOOL · CL_15772 ·

    VAUQ framework enhances LVLM self-evaluation by measuring visual evidence dependence

    Researchers have developed VAUQ, a new framework designed to improve the self-evaluation capabilities of Large Vision-Language Models (LVLMs). This method addresses the tendency of LVLMs to hallucinate by explicitly mea…

  3. RESEARCH · CL_14047 ·

    LightKV reduces LVLM KV cache size and computation by compressing vision tokens

    Researchers have developed LightKV, a new method to reduce the GPU memory overhead associated with Large Vision-Language Models (LVLMs). By exploiting redundancy in vision-token embeddings and using prompt-aware guidanc…
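    The summary does not show LightKV's actual mechanism, but the idea of prompt-aware vision-token reduction can be illustrated with a minimal sketch: score each vision token's relevance to the prompt embedding and keep only the top fraction before it enters the KV cache. All names here (`prune_vision_kv`, the scoring rule, the keep ratio) are hypothetical, not the paper's method.

    ```python
    import numpy as np

    def prune_vision_kv(vision_tokens, prompt_embedding, keep_ratio=0.5):
        """Hypothetical sketch: keep the vision tokens most similar to the prompt.

        vision_tokens: (num_tokens, dim) array of vision-token embeddings
        prompt_embedding: (dim,) array summarizing the text prompt
        """
        # Cosine similarity between each vision token and the prompt embedding.
        v = vision_tokens / np.linalg.norm(vision_tokens, axis=1, keepdims=True)
        p = prompt_embedding / np.linalg.norm(prompt_embedding)
        scores = v @ p

        # Keep the top fraction of tokens, preserving their original order
        # so positional structure in the sequence is not scrambled.
        k = max(1, int(len(vision_tokens) * keep_ratio))
        keep = np.sort(np.argsort(scores)[-k:])
        return vision_tokens[keep], keep

    # Example: 576 vision tokens (a 24x24 patch grid), 64-dim embeddings.
    rng = np.random.default_rng(0)
    tokens = rng.standard_normal((576, 64))
    prompt = rng.standard_normal(64)
    pruned, idx = prune_vision_kv(tokens, prompt, keep_ratio=0.25)
    print(pruned.shape)  # (144, 64)
    ```

    Dropping three quarters of the vision tokens before attention shrinks the KV cache and per-step compute proportionally; the open question any such method must answer is how much task accuracy survives the pruning.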

  4. RESEARCH · CL_11391 ·

    Visual text style impacts LVLM descriptions despite correct concept identification

    A new research paper explores how the visual style of text in images affects the descriptions generated by Large Vision-Language Models (LVLMs). The study found that even when LVLMs correctly identify the text's concept…

  5. RESEARCH · CL_08293 ·

    Dynamic Decision Learning framework improves rare disease diagnosis in LVLMs

    Researchers have developed Dynamic Decision Learning (DDL), a novel framework designed to improve the accuracy and reliability of large vision-language models (LVLMs) when diagnosing rare diseases. DDL allows frozen LVL…

  6. RESEARCH · CL_04952 ·

    Sum-of-Checks framework improves LVLM surgical safety assessment

    Researchers have developed a new framework called Sum-of-Checks to improve the reliability and transparency of large vision-language models (LVLMs) in surgical safety assessments. This method breaks down critical safety…

  7. RESEARCH · CL_02088 ·

    VG-CoT: Towards Trustworthy Visual Reasoning via Grounded Chain-of-Thought

    Researchers have introduced VG-CoT, a new dataset designed to improve the trustworthiness of Large Vision-Language Models (LVLMs). This dataset automatically links reasoning steps to specific visual evidence within imag…
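    The summary does not specify VG-CoT's record format; as a rough illustration of what "linking reasoning steps to visual evidence" means as data, a grounded chain-of-thought sample might pair each step with an image region. The schema and field names below are hypothetical.

    ```python
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class GroundedStep:
        text: str                           # one chain-of-thought step
        region: Tuple[int, int, int, int]   # (x, y, w, h) pixel box the step cites

    @dataclass
    class GroundedSample:
        image_path: str
        question: str
        steps: List[GroundedStep]
        answer: str

    # Hypothetical record: every reasoning step points at the evidence it uses,
    # so a verifier can check the step against the cited region rather than
    # trusting free-floating text.
    sample = GroundedSample(
        image_path="example.jpg",
        question="What is the person holding?",
        steps=[
            GroundedStep("A person stands at the counter.", (40, 30, 120, 200)),
            GroundedStep("Their right hand grips a red mug.", (95, 80, 40, 45)),
        ],
        answer="a red mug",
    )
    print(len(sample.steps))  # 2
    ```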