PulseAugur

New benchmarks and frameworks tackle AI agent limitations in website generation and remote sensing tasks

Researchers have introduced InteractWeb-Bench, a new benchmark for evaluating multimodal large language model (MLLM) agents on website generation. The benchmark simulates real-world conditions in which user instructions can be ambiguous or contradictory, a failure mode the authors term 'blind execution.' Experiments on InteractWeb-Bench show that current frontier MLLM-based agents struggle with intent recognition and adaptive interaction in these scenarios. The benchmark provides an interactive environment with Clarify, Implement, Verify, and Submit actions to support iterative refinement.
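The four actions above suggest a simple episode loop in which an agent may ask the user to resolve an ambiguity before committing to code, rather than executing blindly. The sketch below is illustrative only: the action names come from the summary, but the `run_episode` driver, the transcript format, and the scripted policy are hypothetical, not the benchmark's actual API.

```python
from enum import Enum
from dataclasses import dataclass

# The four interaction actions named in the benchmark description.
class Action(Enum):
    CLARIFY = "clarify"      # ask the user to resolve an ambiguity
    IMPLEMENT = "implement"  # write or revise website code
    VERIFY = "verify"        # check the current draft against the spec
    SUBMIT = "submit"        # finalize the website and end the episode

@dataclass
class Turn:
    action: Action
    payload: str = ""

def run_episode(agent_policy, answer_fn, max_turns=8):
    """Drive one episode: the agent sees the transcript so far and picks
    an action; Clarify turns get a user answer appended; Submit ends it."""
    transcript = []
    for _ in range(max_turns):
        turn = agent_policy(transcript)
        transcript.append(("agent", turn))
        if turn.action is Action.CLARIFY:
            transcript.append(("user", answer_fn(turn.payload)))
        elif turn.action is Action.SUBMIT:
            return turn.payload
    return None  # never submitted within the turn budget

# A scripted policy illustrating clarify-before-implement behavior.
def scripted_policy(transcript):
    agent_turns = sum(1 for role, _ in transcript if role == "agent")
    if agent_turns == 0:
        return Turn(Action.CLARIFY, "Should the landing page use a dark theme?")
    if agent_turns == 1:
        return Turn(Action.IMPLEMENT, "<html>draft</html>")
    return Turn(Action.SUBMIT, "<html>draft</html>")

submission = run_episode(scripted_policy, lambda question: "yes, dark")
```

A blind-execution baseline would simply never emit Clarify, which is the failure mode the benchmark is built to expose.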

Summary written by gemini-2.5-flash-lite from 9 sources.

IMPACT New benchmark highlights limitations in current multimodal agents for website generation, indicating a need for improved intent recognition and interaction capabilities.

RANK_REASON This is a research paper introducing a new benchmark for evaluating multimodal models.

Read on arXiv cs.CV →

COVERAGE [9]

  1. arXiv cs.AI TIER_1 · Qiyao Wang, Haoran Hu, Longze Chen, Hongbo Wang, Hamid Alinejad-Rokny, Yuan Lin, Min Yang ·

    InteractWeb-Bench: Can Multimodal Agent Escape Blind Execution in Interactive Website Generation?

    arXiv:2604.27419v1 Announce Type: new Abstract: With the advancement of multimodal large language models (MLLMs) and coding agents, website development has shifted from manual programming to agent-based project-level code synthesis. Existing benchmarks rely on idealized assum…

  2. arXiv cs.CL TIER_1 · Min Yang ·

    InteractWeb-Bench: Can Multimodal Agent Escape Blind Execution in Interactive Website Generation?

    With the advancement of multimodal large language models (MLLMs) and coding agents, website development has shifted from manual programming to agent-based project-level code synthesis. Existing benchmarks rely on idealized assumptions, especially for well-structured, informat…

  3. arXiv cs.CV TIER_1 · Guanchun Wang, Chenxiao Wu, Xiangrong Zhang, Zelin Peng, Jianxun Lai, Tianyang Zhang, Xu Tang ·

    Seeking Consensus: Geometric-Semantic On-the-Fly Recalibration for Open-Vocabulary Remote Sensing Semantic Segmentation

    arXiv:2604.26221v1 Announce Type: new Abstract: Open-vocabulary semantic segmentation (OVSS) in remote sensing images is a promising task that employs textual descriptions for identifying undefined land cover categories. Despite notable advances, existing methods typically employ…

  4. arXiv cs.CV TIER_1 · Hengtong Shen, Li Yan, Hong Xie, Yaxuan Wei, Xinhao Li, Wenfei Shen, Peixian Lv, Fei Tan ·

    Foundation Model-Driven Semantic Change Detection in Remote Sensing Imagery

    arXiv:2602.13780v2 Announce Type: replace Abstract: Remote sensing (RS) change detection is essential for interpreting surface dynamics. Semantic change detection (SCD) further enables pixel-level understanding of multi-class transitions, yet remains sensitive to pseudo-changes i…

  5. arXiv cs.CV TIER_1 · Xu Tang ·

    Seeking Consensus: Geometric-Semantic On-the-Fly Recalibration for Open-Vocabulary Remote Sensing Semantic Segmentation

    Open-vocabulary semantic segmentation (OVSS) in remote sensing images is a promising task that employs textual descriptions for identifying undefined land cover categories. Despite notable advances, existing methods typically employ a static inference paradigm, overlooking the di…

  6. arXiv cs.CV TIER_1 · Swadhin Das, Vivek Yadav ·

    JSSFF: A Joint Structural-Semantic Fusion Framework for Remote Sensing Image Captioning

    arXiv:2604.24031v1 Announce Type: new Abstract: The encoder-decoder framework has become widely popular. In this model, the encoder extracts informative visual features from an input image, and the decoder employs a sequence-to-sequence formulation to generate the corres…

  7. arXiv cs.CV TIER_1 · Yanpei Gong, Beichen Zhang, Hao Wang, Zhaobo Qi, Xinyan Liu, Yuanrong Xu, Ruiyang Gao, Weigang Zhang ·

    STAND: Semantic Anchoring Constraint with Dual-Granularity Disambiguation for Remote Sensing Image Change Captioning

    arXiv:2604.23309v1 Announce Type: new Abstract: Remote sensing image change captioning (RSICC) aims to describe the difference between two remote sensing images. While recent methods have explored video modeling, they largely overlook the inherent ambiguities in viewpoint, scale,…

  8. arXiv cs.CV TIER_1 · Ziyun Chen, Fan Liu, Liang Yao, Chuanyi Zhang, Yuye Ma, Wei Zhou ·

    Evaluating Remote Sensing Image Captions Beyond Metric Biases

    arXiv:2604.22855v1 Announce Type: new Abstract: The core objective of image captioning is to achieve lossless semantic compression from visual signals into textual modalities. However, the reliance on manually curated reference texts for evaluation essentially forces models to mi…

  9. arXiv cs.CV TIER_1 · Vivek Yadav ·

    JSSFF: A Joint Structural-Semantic Fusion Framework for Remote Sensing Image Captioning

    The encoder-decoder framework has become widely popular. In this model, the encoder extracts informative visual features from an input image, and the decoder employs a sequence-to-sequence formulation to generate the corresponding textual description from these features.…