Researchers have developed SIEVES, a novel method for improving the reliability of multimodal large language models (MLLMs) in out-of-distribution scenarios. SIEVES works by learning to estimate the quality of visual evidence provided by a reasoning model, enabling selective prediction. This approach significantly enhances model coverage, increasing it by up to three times on challenging benchmarks. Notably, SIEVES can be applied to proprietary models like Gemini-3-Pro without requiring access to their internal weights or logits.
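The selective-prediction idea the summary describes can be sketched as a simple gating step: answer only when an estimated evidence-quality score clears a threshold, otherwise abstain. This is a minimal illustration, not the SIEVES implementation; the function names, score range, and threshold value are assumptions.

```python
# Hypothetical sketch of selective prediction gated on an
# evidence-quality estimate (names and threshold are assumed).

def selective_predict(answer: str, evidence_quality: float,
                      threshold: float = 0.7):
    """Return the model's answer only when the estimated quality of
    its visual evidence clears the threshold; otherwise abstain."""
    if evidence_quality >= threshold:
        return answer
    return None  # abstain rather than emit an unreliable prediction

# High-quality evidence passes through; low-quality is filtered out.
print(selective_predict("a red traffic light", 0.9))
print(selective_predict("a red traffic light", 0.3))
```

A quality estimator that works from a model's textual reasoning alone is what would let this gate wrap proprietary APIs, since no logits or weights are needed. Coverage is then the fraction of inputs on which the gate chooses to answer.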
Summary written by gemini-2.5-flash-lite from 3 sources.
IMPACT Enhances MLLM reliability in real-world scenarios by improving selective prediction and generalization to unseen data.
RANK_REASON Academic paper introducing a new method for multimodal LLM generalization.