Gemma 3 4B
PulseAugur coverage of Gemma 3 4B — every cluster mentioning Gemma 3 4B across labs, papers, and developer communities, ranked by signal.
-
LLMs show significant bias in conflict monitoring, not ready for deployment
A new paper evaluates several large language models for their suitability in conflict monitoring tasks in West Africa. The study found that open-weight models like Gemma 3 4B and Llama 3.2 3B exhibit significant biases,…
-
New method debiases LLMs at decoding time, improving fairness without model retraining
Researchers have developed a novel method to mitigate biases in large language models during the decoding phase, without altering the model's weights. This approach uses a separate Process Reward Model (PRM) to score to…
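The general idea of reward-guided decoding can be sketched without the paper's specifics: rerank candidate continuations by mixing the LM's own log-probability with a separate reward score, leaving the model's weights untouched. Everything below (the `debiased_pick` helper, the weighting parameter `alpha`, and the toy scores) is an illustrative assumption, not the paper's actual method or API.

```python
# Minimal sketch of decoding-time debiasing: an external reward model
# rescores candidates and the final pick maximizes a weighted mix of
# LM log-probability and reward. Names and weights are illustrative.

def debiased_pick(candidates, lm_logprob, reward_score, alpha=0.5):
    """Pick the candidate maximizing a blend of the LM's log-prob and
    an external (e.g. fairness) reward; the base model is unchanged."""
    def score(c):
        return (1 - alpha) * lm_logprob(c) + alpha * reward_score(c)
    return max(candidates, key=score)

# Toy stand-ins for the LM and the reward model.
lm = {"a": -0.1, "b": -0.5}       # the LM alone prefers "a"
reward = {"a": -2.0, "b": 0.0}    # the reward model penalizes "a"

best = debiased_pick(["a", "b"], lm.get, reward.get, alpha=0.7)
```

With a high enough `alpha`, the reward term overrides the LM's preference, which is the sense in which this "debiases at decoding time" rather than by retraining.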
-
Confidence training for Gemma 3 4B shows mixed results, improves accuracy post-hoc
A study on the Gemma 3 4B model investigated methods to improve its verbal confidence in responses. Initial attempts using a filtered dataset for confidence-conditioned supervised fine-tuning (CSFT) yielded negative res…
-
New RAG methods for medical QA show mixed results, with a multimodal approach outperforming fine-tuning at larger scales
Researchers have developed MED-VRAG, a novel iterative multimodal retrieval-augmented generation framework that processes medical document page images, including tables and figures, rather than just text. This system ac…
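The iterative loop behind such a framework can be sketched in the abstract: retrieve document page images, generate an answer, and refine the query until the generator signals it has enough evidence. The `iterative_rag` helper and both stand-in functions below are hypothetical assumptions for illustration, not MED-VRAG's actual interface.

```python
# Illustrative iterative multimodal RAG loop: each round retrieves more
# page-level evidence (which could include tables and figures), then
# asks the generator whether the accumulated context suffices.

def iterative_rag(question, retrieve, generate, max_rounds=3):
    """Alternate retrieval and generation, refining the query each
    round, until the generator reports it is done."""
    query, context = question, []
    answer = None
    for _ in range(max_rounds):
        context += retrieve(query)  # e.g. document page images
        answer, done, query = generate(question, context)
        if done:
            break
    return answer

# Toy stand-ins: a retriever keyed by the query string and a generator
# that declares itself done once two pages are in context.
retrieve = lambda q: [f"page:{q}"]
generate = lambda q, ctx: ("|".join(ctx), len(ctx) >= 2, "refined")

answer = iterative_rag("Q", retrieve, generate)
```

The design point is that retrieval operates on page images rather than extracted text, so evidence locked in tables and figures stays available to the generator at each round.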