Gemma 2B
PulseAugur coverage of Gemma 2B — every cluster mentioning Gemma 2B across labs, papers, and developer communities, ranked by signal.
Google I/O: Gemini 1.5 Pro, Gemma 2, and Genkit framework debut
Google's I/O 2024 introduced a comprehensive AI developer stack, highlighted by the Gemini 1.5 Pro model now available with a 2 million token context window. This massive context capability promises to simplify complex …
-
PERSA pipeline uses RLHF to align LLM feedback with instructor style
Researchers have developed PERSA, a novel approach using Reinforcement Learning from Human Feedback (RLHF) to adapt large language models for generating personalized educational feedback. This method specifically target…
-
Researchers develop SNMF for interpretable LLM feature analysis
Researchers have developed a new method for understanding the internal workings of large language models by decomposing MLP activations. This technique, semi-nonnegative matrix factorization (SNMF), identifies interpret…
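The summary above names semi-nonnegative matrix factorization as the decomposition applied to MLP activations. As a rough illustration only, here is a minimal sketch of semi-NMF in the general Ding-style formulation (X ≈ F Gᵀ with nonnegative G and unconstrained F), using standard multiplicative updates; the function name, parameters, and update schedule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def semi_nmf(X, k, iters=200, seed=0):
    """Illustrative semi-NMF sketch: X (n x m) ≈ F (n x k) @ G.T (k x m),
    with G constrained nonnegative and F left unconstrained."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((m, k))  # nonnegative coefficient matrix
    pos = lambda A: (np.abs(A) + A) / 2  # elementwise positive part
    neg = lambda A: (np.abs(A) - A) / 2  # elementwise negative part
    for _ in range(iters):
        # F has a closed-form least-squares solution given G
        F = X @ G @ np.linalg.pinv(G.T @ G)
        # multiplicative update keeps G nonnegative
        XtF, FtF = X.T @ F, F.T @ F
        num = pos(XtF) + G @ neg(FtF)
        den = neg(XtF) + G @ pos(FtF) + 1e-12
        G *= np.sqrt(num / den)
    return F, G
```

In the interpretability setting the summary describes, the rows of X would be activation vectors and the nonnegative columns of G would index candidate interpretable features; that mapping is an assumption drawn from the general method, not from the paper's text.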
-
AI safety research probes jailbreak success and emergent misalignment in LLMs
Two new research papers explore the underlying causes of AI safety failures in large language models. One paper introduces LOCA, a method to provide local, causal explanations for why specific jailbreak prompts succeed,…
-
New research identifies 'override gap' as key failure in LLM adaptation
Researchers have identified a knowledge conflict failure in hypernetwork-based methods for adapting large language models, where accuracy drops significantly when new information contradicts pre-existing knowledge. This…
-
Researchers develop new methods to debias and improve reward models for LLMs
Researchers have developed new methods to improve the reliability and interpretability of reward models (RMs) used in aligning large language models (LLMs). One approach introduces a causally motivated intervention tech…
-
Google DeepMind releases T5Gemma encoder-decoder LLMs adapted from Gemma
Google DeepMind has introduced T5Gemma, a new family of encoder-decoder large language models derived from their existing Gemma 2 models. This adaptation technique allows for flexible combinations of encoder and decoder…