few-shot learning
PulseAugur coverage of few-shot learning: every cluster that mentions the topic across labs, papers, and developer communities, ranked by signal.
LoRA emerges as a viable parametric knowledge memory for LLMs, complementing RAG and ICL
A new paper explores the use of Low-Rank Adaptation (LoRA) as a method for continuously updating knowledge in large language models. The research empirically analyzes LoRA's capacity, composability, and optimization for…
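The paper's training recipe isn't in this excerpt, but the LoRA mechanism it builds on is standard: freeze the pretrained weight matrix and learn a small low-rank delta that can hold new knowledge. A minimal PyTorch sketch (layer sizes, rank, and scaling are illustrative, not the paper's settings):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer with a trainable low-rank update: W + (alpha/r) * B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

# Only A and B are trained, so a knowledge update lives in a small, swappable delta.
layer = LoRALinear(nn.Linear(768, 768))
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # trainable params
```

Because the delta is tiny relative to the base model, it can be merged, swapped, or composed per knowledge update, which is what makes LoRA plausible as a parametric memory alongside RAG and ICL.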
New research explains how transformers perform in-context learning via gradient descent
Two new arXiv papers explore the theoretical underpinnings of in-context learning (ICL) in transformers. One paper demonstrates how transformers can perform in-context logistic regression by implicitly executing normali…
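The excerpt cuts off before the construction, but the best-known version of this correspondence is easy to check numerically: for linear regression, the prediction after one gradient-descent step from zero weights equals an unnormalized linear-attention readout over the in-context examples. A sketch (the logistic-regression and normalization details from the papers are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lr = 32, 4, 0.1
X = rng.normal(size=(n, d))   # in-context examples
y = X @ rng.normal(size=d)    # labels from a hidden linear teacher
x_q = rng.normal(size=d)      # query token

# One explicit gradient-descent step on the squared loss, starting from w = 0.
w1 = (lr / n) * X.T @ y
pred_gd = w1 @ x_q

# The same prediction written as (unnormalized) linear attention:
# scores are inner products x_i . x_q, values are the labels y_i.
pred_attn = (lr / n) * np.sum((X @ x_q) * y)

print(np.isclose(pred_gd, pred_attn))  # True: one GD step == one linear-attention readout
```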
Researchers Compare In-Context and Agentic Learning Under Constraints
Researchers explored the differences between in-context learning and agentic learning, focusing on how adaptive queries impact performance under realizability constraints. They found that adaptivity generally does not h…
Researchers explore weight decay, in-context learning, and acceleration for Transformer models
Researchers have developed several new methods to improve the efficiency and theoretical understanding of Transformer models. One paper provides a functional-analytic characterization of weight decay, demonstrating its …
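The functional-analytic result itself can't be reconstructed from this summary, but the object it studies is the ordinary weight-decay update. For plain SGD, decaying the weights is identical to adding an L2 penalty to the loss; that equivalence breaks under adaptive optimizers, which is why decoupled variants like AdamW exist. A quick numerical check:

```python
import numpy as np

# For plain SGD, weight decay is equivalent to an L2 penalty on the loss:
# w <- w - lr * (grad + lam * w)   ==   w <- (1 - lr * lam) * w - lr * grad
rng = np.random.default_rng(1)
w, lr, lam = rng.normal(size=5), 0.1, 0.01
grad = rng.normal(size=5)                 # gradient of the data loss at w

w_l2    = w - lr * (grad + lam * w)       # L2 penalty folded into the gradient
w_decay = (1 - lr * lam) * w - lr * grad  # multiplicative shrinkage ("decay")

print(np.allclose(w_l2, w_decay))         # True for SGD; false for Adam-style updates
```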
Researchers analyze attention heads to understand in-context learning in LLMs
Researchers have developed a new framework called Task Subspace Logit Attribution (TSLA) to analyze how large language models perform in-context learning. This framework identifies specific attention heads responsible for…
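TSLA's task-subspace machinery isn't described in the excerpt, but the head-level logit attribution it refines is simple to sketch: each attention head writes an additive vector into the residual stream, so its contribution to any label logit is a linear projection through the unembedding. An illustrative NumPy version (dimensions invented; the TSLA-specific subspace step is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)
n_heads, d_model, n_labels = 12, 64, 4

head_out = rng.normal(size=(n_heads, d_model))  # per-head output at the final position
W_U = rng.normal(size=(d_model, n_labels))      # unembedding restricted to the label tokens

contrib = head_out @ W_U                        # (n_heads, n_labels) logit contributions
total_logits = head_out.sum(axis=0) @ W_U       # the attention part of the final logits

print(np.allclose(contrib.sum(axis=0), total_logits))  # contributions add up exactly
top_head = contrib[:, 0].argmax()               # head pushing hardest toward label 0
print(f"head {top_head} contributes most to label 0")
```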
Prompt engineering guide distills 58 techniques for life sciences
A new guide distills 58 prompt engineering techniques into six core methods for life sciences researchers. It focuses on zero-shot, few-shot, thought generation, ensembling, self-criticism, and decomposition, providing …
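Of the six methods, few-shot prompting is the easiest to show concretely: prepend a handful of worked examples before the real query. A sketch in Python (the sentences and labels are invented for illustration, not taken from the guide):

```python
# A minimal few-shot prompt: worked examples first, then the actual query.
examples = [
    ("TP53 mutations are enriched in the treated cohort.", "genomics"),
    ("The assay showed dose-dependent inhibition of kinase activity.", "pharmacology"),
]
query = "Western blot confirmed overexpression of the target protein."

prompt = "Classify each sentence into a life-sciences subfield.\n\n"
for text, label in examples:
    prompt += f"Sentence: {text}\nLabel: {label}\n\n"
prompt += f"Sentence: {query}\nLabel:"

print(prompt)  # send to any chat/completion API
```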
LLMs show promise in scientific text categorization with prompt chaining
Researchers have explored the use of Large Language Models (LLMs) for automatically categorizing scientific texts using prompt engineering techniques. Their study evaluated In-Context Learning (ICL) and Prompt Chaining …
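The study's exact chain isn't given, but the general pattern of prompt chaining is to feed one prompt's output into the next. A sketch with a placeholder client (the `call_llm` function and the summarize-then-classify decomposition are assumptions for illustration, not the paper's pipeline):

```python
# Prompt chaining: the output of one prompt becomes the input to the next.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM client here")

def categorize(abstract: str, categories: list[str]) -> str:
    # Step 1: condense the text so the classification prompt stays focused.
    summary = call_llm(
        f"Summarize the key topic of this abstract in one sentence:\n{abstract}"
    )
    # Step 2: classify using the intermediate output from step 1.
    return call_llm(
        f"Given the topic summary:\n{summary}\n"
        f"Choose the single best category from {categories}. Answer with the category only."
    )
```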
Tabular foundation models enable real-time knowledge tracing with 53x speedup
Researchers have introduced a new approach to knowledge tracing called "live knowledge tracing," which utilizes tabular foundation models (TFMs) for real-time adaptation. This method bypasses traditional offline training…
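The excerpt doesn't name the TFM or the features used, but the "live" idea is that a tabular foundation model predicts from the student's running history in-context, with no offline training loop. A sketch using TabPFN as a stand-in model (the feature layout and the cold-start fallback are invented):

```python
import numpy as np
from tabpfn import TabPFNClassifier  # pip install tabpfn

history_X = []  # per-attempt features, e.g. [skill_id, attempt_no, seconds_taken]
history_y = []  # 1 if the student answered correctly

def predict_next(next_features):
    if len(set(history_y)) < 2:
        return 0.5  # cold start: no signal from a single-class history
    clf = TabPFNClassifier()
    clf.fit(np.array(history_X), np.array(history_y))  # in-context, no gradient training
    return clf.predict_proba(np.array([next_features]))[0, 1]

history_X += [[3, 1, 42.0], [3, 2, 30.5]]
history_y += [0, 1]
print(predict_next([3, 3, 25.0]))  # P(correct) on the student's next attempt
```

Because the TFM does its "fitting" in a forward pass over the history, each new attempt updates the prediction immediately, which is where the reported speedup over offline retraining comes from.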
Transformer research probes security flaws, training dynamics, and in-context learning limits
Researchers have identified vulnerabilities in the shuffling defense mechanism used to secure Transformer models during inference, demonstrating an attack that can extract model weights by aligning permuted activations…
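The attack details stop at this sentence, so the following is only a toy illustration of why shuffling alone can leak: if an attacker can predict what the unpermuted activations should look like, correlating them with the observed shuffled activations recovers the permutation. Everything below (the ReLU layer, the attacker knowing candidate weights up to order) is an assumption for the sketch, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(3)
d_in, d_hid, n = 8, 16, 200

W = rng.normal(size=(d_hid, d_in))        # secret first-layer weights
X = rng.normal(size=(n, d_in))            # attacker-chosen inputs
perm = rng.permutation(d_hid)             # the shuffling "defense"
leaked = np.maximum(X @ W.T, 0)[:, perm]  # observed permuted ReLU activations

W_guess = W                               # candidate weights known only up to neuron order
pred = np.maximum(X @ W_guess.T, 0)       # predicted unpermuted activations

# Align each leaked column to the best-correlated predicted column.
corr = pred.T @ leaked                    # (d_hid, d_hid) similarity matrix
recovered = corr.argmax(axis=0)           # leaked column i came from neuron recovered[i]

print(np.array_equal(recovered, perm))    # True: the shuffle is undone
```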