Mixture of Experts
PulseAugur coverage of mixture of experts — every cluster mentioning mixture of experts across labs, papers, and developer communities, ranked by signal.
- 2026-05-11 research_milestone: A new paper proposes an enhanced Mixture-of-Experts framework for faster training of time series forecasting models.
- New simulator stress-tests AI emotional support chatbots with diverse user profiles
  Researchers have developed a new controllable simulator to better evaluate emotional support chatbots. This simulator addresses limitations in current systems by incorporating diverse psychological and linguistic featur…
- SMoES improves MoE-VLM efficiency and effectiveness with soft modality guidance
  Researchers have introduced SMoES, a novel approach for guiding expert routing in Mixture-of-Experts (MoE) vision-language models (VLMs). This method utilizes dynamic soft modality scores to account for layer-dependent …
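The snippet only hints at the routing mechanism. As a rough illustration of the general idea, the following is a hypothetical sketch in which a per-token soft modality score biases a standard learned top-k router, with a per-layer gate controlling how strongly the bias applies; the class name, blending rule, and parameters are assumptions for illustration, not details taken from the SMoES paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftModalityRouter(nn.Module):
    """Hypothetical sketch: a soft modality score biases the usual learned
    top-k router logits, with a learnable per-layer gate deciding how strongly
    the bias applies at this layer. Illustrative only, not the SMoES code."""

    def __init__(self, d_model: int, num_experts: int, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)
        # Per-modality preference over experts (row 0 = text, row 1 = image).
        self.modality_bias = nn.Parameter(torch.zeros(2, num_experts))
        # Per-layer gate on how much the modality bias influences routing.
        self.layer_gate = nn.Parameter(torch.zeros(1))

    def forward(self, hidden: torch.Tensor, modality_score: torch.Tensor):
        # hidden: (tokens, d_model); modality_score: (tokens,) in [0, 1],
        # a soft "how image-like is this token" signal.
        logits = self.router(hidden)
        soft_bias = (
            (1 - modality_score)[:, None] * self.modality_bias[0]
            + modality_score[:, None] * self.modality_bias[1]
        )
        logits = logits + torch.sigmoid(self.layer_gate) * soft_bias
        weights, experts = torch.topk(F.softmax(logits, dim=-1), self.top_k, dim=-1)
        return weights, experts  # dispatch tokens to `experts`, combine with `weights`
```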
- Researchers propose new methods to decouple model parameters from computation
  Researchers have introduced novel methods to decouple model size from computational cost in deep learning. One approach, 'hash layers,' allows for larger models with fewer computational operations by using hashing for e…
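Hash layers route each token to a fixed expert chosen by hashing, so the router itself adds no learned parameters and essentially no routing compute, while total parameter count grows with the number of experts. A minimal sketch of that idea, assuming a random token-id-to-expert table as a stand-in for the hash and simple feed-forward experts (names and details are illustrative, not the paper's implementation):

```python
import torch
import torch.nn as nn

class HashRoutedFFN(nn.Module):
    """Minimal sketch of a hash-layer style MoE block: each token is sent to
    exactly one expert chosen by a fixed mapping of its token id, so routing
    needs no learned parameters and no extra computation."""

    def __init__(self, vocab_size: int, d_model: int, d_ff: int,
                 num_experts: int, seed: int = 0):
        super().__init__()
        self.num_experts = num_experts
        # Fixed random assignment of token ids to experts (stand-in for a hash).
        g = torch.Generator().manual_seed(seed)
        self.register_buffer(
            "token_to_expert",
            torch.randint(num_experts, (vocab_size,), generator=g),
        )
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, token_ids: torch.Tensor, hidden: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq); hidden: (batch, seq, d_model)
        assignment = self.token_to_expert[token_ids]        # (batch, seq)
        out = torch.zeros_like(hidden)
        for e in range(self.num_experts):
            mask = assignment == e
            if mask.any():
                out[mask] = self.experts[e](hidden[mask])   # only tokens hashed to expert e
        return out
```

Per-token compute stays that of a single expert no matter how many experts (and hence parameters) are added, which is the decoupling the snippet describes.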
- DeepSeek previews new AI model that ‘closes the gap’ with frontier models
  DeepSeek has released its V4 AI model, featuring two versions: V4-Pro and V4-Flash. These models boast a 1 million token context window and utilize a mixture-of-experts architecture for efficiency. While DeepSeek V4 aim…
- AI research explores functorial formulations, causal learning, and adaptive model merging
  Researchers have developed a multi-fidelity surrogate modeling framework to predict wind loads on container ships, combining empirical data with CFD simulations for improved accuracy and reduced computational cost. Anot…
- New MoE Architectures Enhance Efficiency and Performance
  Researchers are developing advanced techniques to improve Mixture-of-Experts (MoE) models, particularly addressing challenges in domain transitions and inference efficiency. One approach, inspired by the Free Energy Pri…
- Cohere details how MoE models boost speculative decoding effectiveness
  Cohere has released a technical report detailing how Mixture-of-Experts (MoE) models can enhance speculative decoding. Contrary to initial expectations, the research indicates that MoE architectures actually improve the…
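The report's findings aren't reproduced in the snippet, but the mechanism it studies is the standard draft-and-verify loop. Below is a minimal, greedy-acceptance sketch of one speculative decoding step, where `target_model` would be the large MoE and `draft_model` a cheap proposal model; both callables and all names are assumptions for illustration.

```python
import torch

@torch.no_grad()
def speculative_decode_step(draft_model, target_model,
                            prefix: torch.Tensor, k: int = 4) -> torch.Tensor:
    """Minimal sketch of one draft-and-verify step (greedy acceptance for
    simplicity; real systems use a probabilistic accept/reject rule).
    `draft_model` and `target_model` are assumed to map a (1, seq) tensor of
    token ids to (1, seq, vocab) logits; the target would be the larger MoE."""
    # 1. The cheap draft model proposes k tokens autoregressively.
    draft = prefix
    for _ in range(k):
        next_tok = draft_model(draft)[:, -1].argmax(dim=-1, keepdim=True)
        draft = torch.cat([draft, next_tok], dim=-1)

    # 2. The target model scores all k proposals in a single parallel pass.
    target_logits = target_model(draft)[:, prefix.shape[1] - 1:-1]  # (1, k, vocab)
    target_choice = target_logits.argmax(dim=-1)                    # (1, k)
    proposed = draft[:, prefix.shape[1]:]                           # (1, k)

    # 3. Accept the longest prefix of proposals the target agrees with, then
    #    append the target's own token at the first disagreement (nothing
    #    extra is appended here if all k proposals are accepted).
    agree = (target_choice == proposed).squeeze(0).long()
    n_accept = int(agree.cumprod(dim=0).sum().item())
    accepted = proposed[:, :n_accept]
    correction = target_choice[:, n_accept:n_accept + 1]
    return torch.cat([prefix, accepted, correction], dim=-1)
```

A production implementation would add KV caching and the probabilistic acceptance rule; the sketch only shows the accept-the-agreeing-prefix structure that the Cohere report builds on.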
- DeepSeek v3 leads open-weight models, Baseten enables mission-critical inference
  DeepSeek v3, a new 671B parameter Mixture-of-Experts model, has been released and is currently the top-performing open-weights model available. Serving such large models presents significant challenges, but inference st…
- Google, DeepSeek, and arXiv papers explore agent learning and memory
  DeepSeek has released two new open-weight models, V4-Pro and V4-Flash, featuring a 1 million token context window and Mixture of Experts architecture. These models are significantly larger than previous DeepSeek release…