Qwen 3.5
PulseAugur coverage of Qwen 3.5 — every cluster mentioning Qwen 3.5 across labs, papers, and developer communities, ranked by signal.
-
Qwen 3.5 leads local LLM benchmarks after switch to llama.cpp
A technical blog post details a shift from using Ollama to llama.cpp for running large language models locally. The author found that Ollama, while user-friendly, introduced an abstraction layer that potentially skewed …
-
New MSI metric reveals nuanced bias in LLMs, with distillation reintroducing bias
Researchers have developed a new metric, the Moral Sensitivity Index (MSI), to evaluate contextual bias in large language models. This index quantifies the probability of biased output across a seven-tier stress test, m…
-
LLMs generate privacy-safe synthetic clinical reports for data augmentation
Researchers have developed a new evaluation framework to assess the quality of synthetic clinical data generated by Large Language Models (LLMs). The framework measures semantic fidelity, lexical diversity, and privacy …
-
DeepSeek releases V4, an open-source model rivaling top closed-source AI
Chinese AI firm DeepSeek has released V4, a new flagship model that offers improved efficiency and longer context windows. The model is open-source and comes in two versions: V4-Pro for complex tasks and V4-Flash for sp…
-
Google's Gemma 4 AI models now run offline on iPhones
Google's Gemma 4 models can now run directly on iPhones, enabling full offline AI inference. This development signifies a shift towards on-device AI, with smaller variants like E2B and E4B optimized for mobile efficienc…
-
Tianshu Zhixin cuts inference chip prices to gain market share amid revenue concerns
Chinese AI chip designer Tianshu Zhixin reported 10.34 billion yuan in revenue for 2025, a 91.6% year-over-year increase, though this fell short of market expectations. The company's training chip series, "Tianhe," rema…
-
Google's Gemma 4 26B model runs locally with LM Studio's new headless CLI
Google's Gemma 4 model family, particularly the 26B-A4B variant, is now accessible for local inference on consumer hardware like MacBooks. This mixture-of-experts model activates only a fraction of its parameters per in…
-
Chinese AI Labs Release Frontier Models Qwen 3.5, GLM 5, and MiniMax 2.5
Several Chinese AI labs have released new flagship open-weight models, including Qwen 3.5, GLM 5, and MiniMax 2.5. These releases represent a significant push at the frontier of open-weight AI development by these labs. …
-
AI firms launch new coding models and features, boosting efficiency and speed
Alibaba has released a new series of Qwen 3.5 models, emphasizing efficiency and performance over sheer parameter count, with notable gains in context length and tool integration. OpenAI has updated its Responses API wi…