PulseAugur
research

Qwen 3.6 35B A3B outperforms the 27B on coding tasks, with a roughly 8x speed boost

A user on Reddit's r/LocalLLaMA shared a benchmark comparing two versions of the Qwen 3.6 model on a MacBook Pro with an M5 Pro chip and 64GB of RAM. The 35B A3B model at 4-bit quantization significantly outperformed the 27B UD model at 6-bit quantization in both speed and coding quality. Despite its quantized weights occupying less RAM, the 35B model was roughly 8 times faster and achieved a higher overall score on a four-task coding benchmark.
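The roughly 8x figure is a throughput ratio: decoded tokens per second of one model over the other. A minimal sketch of how such a comparison can be measured locally; the `measure_tps` helper, the `generate` callable, and all numbers here are illustrative assumptions, not details from the post:

```python
import time

def measure_tps(generate, prompt, n_tokens):
    """Time one generation call and return decode throughput in tokens/sec.

    `generate` is a hypothetical stand-in for whatever local-LLM backend
    is in use (e.g. llama.cpp bindings); it should produce `n_tokens`
    tokens for `prompt`.
    """
    start = time.perf_counter()
    generate(prompt, n_tokens)
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed

def speedup(tps_fast, tps_slow):
    """Throughput ratio between two models, e.g. the ~8x reported here."""
    return tps_fast / tps_slow

# Illustrative placeholder figures (not from the post): a model decoding
# at 80 tok/s vs one at 10 tok/s gives an 8x speedup.
print(speedup(80.0, 10.0))
```

Running each model on the same prompts and comparing `measure_tps` results is enough to reproduce this kind of user benchmark; averaging over several runs reduces timing noise.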

Summary written by gemini-2.5-flash-lite from 1 source.

IMPACT Provides real-world performance data for running local LLMs on Apple Silicon, aiding hardware and model selection for users.

RANK_REASON User-generated benchmark comparing two model versions on specific hardware.


COVERAGE [1]

  1. r/LocalLLaMA TIER_1 · /u/skyyyy007

    Qwen 3.6 35b a3b Q4 vs qwen 3.6 27b q6, on m5 pro 64gb

    Tried to test the two versions of models on my own M5 Pro 64GB; curated the results with Claude. Not an expert, so settings/config might not be the best. Do share what results or improvements can be attempted. Test prompts were generated in claud…