PulseAugur

User details Qwen 3.6 35B-A3B model setup for coding on M2 MacBook Pro

A user has successfully configured the Qwen 3.6 35B-A3B model to run locally on a 32GB RAM M2 MacBook Pro for coding tasks. The setup involves building llama.cpp from source and downloading the model weights and a vision adapter file from Hugging Face. The user provides detailed instructions and command-line arguments for running the model, and emphasizes closing other applications to stay within the machine's memory constraints.
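The workflow the summary describes can be sketched as shell commands. This is a hedged sketch, not the poster's exact commands: the build steps follow llama.cpp's documented CMake flow, but the Hugging Face repository name, GGUF filenames, quantization, and runtime flags below are placeholders and assumptions — check the linked post and the model card for the real values.

```shell
# Build llama.cpp from source (Metal acceleration is enabled by default
# on Apple Silicon builds).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release -j

# Download the model weights and the vision (mmproj) adapter from
# Hugging Face. Repo and file names are placeholders, not from the post.
huggingface-cli download <org>/<qwen-gguf-repo> \
  <model-quant>.gguf <mmproj-file>.gguf --local-dir models

# Run the model. On a 32GB machine, close other applications first;
# context size (-c) and GPU offload layers (-ngl) trade memory for
# capability, so the values here are illustrative only.
./build/bin/llama-server \
  -m models/<model-quant>.gguf \
  --mmproj models/<mmproj-file>.gguf \
  -c 16384 -ngl 99
```

Once `llama-server` is up it exposes an OpenAI-compatible HTTP endpoint on localhost, which is how most coding tools would connect to a locally hosted model.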

Summary written by gemini-2.5-flash-lite from 1 source. How we write summaries →

IMPACT Enables local execution of a capable coding LLM on consumer-grade hardware, reducing reliance on cloud services.

RANK_REASON User-provided field report on running a specific LLM locally on consumer hardware.

Read on r/LocalLLaMA →


COVERAGE [1]

  1. r/LocalLLaMA TIER_1 · /u/boutell ·

    Field report: coding with Qwen 3.6 35B-A3B on an M2 Macbook Pro with 32GB RAM

    https://www.reddit.com/r/LocalLLaMA/comments/1svdep5/field_report_coding_with_qwen_36_35ba3b_on_an_m2/ →