PulseAugur

16GB VRAM

PulseAugur coverage of 16GB VRAM — every cluster mentioning 16GB VRAM across labs, papers, and developer communities, ranked by signal.

Total · 30d: 1 (1 over 90d)
Releases · 30d: 0 (0 over 90d)
Papers · 30d: 0 (0 over 90d)
RECENT · 1 TOTAL
  1. RESEARCH · CL_03569

    Quantized Qwen3.6-27B model achieves 100k context on 16GB VRAM

    A user on Reddit's r/LocalLLaMA has detailed a method for running the Qwen3.6-27B model on a system with 16GB of VRAM, achieving a context length of 100,000 tokens. The process involves creating a custom GGUF quantization…
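To see why aggressive quantization is needed for a setup like this, a rough back-of-the-envelope calculation of KV-cache size is useful. The sketch below uses assumed architecture parameters (layer count, GQA head count, head dimension are illustrative, not published Qwen3.6-27B specs); it shows that an fp16 KV cache at 100k tokens alone can exceed 16GB, before the weights are even loaded:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem):
    """Approximate KV-cache size: K and V tensors, one pair per layer."""
    # Each layer stores ctx_len x n_kv_heads x head_dim elements for K, same for V.
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

# Illustrative GQA configuration (assumed values for a ~27B model):
CFG = dict(n_layers=48, n_kv_heads=8, head_dim=128, ctx_len=100_000)

fp16 = kv_cache_bytes(**CFG, bytes_per_elem=2)  # 16-bit cache
q8   = kv_cache_bytes(**CFG, bytes_per_elem=1)  # 8-bit quantized cache

print(f"fp16 KV cache: {fp16 / 2**30:.1f} GiB")  # ~18.3 GiB, over budget on its own
print(f"q8 KV cache:   {q8 / 2**30:.1f} GiB")    # ~9.2 GiB, leaves room for weights
```

Under these assumptions the fp16 cache alone is ~18 GiB, which is why such setups typically combine a low-bit weight quantization with a quantized KV cache to fit both into 16GB.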