PulseAugur

GGML_TYPE_MXFP4

PulseAugur coverage of GGML_TYPE_MXFP4 — every cluster mentioning GGML_TYPE_MXFP4 across labs, papers, and developer communities, ranked by signal.

Total · 30d: 1 (1 over 90d)
Releases · 30d: 0 (0 over 90d)
Papers · 30d: 0 (0 over 90d)
RECENT · PAGE 1/1 · 1 TOTAL
  1. RESEARCH · CL_03577 ·

    llama.cpp and ik_llama.cpp add FP4 inference support for VRAM savings

    The llama.cpp and ik_llama.cpp projects have both integrated support for FP4 (4-bit floating-point) inference, a significant advancement for model quantization. llama.cpp now includes NVFP4, an Nvidia-specific format, w…
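GGML_TYPE_MXFP4 refers to the MXFP4 format from the OCP Microscaling spec: blocks of 32 FP4 (E2M1) elements sharing one 8-bit power-of-two scale (E8M0). A minimal decoding sketch, assuming the standard E2M1 value table and E8M0 bias of 127; the function names are illustrative, not llama.cpp's actual API:

```python
# Hedged sketch of MXFP4 (OCP Microscaling) decoding.
# Each block: 32 FP4 (E2M1) nibbles + one shared E8M0 scale byte.
# Names below are illustrative, not the real llama.cpp internals.

# All 16 E2M1 codes: 1 sign bit, 2 exponent bits (bias 1), 1 mantissa bit.
E2M1_TABLE = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0,
              -0.0, -0.5, -1.0, -1.5, -2.0, -3.0, -4.0, -6.0]

def decode_e8m0(scale_byte: int) -> float:
    """Shared block scale: an 8-bit biased exponent, value = 2**(e - 127)."""
    return 2.0 ** (scale_byte - 127)

def decode_mxfp4_block(scale_byte: int, nibbles: list[int]) -> list[float]:
    """Decode one MXFP4 block (up to 32 FP4 nibbles) to Python floats."""
    scale = decode_e8m0(scale_byte)
    return [E2M1_TABLE[n & 0xF] * scale for n in nibbles]

# Example: scale byte 128 -> 2**1; codes 3 (=1.5) and 13 (=-3.0)
vals = decode_mxfp4_block(128, [3, 13])
# vals == [3.0, -6.0]
```

The VRAM saving follows from the layout: 32 elements cost 16 bytes of nibbles plus 1 scale byte, i.e. 4.25 bits per weight versus 16 for FP16.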