PulseAugur

Thomas Bley

PulseAugur coverage of Thomas Bley: every cluster mentioning him across labs, papers, and developer communities, ranked by signal.

Total · 30d: 3 (3 over 90d)
Releases · 30d: 0 (0 over 90d)
Papers · 30d: 0 (0 over 90d)
TIER MIX · 90D: (chart)
SENTIMENT · 30D: 1 day with sentiment data

RECENT · PAGE 1/1 · 3 TOTAL
  1. TOOL · CL_26246

    Local LLM Guide Updated With Qwen 3.6 and Gemma 4

    Thomas Bley has released an updated guide for running large language models locally, featuring Qwen 3.6 and Gemma 4. The setup includes configurations for permissions and different "thinking" variants, aiming to make lo…

  2. RESEARCH · CL_15141

    Run LLMs Locally With LFM 2 and Transformers.js, Using WebGPU

    Thomas Bley has released new slides detailing how to run large language models (LLMs) locally using LFM 2. The presentation also covers using Transformers.js with WebGPU for privacy filters, function calling, and embedd…

  3. RESEARCH · CL_08477

    Nvidia's Nemotron 3 Nano Omni and Llama.cpp enable local LLM execution

    Thomas Bley has released new presentation slides detailing how to run large language models locally. The slides cover Nvidia's Nemotron 3 Nano Omni, built-in tools for Llama.cpp, and the use of Transformers.js with WebG…