PulseAugur

Qwen3-30B-A3B-Instruct-2507

PulseAugur coverage of Qwen3-30B-A3B-Instruct-2507 — every cluster mentioning the model across labs, papers, and developer communities, ranked by signal strength.

Total · 30d: 1 (1 over 90d)
Releases · 30d: 0 (0 over 90d)
Papers · 30d: 1 (1 over 90d)
RECENT · 1 TOTAL
  1. RESEARCH · CL_13954

    Liquid AI releases LFM2-24B-A2B, an efficient 24B parameter MoE model

    Liquid AI has released an early checkpoint of its LFM2-24B-A2B model, a sparse Mixture of Experts (MoE) architecture with 24 billion total parameters and 2 billion active parameters per token. This model demonstrates th…