PulseAugur
research

Open-source AI models surge, while a private 20T-parameter model hints at future scale

Open-source AI models are demonstrating significant performance improvements, with DeepSeek V4 and Qwen 3.6 showing capabilities that rival those of large corporate-backed models. This advancement increases the practical usability of open-source alternatives. Separately, a 20-trillion-parameter model named 'Mythical' has reportedly achieved perfect scores on various benchmarks, though its creators have opted not to release it, citing its immense power and cost.

Summary written by gemini-2.5-flash-lite from 2 sources.

IMPACT Open-source models are closing the gap with proprietary systems, potentially lowering barriers to advanced AI adoption.

RANK_REASON The cluster discusses advancements in open-source AI models and the reported development of a large, unreleased model, fitting the research category.


COVERAGE [2]

  1. Mastodon — fosstodon.org TIER_1 Korean (KO) · [email protected]

    BOOTOSHI (@KingBootoshi) responds that the performance of open-source AI models has improved so much that models from large corporate labs are no longer needed. The evaluations, demos, and discussions of DeepSeek V4 and Qwen 3.6 are impressive, underscoring the growing real-world usability of open-source models.

    https://x.com/KingBootoshi/status/2049000407968715121 · #opensource #llm #deepseek #qwen #ai

  2. Mastodon — fosstodon.org TIER_1 Korean (KO) · [email protected]

    Bindu Reddy (@bindureddy) tweeted that they trained a 'Mythical' model with 20 trillion parameters. They claim it achieved perfect scores on multiple benchmarks but will not release it because it is too powerful and too costly, hinting at the potential of next-generation super-large models.

    https://x.com/bindureddy/status/2048888459272851490 · #llm #foundationmodel #benchmark #ai #modelrelease