
Fast.ai's Jeremy Howard advocates for continued pre-training over fine-tuning

Jeremy Howard of Fast.ai, a prominent voice in machine learning, discussed the evolution of fine-tuning techniques in a recent podcast. He recalled how his 2018 ULMFiT paper, which demonstrated the effectiveness of fine-tuning pre-trained language models, was initially met with skepticism. Despite fine-tuning's now-widespread adoption, Howard argues the approach is flawed, citing catastrophic forgetting and memorization, and suggests treating the process as continued pre-training instead.
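The distinction Howard draws can be made concrete. Below is a minimal sketch, assuming PyTorch and the Hugging Face transformers library with a small GPT-2 checkpoint (none of which the episode names): conventional supervised fine-tuning computes the loss only on labeled answer tokens, while continued pre-training keeps the original next-token objective on raw new text. The prompts and texts are illustrative placeholders.

```python
# A minimal sketch of the two training regimes; assumes PyTorch and the
# Hugging Face `transformers` API. Not Howard's code, just an illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# --- Supervised fine-tuning: loss only on the answer tokens. ---
prompt = "Q: What is ULMFiT?\nA:"
answer = " A 2018 method for fine-tuning pre-trained language models."
ids = tokenizer(prompt + answer, return_tensors="pt").input_ids
labels = ids.clone()
labels[:, : len(tokenizer(prompt).input_ids)] = -100  # ignore prompt positions
loss_sft = model(input_ids=ids, labels=labels).loss

# --- Continued pre-training: the ordinary next-token objective on raw
# in-domain text, identical in form to the original pre-training loss. ---
text = "Raw domain text the base model keeps learning from."
ids = tokenizer(text, return_tensors="pt").input_ids
loss_cpt = model(input_ids=ids, labels=ids).loss

loss_cpt.backward()  # one illustrative update in the continued-pre-training regime
optimizer.step()
optimizer.zero_grad()
```

The only structural difference is the label mask: fine-tuning narrows the objective to a small supervised target, which is where forgetting and memorization concerns arise, while continued pre-training keeps the loss spread over all tokens, just as in the original training run.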

Summary written by gemini-2.5-flash-lite from 1 source.

Rank reason: This item is a podcast discussion featuring a prominent researcher's opinion on AI techniques, rather than a new model release or research paper.



COVERAGE [1]

  1. Latent Space Podcast · Latent.Space

    The End of Finetuning — with Jeremy Howard of Fast.ai

    Thanks to the over 17,000 people (https://www.youtube.com/@aidotengineer) who have joined the first AI Engineer Summit! A full recap is coming. Last call to fill out https://www.surveymonkey.com/r/aiengineering202…