A senior researcher at Google DeepMind, Alexander Lerchner, has published a paper arguing that AI, particularly large language models, can simulate but not instantiate consciousness. His work, "The Abstraction Fallacy," posits that AI systems require human input to assign meaning and cannot achieve self-awareness without biological needs and a physical body. This perspective contrasts with the more optimistic AGI timelines often promoted by figures like DeepMind CEO Demis Hassabis.
Summary written by gemini-2.5-flash-lite from 4 sources.
IMPACT Challenges the prevailing narrative of imminent AGI, potentially influencing regulatory discussions and public perception of AI capabilities.
RANK_REASON A senior researcher at a major AI lab published a paper whose position contradicts the lab's public-facing narrative.