The Practical AI podcast featured a discussion on enhancing AI safety and interpretability beyond traditional input/output filters. Host Daniel Whitenack and guest Alizishaan Khatri, founder of Wrynx, explored how model-native, runtime signals can create more secure AI systems. Khatri shared his experience building safety infrastructure at Meta and fraud prevention systems in a previous role, highlighting the realization that AI models themselves are vulnerable to abuse.
Summary written by gemini-2.5-flash-lite from 1 source.