PulseAugur

AI experts discuss explainability and accountability in generative AI

Two recent podcast episodes delve into the critical area of AI explainability, particularly in the context of generative AI applications. The first episode features Beth Rudden discussing an ontological approach to creating conversational AI, addressing risks and accountability in thin UI wrappers around large models. The second episode highlights Sheldon Fernandez's work on generative synthesis, which aims to produce compact and explainable neural networks, drawing parallels to AutoML and meta-learning.

Summary written by gemini-2.5-flash-lite from 2 sources.



Coverage (2 sources)

  1. Practical AI · Practical AI LLC

    Explainable AI that is accessible for all humans

    We are seeing an explosion of AI apps that are (at their core) a thin UI on top of calls to OpenAI generative models. What risks are associated with this sort of approach to AI integration, and is explainability and accountability something that can be achieved in chat-based a…

  2. Practical AI · Practical AI LLC

    Explaining AI explainability

    The CEO of Darwin AI, Sheldon Fernandez, joins Daniel to discuss generative synthesis and its connection to explainability. You might have heard of AutoML and meta-learning. Well, generative synthesis tackles similar problems from a different angle and results in compact, expl…