Compliance Perspectives

Alessia Falsarone on AI Explainability [Podcast]

Oct 23, 2025
Alessia Falsarone, a non-executive director at Innovate UK with a focus on AI governance, dives into the pressing issue of AI explainability. She discusses the urgent need for transparency in AI decision-making, arguing that it can avert crises when systems go awry. Alessia advocates for practical tools, such as dashboards and decision logs, that illuminate how AI systems reach their conclusions. She also addresses common misconceptions, stressing that explainability is not merely a technical challenge but a cross-functional necessity.
INSIGHT

Explainability Is A Regulatory Priority

  • Explainability makes AI decisions understandable to regulators and regular users, not just developers.
  • Singapore's Model AI Governance Framework gave practical guidance years before many Western regulators acted.
ANECDOTE

Apple Card Case Shows Explainability Failure

  • Alessia cites the Apple Card case, in which women received lower credit limits than men with similar finances, as an example of an explainability failure.
  • When companies can't explain outcomes, regulators and the public lose trust, and legal exposure increases.
ADVICE

Create Decision Dashboards For Transparency

  • Build dashboards that show factors influencing each AI decision and produce plain-language summaries for non-technical users.
  • Keep those decision traces updated as models learn so frontline compliance can see why outcomes occurred (a minimal sketch of such a decision record follows this list).
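
The episode does not prescribe a specific implementation, but a minimal sketch of the decision-log idea might look like the Python below. The `DecisionRecord` class, its field names, and the attribution weights are all hypothetical illustrations, assuming factor contributions come from an attribution method such as SHAP; a production decision log would also need retention policies, access controls, and model versioning.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One entry in a hypothetical AI decision log (illustrative only)."""
    decision_id: str
    model_version: str
    outcome: str
    # Factor name -> contribution weight, assumed to come from an
    # attribution method such as SHAP (positive pushes toward the outcome).
    factors: dict[str, float] = field(default_factory=dict)
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def plain_language_summary(self, top_n: int = 3) -> str:
        """Render the most influential factors as a sentence for non-technical reviewers."""
        ranked = sorted(self.factors.items(), key=lambda kv: abs(kv[1]), reverse=True)
        top = ", ".join(f"{name} ({weight:+.2f})" for name, weight in ranked[:top_n])
        return (f"Decision {self.decision_id} ({self.outcome}) by model "
                f"{self.model_version} was driven mainly by: {top}.")

# Example: a credit-limit decision a compliance dashboard could display.
record = DecisionRecord(
    decision_id="APP-1042",
    model_version="credit-limit-v3.1",
    outcome="limit approved at $5,000",
    factors={"income": 0.41, "credit_utilization": -0.35, "account_age": 0.12},
)
print(record.plain_language_summary())
```

Storing a record like this per decision is what lets a dashboard surface "why" alongside "what": the same structured trace feeds both the technical audit trail and the plain-language view frontline compliance staff would read.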