Practical AI

When AI goes wrong

Sep 14, 2020
Join Andrew Burt, managing partner at BNH.ai with a rich background in AI law, and Patrick Hall, principal scientist specializing in trustworthy AI, as they dive into the complexities of AI failure. They discuss the urgent need for robust incident response plans and the unique liabilities that AI introduces. Practical insights on debugging models, navigating ethical frameworks, and addressing privacy concerns highlight the importance of collaboration between legal and technical teams. Buckle up for an enlightening discussion on the future of responsible AI!
INSIGHT

AI Risk Magnification

  • AI's core strength, the ability to scale decisions, also magnifies risks such as bias and bugs.
  • Ethical AI frameworks lack enforcement mechanisms, and practical solutions remain underutilized.
INSIGHT

Government Attention to AI Failures

  • Governments are noticing AI failures and reacting.
  • Productized tech exists to improve AI trustworthiness and transparency.
ADVICE

Bridging the Gap Between Policy and Implementation

  • Bridge the gap between policy and implementation by starting with existing regulatory documents and guidance, some of which are decades old.
  • Tried-and-tested methods for managing AI risks are underappreciated due to a lack of communication between legal and technical teams.