
The Road to Accountable AI

Latest episodes

Apr 3, 2025 • 38min

Eric Bradlow: AI Goes to Business School

Eric Bradlow, Vice Dean of AI & Analytics at Wharton and a professor with expertise in Economics and Data Science, discusses the transformative power of AI in business education. He highlights Wharton's innovative analytics program and the new Accountable AI Lab aimed at fostering ethical practices in AI. The conversation covers the rise of generative AI, making data analysis accessible, and the importance of integrating legal frameworks into AI initiatives. Bradlow emphasizes the balance between ethics and profitability in algorithm design, advocating for collaboration across disciplines.
Dec 12, 2024 • 35min

Wendy Gonzalez: Managing the Humans in the AI Loop

Wendy Gonzalez, CEO of Sama, has transformed the company into a leader in AI data services, emphasizing human judgment in AI development. They discuss the vital role of human workers in training and validating models, as well as the ethical implications of outsourcing labor from developing nations. Wendy highlights Sama's commitment to transparency in wages and creating opportunities for underserved communities. The conversation also tackles the importance of cultural context and diversity in AI to enhance model accuracy and trust.
Dec 5, 2024 • 35min

Jessica Lennard: AI Regulation as Part of a Growth Agenda

The UK is in a unique position in the global AI landscape. It is home to important AI development labs and corporate AI adopters, but its regulatory regime is distinct from both the US and the European Union. In this episode, Kevin Werbach sits down with Jessica Lennard, the Chief Strategy and External Affairs Officer at the UK's Competition and Markets Authority (CMA). Jessica discusses the CMA's role in shaping AI policy against the backdrop of a shifting political and economic landscape, and how it balances promoting innovation with competition and consumer protection. She highlights the guiding principles that the CMA has established to ensure a fair and competitive AI ecosystem, and how they are designed to establish trust and fair practices across the industry. Jessica Lennard took up the role of Chief Strategy & External Affairs Officer at the CMA in August 2023. Jessica is a member of the Senior Executive Team, an advisor to the Board, and has overall responsibility for Strategy, Communications and External Engagement at the CMA. Previously, she was a Senior Director for Global Data and AI Initiatives at VISA. She also served as an Advisory Board Member for the UK Government Centre for Data Ethics and Innovation.
Competition and Markets Authority
CMA AI Strategic Update (April 2024)
Nov 21, 2024 • 27min

Tim O'Reilly: The Values of AI Disclosure

Tim O'Reilly, the influential founder of O'Reilly Media and co-leader of the AI Disclosures Project, shares his insights on AI governance. He delves into the evolution of AI, criticizing its current centralization and advocating for a more decentralized, transparent approach. O'Reilly emphasizes the urgency of robust regulatory frameworks to ensure fairness and safety in AI. The discussion highlights the balance between innovation and accountability, exploring opportunities and risks as the technology rapidly evolves.
Nov 14, 2024 • 35min

Alice Xiang: Connecting Research and Practice for Responsible AI

Join Professor Werbach in his conversation with Alice Xiang, Global Head of AI Ethics at Sony and Lead Research Scientist at Sony AI. With both a research and corporate background, Alice provides an inside look at how her team integrates AI ethics across Sony's diverse business units. She explains how the evolving landscape of AI ethics is both a challenge and an opportunity for organizations to reposition themselves as the world embraces AI. Alice discusses fairness, bias, and incorporating these ethical ideas in practical business environments. She emphasizes the importance of collaboration, transparency, and diversity in embedding a culture of accountable AI at Sony, showing other organizations how they can do the same. Alice Xiang manages the team responsible for conducting AI ethics assessments across Sony's business units and implementing Sony's AI Ethics Guidelines. She also recently served as a General Chair for the ACM Conference on Fairness, Accountability, and Transparency (FAccT), the premier multidisciplinary research conference on these topics. Alice previously served on the leadership team of the Partnership on AI. She was a Visiting Scholar at Tsinghua University's Yau Mathematical Sciences Center, where she taught a course on Algorithmic Fairness, Causal Inference, and the Law. Her work has been quoted in a variety of high-profile journals and published in top machine learning conferences, journals, and law reviews.
Sony AI Flagship Project
Augmented Datasheets for Speech Datasets and Ethical Decision-Making by Alice Xiang and Others
Nov 7, 2024 • 38min

Krishna Gade: Observing AI Explainability...and Explaining AI Observability

Kevin Werbach speaks with Krishna Gade, founder and CEO of Fiddler AI, on the state of explainability for AI models. One of the big challenges of contemporary AI is understanding just why a system generated a certain output. Fiddler is one of the startups offering tools that help developers and deployers of AI understand what exactly is going on. In the conversation, Kevin and Krishna explore the importance of explainability in building trust with consumers, companies, and developers, and then dive into the mechanics of Fiddler's approach to the problem. The conversation covers current and potential regulations that mandate or incentivize explainability, and the prospects for AI explainability standards as AI models grow in complexity. Krishna distinguishes explainability from the broader process of observability, including the necessity of maintaining model accuracy through different times and contexts. Finally, Kevin and Krishna discuss the need for proactive AI model monitoring to mitigate business risks and engage stakeholders. Krishna Gade is the founder and CEO of Fiddler AI, an AI Observability startup, which focuses on monitoring, explainability, fairness, and governance for predictive and generative models. An entrepreneur and engineering leader with strong technical experience in creating scalable platforms and delightful products, Krishna previously held senior engineering leadership roles at Facebook, Pinterest, Twitter, and Microsoft. At Facebook, Krishna led the News Feed Ranking Platform that created the infrastructure for ranking content in News Feed and powered use cases like Facebook Stories and user recommendations.
Fiddler.ai
How Explainable AI Keeps Decision-Making Algorithms Understandable, Efficient, and Trustworthy - Krishna Gade x Intelligent Automation Radio
Oct 31, 2024 • 37min

Angela Zhang: What’s Really Happening with AI (and AI Governance) in China

This week, Professor Werbach is joined by USC Law School professor Angela Zhang, an expert on China's approach to the technology sector. China is both one of the world's largest markets and home to some of the world's leading tech firms, as well as an active ecosystem of AI developers. Yet its relationship to the United States has become increasingly tense. Many in the West see a battle between the US and China to dominate AI, with significant geopolitical implications. In the episode, Zhang discusses China's rapidly evolving tech and AI landscape, and the impact of government policies on its development. She dives into what the Chinese government does and doesn't do in terms of AI regulation, and compares Chinese practices to those in the West. Kevin and Angela consider the implications of US export controls on AI-related technologies, along with the potential for cooperation between the US and China in AI governance. Finally, they look toward the future of Chinese AI, including its progress and potential challenges. Angela Huyue Zhang is a Professor of Law at the Gould School of Law of the University of Southern California. She is the author of Chinese Antitrust Exceptionalism: How the Rise of China Challenges Global Regulation, which was named one of the Best Political Economy Books of the Year by ProMarket in 2021. Her second book, High Wire: How China Regulates Big Tech and Governs Its Economy, released in March 2024, has been covered in The New York Times, Bloomberg, Wire China, MIT Tech Review, and many other international news outlets.
High Wire: How China Regulates Big Tech and Governs Its Economy
The Promise and Perils of China's Regulation of Artificial Intelligence
Angela Zhang's Website
Want to learn more? Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI's power while addressing its risks.
Oct 24, 2024 • 35min

Shea Brown: AI Auditing Gets Real

Professor Werbach speaks with Shea Brown, founder of AI auditing firm BABL AI. Brown discusses how his work as an astrophysicist led him to machine learning, and then to the challenge of evaluating AI systems. He explains the skills needed for effective AI auditing and what makes a robust AI audit. Kevin and Shea talk about the growing landscape of AI auditing services and the strategic role of specialized firms like BABL AI. They examine the evolving standards and regulations surrounding AI auditing, from local laws to US government initiatives to the European Union's AI Act. Finally, Kevin and Shea discuss the future of AI auditing, emphasizing the importance of independence. Shea Brown, the founder and CEO of BABL AI, is a researcher, speaker, consultant in AI ethics, and former associate professor of instruction in Astrophysics at the University of Iowa. Founded in 2018, BABL AI has audited and certified AI systems, consulted on responsible AI best practices, and offered online education on related topics. BABL AI's overall mission is to ensure that all algorithms are developed, deployed, and governed in ways that prioritize human flourishing. Shea is a founding member of the International Association of Algorithmic Auditors (IAAA).
BABL.ai
International Association of Algorithmic Auditors
NYC Local Law 144: Automated Employment Decision Tools (AEDT)
Oct 17, 2024 • 39min

Kevin Bankston: The Value of Open AI Models

This week, Professor Werbach is joined by Kevin Bankston, Senior Advisor on AI Governance for the Center for Democracy & Technology, to discuss the benefits and risks of open weight frontier AI models. They discuss the meaning of open foundation models, how they relate to open source software, how such models could accelerate technological advancement, and the debate over their risks and need for restrictions. Bankston discusses the National Telecommunications and Information Administration's recent recommendations on open weight models, and CDT's response to the request for comments. Bankston also shares insights based on his prior work as AI Policy Director at Meta, and discusses national security concerns around China's ability to exploit open AI models. Kevin Bankston is Senior Advisor on AI Governance for the Center for Democracy & Technology, supporting CDT's AI Governance Lab. In addition to a prior term as Director of CDT's Free Expression Project, he has worked on internet privacy and related policy issues at the American Civil Liberties Union, the Electronic Frontier Foundation, the Open Technology Institute, and Meta Platforms. He was named by Washingtonian magazine as one of DC's 100 top tech leaders of 2017. Kevin serves as an adjunct professor at the Georgetown University Law Center, where he teaches on the emerging law and policy around generative AI.
CDT Comments to NTIA on Open Foundation Models by Kevin Bankston
CDT Submits Comment on AISI's Draft Guidance, "Managing Misuse Risk for Dual-Use Foundation Models"
Oct 10, 2024 • 37min

Lara Abrash: How Organizations Can Meet the AI Challenge

In this episode, Professor Kevin Werbach sits down with Lara Abrash, Chair of Deloitte US. Lara and Kevin discuss the complexities of integrating generative AI systems into companies and aligning stakeholders in making AI trustworthy. They discuss how to address bias, and the ways Deloitte promotes trust throughout its organization. Lara explains the role and technological expertise of boards, company risk management, and the global regulatory environment. Finally, Lara discusses the ways in which Deloitte handles both its people and the services they provide. Lara Abrash is the Chair of Deloitte US, leading the Board of Directors in governing all aspects of the US Firm. Overseeing over 170,000 employees, Lara is a member of Deloitte's Global Board of Directors and Chair of the Deloitte Foundation. Lara stepped into this role after serving as the chief executive officer of the Deloitte US Audit & Assurance business. Lara frequently speaks on topics focused on advancing the profession, including modern leadership traits; diversity, equity, and inclusion; the future of work; and tech disruption. She is a member of the American Institute of Certified Public Accountants and received her MBA from Baruch College.
Deloitte's Trustworthy AI Framework
Deloitte's 2024 Ethical Technology Report
