

The Road to Accountable AI
Kevin Werbach
Artificial intelligence is changing business, and the world. How can you navigate through the hype to understand AI's true potential, and the ways it can be implemented effectively, responsibly, and safely? Wharton Professor and Chair of Legal Studies and Business Ethics Kevin Werbach has analyzed emerging technologies for thirty years, and created one of the first business school courses on legal and ethical considerations of AI in 2016. He interviews the experts and executives building accountable AI systems in the real world, today.
Episodes

May 15, 2025 • 30min
Jaime Banks: How Users Perceive AI Companions
AI companion applications, which create interactive personas for one-on-one conversations, are incredibly popular. However, they raise a number of challenging ethical, legal, and psychological questions. In this episode, Kevin Werbach speaks with researcher Jaime Banks about how users view their conversations with AI companions, and the implications for governance. Banks shares insights from her research on mind-perception, and how AI companion users engage in a willing suspension of disbelief similar to watching a movie. She highlights both potential benefits and dangers, as well as novel issues such as the real feelings of loss users may experience when a companion app shuts down. Banks advocates for data-driven policy approaches rather than moral panic, suggesting responses such as an "AI user's Bill of Rights" for these services. Jaime Banks is Katchmar-Wilhelm Endowed Professor at the School of Information Studies at Syracuse University. Her research examines human-technological interaction, including social AI, social robots, and videogame avatars. She focuses on relational construals of mind and morality, communication processes, and how media shape our understanding of complex technologies. Her current funded work focuses on social cognition in human-AI companionship and on the effects of humanizing language on moral judgments about AI. Transcript ‘She Helps Cheer Me Up’: The People Forming Relationships With AI Chatbots (The Guardian, April 2025) Can AI Be Blamed for a Teen's Suicide? (NY Times, October 2024) Beyond ChatGPT: AI Companions and the Human Side of AI (Syracuse iSchool video)

May 8, 2025 • 37min
Kelly Trindel: AI Governance Across the Enterprise? All in a Day’s Work
In this episode, Kevin Werbach interviews Kelly Trindel, Head of Responsible AI at Workday. Although Trindel's team is housed within Workday’s legal department, it operates as a multidisciplinary group, bringing together legal, policy, data science, and product expertise. This structure helps ensure that responsible AI practices are integrated not just at the compliance level but throughout product development and deployment. She describes formal mechanisms—such as model review boards and cross-functional risk assessments—that embed AI governance into product workflows across the company. The conversation covers how Workday evaluates model risks based on context and potential human impact, especially in sensitive areas like hiring and performance evaluation. Trindel outlines how the company conducts bias testing, maintains documentation, and uses third-party audits to support transparency and trustworthiness. She also discusses how Workday is preparing for emerging regulatory frameworks, including the EU AI Act, and how internal governance systems are designed to be flexible in the face of evolving policy and technological change. Other topics include communicating AI risks to customers, sustaining post-deployment oversight, and building trust through accountability infrastructure. Dr. Kelly Trindel directs Workday’s AI governance program. As a pioneer in the responsible AI movement, Kelly has significantly contributed to the field, including testifying before the U.S. Equal Employment Opportunity Commission (EEOC) and later leading an EEOC task force on ethical AI—one of the government’s first. With more than 15 years of experience in quantitative science, civil rights, public policy, and AI ethics, Kelly’s influence and commitment to responsible AI are instrumental in driving the industry forward and fostering AI solutions that have a positive societal impact. 
Transcript Responsible AI: Empowering Innovation with Integrity Putting Responsible AI into Action (video masterclass)

May 1, 2025 • 36min
David Weinberger: How AI Challenges Our Fundamental Ideas
Professor Werbach interviews David Weinberger, author of several books and a long-time deep thinker on internet trends, about the broader implications of AI on how we understand and interact with the world. They examine the idea that throughout history, dominant technologies—like the printing press, the clock, or the computer—have subtly but profoundly shaped our concepts of knowledge, intelligence, and identity. Weinberger argues that AI, and especially machine learning, represents a new kind of paradigm shift: unlike traditional computing, which requires humans to explicitly encode knowledge in rules and categories, AI systems extract meaning and make predictions from vast numbers of data points without needing to understand or generalize in human terms. He describes how these systems uncover patterns beyond human comprehension—such as identifying heart disease risk from retinal scans—by finding correlations invisible to human experts. Their discussion also grapples with the disquieting implications of this shift, including the erosion of explainability, the difficulty of ensuring fairness when outcomes emerge from opaque models, and the way AI systems reflect and reinforce cultural biases embedded in the data they ingest. The episode closes with a reflection on the tension between decentralization—a value long championed in the internet age—and the current consolidation of AI power in the hands of a few large firms, as well as Weinberger’s controversial take on copyright and data access in training large models. David Weinberger is a pioneering thought-leader about technology's effect on our lives, our businesses, and ideas. He has written several best-selling, award-winning books explaining how AI and the Internet impact how we think the world works, and the implications for business and society. 
In addition to writing for many leading publications, he has been a writer-in-residence, twice, at Google AI groups, Editor of the Strong Ideas book series for MIT Press, a Fellow at the Harvard Berkman Klein Center for Internet and Society, contributor of dozens of commentaries on NPR's All Things Considered, a strategic marketing VP and consultant, and for six years a Philosophy professor. Transcript Everyday Chaos Our Machines Now Have Knowledge We’ll Never Understand (Wired) How Machine Learning Pushes Us to Define Fairness (Harvard Business Review)

Apr 24, 2025 • 38min
Ashley Casovan: From Privacy Practice to AI Governance
Professor Werbach talks with Ashley Casovan, Managing Director of the AI Governance Center at the IAPP, the global association for privacy professionals and related roles. Ashley shares how privacy, data protection, and AI governance are converging, and why professionals must combine technical, policy, and risk expertise. They discuss efforts to build a skills competency framework for AI roles and examine the evolving global regulatory landscape—from the EU’s AI Act to U.S. state-level initiatives. Drawing on Ashley’s experience in the Canadian government, the episode also explores broader societal challenges, including the need for public dialogue and the hidden impacts of automated decision-making. Ashley Casovan serves as the primary thought leader and public voice for the IAPP on AI governance. She has developed expertise in responsible AI, standards, policy, open government and data governance in the public sector at the municipal and federal levels. As the director of data and digital for the government of Canada, Casovan previously led the development of the world’s first national government policy for responsible AI. Casovan served as the Executive Director of the Responsible AI Institute, a member of OECD’s AI Policy Observatory Network of Experts, a member of the World Economic Forum's AI Governance Alliance, an Executive Board Member of the International Centre of Expertise in Montréal on Artificial Intelligence and as a member of the IFIP/IP3 Global Industry Council within the UN. Transcript Ashley Casovan IAPP IAPP AI Governance Profession Report 2025 Global AI Law and Policy Tracker Mapping and Understanding the AI Governance Ecosystem

Apr 17, 2025 • 40min
Lauren Wagner: The Potential of Private AI Governance
Kevin Werbach interviews Lauren Wagner, a builder and advocate for market-driven approaches to AI governance. Lauren shares insights from her experiences at Google and Meta, emphasizing the critical intersection of technology, policy, and trust-building. She describes the private AI governance model, including private-sector incentives and transparency measures, such as enhanced model cards, that can guide responsible AI development without heavy-handed regulation. Lauren also explores ongoing challenges around liability, insurance, and government involvement, highlighting the potential of public procurement policies to set influential standards. Reflecting on California's SB 1047 AI bill, she discusses its drawbacks and praises the inclusive debate it sparked. Lauren concludes by promoting productive collaborations between private enterprises and governments, stressing the importance of transparent, accountable, and pragmatic AI governance approaches. Lauren Wagner is a researcher, operator and investor creating new markets for trustworthy technology. She is currently a Term Member at the Council on Foreign Relations, a Technical & AI Policy Advisor to the Data & Trust Alliance, and an angel investor in startups with a trust & safety edge, particularly AI-driven solutions for regulated markets. She has been a Senior Advisor to Responsible Innovation Labs, an early-stage investor at Link Ventures, and held senior product and marketing roles at Meta and Google. Transcript AI Governance Through Markets (February 2025) How Tech Created the Online Fact-Checking Industry (March 2025) Responsible Innovation Labs Data & Trust Alliance

Apr 10, 2025 • 39min
Medha Bankhwal and Michael Chui: Implementing AI Trust
Kevin Werbach speaks with Medha Bankhwal and Michael Chui from QuantumBlack, the AI division of the global consulting firm McKinsey. They discuss how McKinsey's AI work has evolved from strategy consulting to hands-on implementation, with AI trust now embedded throughout their client engagements. Chui highlights what makes the current AI moment transformative, while Bankhwal shares insights from McKinsey's recent AI survey of over 760 organizations across 38 countries. As they explain, trust remains a major barrier to AI adoption, although there are geographic differences in AI governance maturity. Medha Bankhwal, a graduate of Wharton's MBA program, is an Associate Partner, as well as Co-founder of McKinsey’s AI Trust / Responsible AI practice. Prior to McKinsey, Medha was at Google and subsequently co-founded a digital learning not-for-profit startup. She co-leads forums for AI safety discussions for policy + tech practitioners, titled “Trustworthy AI Futures” as well as a community of ex-Googlers dedicated to the topic of AI Safety. Michael Chui is a senior fellow at QuantumBlack, AI by McKinsey. He leads research on the impact of disruptive technologies and innovation on business, the economy, and society. Michael has led McKinsey research in such areas as artificial intelligence, robotics and automation, the future of work, data & analytics, collaboration technologies, the Internet of Things, and biological technologies. Episode Transcript The State of AI: How Organizations are Rewiring to Capture Value (March 12, 2025) Superagency in the workplace: Empowering people to unlock AI’s full potential (January 28, 2025) Building AI Trust: The Key Role of Explainability (November 26, 2024) McKinsey Responsible AI Principles

Apr 3, 2025 • 38min
Eric Bradlow: AI Goes to Business School
Eric Bradlow, Vice Dean of AI & Analytics at Wharton and a professor with expertise in Economics and Data Science, discusses the transformative power of AI in business education. He highlights Wharton's innovative analytics program and the new Accountable AI Lab aimed at fostering ethical practices in AI. The conversation covers the rise of generative AI, making data analysis accessible, and the importance of integrating legal frameworks into AI initiatives. Bradlow emphasizes the balance between ethics and profitability in algorithm design, advocating for collaboration across disciplines.

Dec 12, 2024 • 35min
Wendy Gonzalez: Managing the Humans in the AI Loop
Wendy Gonzalez, CEO of Sama, has transformed the company into a leader in AI data services, emphasizing human judgment in AI development. She and Kevin Werbach discuss the vital role of human workers in training and validating models, as well as the ethical implications of outsourcing labor from developing nations. Wendy highlights Sama's commitment to transparency in wages and creating opportunities for underserved communities. The conversation also tackles the importance of cultural context and diversity in AI to enhance model accuracy and trust.

Dec 5, 2024 • 35min
Jessica Lennard: AI Regulation as Part of a Growth Agenda
The UK is in a unique position in the global AI landscape. It is home to important AI development labs and corporate AI adopters, but its regulatory regime is distinct from both the US and the European Union. In this episode, Kevin Werbach sits down with Jessica Lennard, the Chief Strategy and External Affairs Officer at the UK's Competition and Markets Authority (CMA). Jessica discusses the CMA's role in shaping AI policy against the backdrop of a shifting political and economic landscape, and how it balances promoting innovation with competition and consumer protection. She highlights the guiding principles that the CMA has established to ensure a fair and competitive AI ecosystem, and how they are designed to establish trust and fair practices across the industry. Jessica Lennard took up the role of Chief Strategy & External Affairs Officer at the CMA in August 2023. Jessica is a member of the Senior Executive Team, an advisor to the Board, and has overall responsibility for Strategy, Communications and External Engagement at the CMA. Previously, she was a Senior Director for Global Data and AI Initiatives at VISA. She also served as an Advisory Board Member for the UK Government Centre for Data Ethics and Innovation. Competition and Markets Authority CMA AI Strategic Update (April 2024)

Nov 21, 2024 • 27min
Tim O'Reilly: The Values of AI Disclosure
Tim O'Reilly, the influential founder of O'Reilly Media and co-leader of the AI Disclosures Project, shares his insights on AI governance. He delves into the evolution of AI, criticizing its current centralization and advocating for a more decentralized, transparent approach. O'Reilly emphasizes the urgency of robust regulatory frameworks to ensure fairness and safety in AI. The discussion highlights the balance between innovation and accountability, exploring opportunities and risks as the technology rapidly evolves.