
In AI We Trust?

Latest episodes

Nov 2, 2022 • 44min

Carl Hahn (NOC): When your AI reaches from the cosmos to the seafloor, and the universe in between, how can you ensure it is safe and trustworthy?

Carl Hahn, Vice President and Chief Compliance Officer at Northrop Grumman, one of the world’s largest military technology providers, joins us on this episode to help answer this question, which he addresses daily. Carl shares his perspective on the impact of the DoD principles, how governments and companies need to align on the “how” of developing and using AI responsibly, and much more.

Materials mentioned in this episode:
- NAIAC Field Hearing @ NIST YouTube Page
- “DOD Adopts 5 Principles of Artificial Intelligence Ethics” (Department of Defense)
- “Defense AI Technology: Worlds Apart From Commercial AI” (Northrop Grumman)
- Smart Toys (World Economic Forum): Smart Toy Awards
Oct 12, 2022 • 29min

Mark Brayan (Appen): For whom is your data performing?

In this episode, Mark Brayan focuses on a key ingredient for responsible AI: ethically sourced, inclusive data. Mark is the CEO and Managing Director of Appen, which provides training data for thousands of machine learning and AI initiatives. Good quality data is imperative for responsible AI (garbage in, garbage out), and part of that equation is making sure it is sourced inclusively, responsibly, and ethically. When developing and using responsible AI, it’s critically important to get your data right by asking the right questions: for whom is your data performing – and for whom could it fail? — Subscribe to catch each new episode on Apple, Spotify and all major platforms. To learn more about EqualAI, visit our website: https://www.equalai.org/ and follow us on Twitter: @ai_equal.
Sep 28, 2022 • 44min

Krishnaram Kenthapadi (Fiddler.ai): Citizen audits are coming; are you ready?

Krishnaram is the Chief Scientist of Fiddler AI, an enterprise startup building a responsible AI and machine learning monitoring platform. Prior to Fiddler AI, Krishnaram served as Principal Scientist at Amazon AWS AI, on the LinkedIn AI team, and on Microsoft's AI and Ethics in Engineering and Research (AETHER) Advisory Board. In this episode, Krishnaram warns that validating a model before release is not enough: it must continue to be tested post-deployment. He also highlights incentives to test your AI early and often: even without new laws in place, empowered and tech-savvy citizens are increasingly taking audits into their own hands. — Subscribe to catch each new episode on Apple, Spotify and all major platforms. To learn more about EqualAI, visit our website: https://www.equalai.org/ and follow us on Twitter: @ai_equal.
Sep 14, 2022 • 46min

Dr. Edson Prestes: Can we ingrain empathy into our AI?

Dr. Prestes is Professor of Computer Science at the Institute of Informatics, Federal University of Rio Grande do Sul, and leader of the Phi Robotics Research Group. In this episode, Dr. Prestes shares his trailblazing work in international AI policy and standards, including the development of the first global AI ethics instrument. Dr. Prestes discusses ethics in technology and the infusion of empathy, as well as his focus on establishing human rights for a digital world. — Subscribe to catch each new episode on Apple, Spotify and all major platforms. To learn more about EqualAI, visit our website: https://www.equalai.org/ and follow us on Twitter: @ai_equal.
Aug 24, 2022 • 52min

Joe Bradley (LivePerson): How much 'rat poison' is in our AI and can AI be more "human"?

Joe Bradley is the Chief Scientist at LivePerson, a leading conversational AI company creating digital experiences that are “Curiously Human” and powering nearly a billion conversational interactions monthly in its Conversational Cloud. In this episode, Joe shares the broad lens he brings to his work in AI. He discusses the interconnectedness of AI and humanity, and his work at LivePerson to develop “empathetic” AI systems that help brands better connect with their customers. Joe addresses his experience in the EqualAI Badge program and basic challenges in reducing bias in AI, from determining what to measure to deciding whom to consider when evaluating our systems, and asks how much “rat poison” we would tolerate in our cereal (our AI systems). — Subscribe to catch each new episode on Apple, Spotify and all major platforms. To learn more about EqualAI, visit our website: https://www.equalai.org/ and follow us on Twitter: @ai_equal.
Jun 15, 2022 • 59min

Dr. Richard Benjamins (Telefonica): What are the key ingredients for a successful Responsible AI Framework?

Dr. Richard Benjamins is Chief AI & Data Strategist at Telefonica, author of The Myth of the Algorithm and A Data-Driven Company, and co-founder of OdiseIA. In this week’s episode, Richard offers his roadmap for trustworthy AI, including his company's “aspirational” approach to AI governance, their use of an ethics committee, how they use the bottom line to reinforce their goals, and other best practices in designing responsible AI use.
May 27, 2022 • 52min

Beena Ammanath (Deloitte): What concrete steps companies can (must) take to achieve trustworthy AI

Beena Ammanath is Executive Director of the Global Deloitte AI Institute, author of Trustworthy AI: A Business Guide For Navigating Trust and Ethics in AI, and founder of Humans for AI, a nonprofit working to increase diversity in tech. In this episode, Beena explains where organizations (and others) can begin to embed AI ethics as part of their routine business practice, why it is important for policy makers and organizations alike to focus on use cases when building frameworks, and shares other lessons on how to ensure we create more inclusive, trustworthy AI.
May 10, 2022 • 57min

Dr. Margaret Mitchell: How can we ensure AI reflects our values – and why does this matter to each of us?

Dr. Margaret Mitchell is a renowned researcher who has won numerous awards for her work developing practical tools to combine ethics and machine learning. Last fall, Dr. Mitchell joined the AI startup Hugging Face (“to democratize good machine learning”), having previously held research positions at Google and Microsoft. In this episode, Dr. Mitchell articulates numerous challenges in the endeavor to create ethical AI. She also illuminates the distinction between ethical and responsible AI, the necessity of a human-centered, inclusive approach to AI development, and the need for policymakers to understand AI. — Subscribe to catch each new episode on Apple, Spotify and all major platforms. To learn more about EqualAI, visit our website: https://www.equalai.org/ and follow us on Twitter: @ai_equal.
Apr 26, 2022 • 37min

Rep. Don Beyer (D-VA): Can the U.S. Congress Create Legislative Frameworks to Support AI Development (and should it)?

Rep. Don Beyer (D-VA) is Chair of Congress' Joint Economic Committee, serves on the Ways and Means and the Science, Space and Technology Committees, and is a member of the AI Caucus; in his spare time, he is pursuing a Master's degree in Artificial Intelligence. In this episode, Rep. Beyer explains his enthusiasm for AI and the opportunities it presents to enhance human life (e.g., better understanding and treating long COVID, and preserving life through suicide prevention), the potential harms he is concerned about, and the ability of the US Congress to appropriately address these challenges.
Apr 14, 2022 • 47min

Mira Lane (Microsoft): Can compassion lead to better AI?

Mira Lane, a polymath, technologist, and artist, is the head of Ethics & Society at Microsoft, a multidisciplinary group responsible for guiding AI innovation toward ethical, responsible, and sustainable outcomes. In this episode, she shares how the culture at Microsoft builds compassion into AI development to the benefit of its AI products, how she changes the perception of responsible AI from a tax to a value-add, and how games can play a role in achieving this goal. — Subscribe to catch each new episode on Apple, Spotify and all major platforms. To learn more about EqualAI, visit our website: https://www.equalai.org/ and follow us on Twitter: @ai_equal
