In AI We Trust?

Miriam Vogel
Jan 11, 2023 • 34min

2022 Year in Review: Are we ready for what’s coming in AI?

In this special year-in-review edition of "In AI We Trust?", co-hosts Kay Firth-Butterfield (@KayFButterfield) and Miriam Vogel (@VogelMiriam) look back at the key themes and insights from their conversations. Drawing on interviews with thought leaders, government officials, and senior executives in the field, they explore the past year's progress and challenges in the quest for trustworthy AI. They also look ahead to what you can expect to see and encounter, including key issues likely to emerge in AI in 2023. Join us as we reflect and gear up for an exciting year on the accelerated path toward game-changing and responsible AI.

Materials mentioned in this episode:
- Davos 2023 (World Economic Forum)
- "A 72-year-old congressman goes back to school, pursuing a degree in AI" (The Washington Post)
- "Board Responsibility for Artificial Intelligence Oversight," Miriam Vogel and Robert G. Eccles (Harvard Law School Forum on Corporate Governance)
- "5 ways to avoid artificial intelligence bias with 'responsible AI'," Miriam Vogel and Kay Firth-Butterfield
Dec 19, 2022 • 46min

Dr. Suresh Venkatasubramanian (White House OSTP/Brown University): Can AI be as safe as our seatbelts?

In this episode, we are joined by Dr. Suresh Venkatasubramanian, a former official at the White House Office of Science and Technology Policy (OSTP) and a computer science professor at Brown University, to discuss his policy work at the White House, including the Blueprint for an AI Bill of Rights. Suresh also argues that many of today's AI challenges stem from a failure of imagination, and discusses the need to engage diverse voices in AI development and how safety regulations for new technologies have evolved.

Materials mentioned in this episode:
- Blueprint for an AI Bill of Rights (The White House)
Dec 7, 2022 • 44min

Joaquin Quiñonero Candela (LinkedIn): Can we meet business goals AND attain responsible AI? (spoiler: we can and must)

This week, Joaquin Quiñonero Candela (LinkedIn; formerly at Facebook and Microsoft) joins us to discuss AI storytelling; ethics by design; the imperative of diversity in creating effective AI; and the strategies he uses to make responsible AI a priority for the engineers he manages, the policymakers he advises, and other key stakeholders.

Materials mentioned in this episode:
- Technology Primer: Social Media Recommendation Algorithms (Harvard Belfer Center)
- Finding Solutions: Choice, Control, and Content Policies, a conversation between Karen Hao and Joaquin Quiñonero Candela hosted live by the Harvard Belfer Center
Nov 16, 2022 • 38min

Deputy Secretary Graves (DOC) answers the question: Can We Maintain Our AI Lead? (spoiler alert: We are AI Ready!)

The Department of Commerce plays a key role in the US government's leadership in AI, given the many ways AI is used, patented, and governed by the Department. In this special episode, hear from Commerce Deputy Secretary Don Graves on how the US intends to maintain its leadership in AI, including by creating standards for trustworthy AI, working with our allies, and ensuring an inclusive, AI-ready workforce.

Materials mentioned in this episode:
- Proposed Law Enforcement Principles on the Responsible Use of Facial Recognition Technology (World Economic Forum)
- Artificial Intelligence: Detecting Marine Animals with Satellites (NOAA Fisheries)
Nov 2, 2022 • 44min

Carl Hahn (NOC): When your AI reaches from the cosmos to the seafloor, and the universe in between, how can you ensure it is safe and trustworthy?

Carl Hahn, Vice President and Chief Compliance Officer at Northrop Grumman, one of the world's largest military technology providers, joins us on this episode to help answer a question he addresses daily. Carl shares his perspective on the impact of the DoD's AI ethics principles, how governments and companies need to align on the "how" of developing and using AI responsibly, and much more.

Materials mentioned in this episode:
- NAIAC Field Hearing (NIST YouTube page)
- "DOD Adopts 5 Principles of Artificial Intelligence Ethics" (Department of Defense)
- "Defense AI Technology: Worlds Apart From Commercial AI" (Northrop Grumman)
- Smart Toy Awards (World Economic Forum)
Oct 12, 2022 • 29min

Mark Brayan (Appen): For whom is your data performing?

In this episode, Mark Brayan focuses on a key ingredient for responsible AI: ethically sourced, inclusive data. Mark is the CEO and Managing Director of Appen, which provides training data for thousands of machine learning and AI initiatives. Good-quality data is imperative for responsible AI (garbage in, garbage out), and part of that equation is making sure the data is sourced inclusively, responsibly, and ethically. When developing and deploying AI, it is critical to get your data right by asking the right questions: for whom is your data performing, and for whom could it fail?

Subscribe to catch each new episode on Apple, Spotify, and all major platforms. To learn more about EqualAI, visit our website: https://www.equalai.org/ and follow us on Twitter: @ai_equal.
Sep 28, 2022 • 44min

Krishnaram Kenthapadi (Fiddler.ai): Citizen audits are coming; are you ready?

Krishnaram Kenthapadi is the Chief Scientist of Fiddler AI, an enterprise startup building a responsible AI and machine learning monitoring platform. Before Fiddler AI, Krishnaram served as a Principal Scientist at Amazon AWS AI, on the LinkedIn AI team, and on Microsoft's AI and Ethics in Engineering and Research (AETHER) Advisory Board. In this episode, Krishnaram stresses that validating a model before release is not enough; it must continue to be tested after deployment. He also highlights an incentive to test your AI early and often: even without new laws in place, empowered, tech-savvy citizens are increasingly taking audits into their own hands.
Sep 14, 2022 • 46min

Dr. Edson Prestes: Can we ingrain empathy into our AI?

Dr. Prestes is Professor of Computer Science at the Institute of Informatics, Federal University of Rio Grande do Sul, and leader of the Phi Robotics Research Group. In this episode, he shares his trailblazing work in international AI policy and standards, including the development of the first global AI ethics instrument. Dr. Prestes discusses ethics in technology, the infusion of empathy into AI, and his focus on establishing human rights for the digital world.
Aug 24, 2022 • 52min

Joe Bradley (LivePerson): How much 'rat poison' is in our AI and can AI be more "human"?

Joe Bradley is the Chief Scientist at LivePerson, a leading conversational AI company creating digital experiences that are "Curiously Human" and powering nearly a billion conversational interactions monthly through its Conversational Cloud. In this episode, Joe shares the broad lens he brings to his work in AI. He discusses the interconnectedness of AI and humanity, and his work at LivePerson to develop "empathetic" AI systems that help brands better connect with their customers. Joe reflects on his experience in the EqualAI Badge program and on basic challenges in reducing bias in AI, from determining what to measure to deciding whom to consider when evaluating our systems; he asks how much "rat poison" we tolerate in our cereal (our AI systems).
Jun 15, 2022 • 59min

Dr. Richard Benjamins (Telefonica): What are the key ingredients for a successful Responsible AI Framework?

Dr. Richard Benjamins is Chief AI & Data Strategist at Telefonica, author of The Myth of the Algorithm and A Data-Driven Company, and co-founder of OdiseIA. In this week's episode, Richard offers his roadmap for trustworthy AI, including his company's "aspirational" approach to AI governance, its use of an ethics committee, how it ties responsible AI goals to the bottom line, and other best practices for designing responsible AI.
