In AI We Trust?

Miriam Vogel
Dec 7, 2022 • 44min

Joaquin Quiñonero Candela (LinkedIn): Can we meet business goals AND attain responsible AI? (spoiler: we can and must)

This week, Joaquin Quiñonero Candela (LinkedIn, formerly at Facebook and Microsoft) joins us to discuss AI storytelling; ethics by design; the imperative of diversity to create effective AI; and strategies he uses to make responsible AI a priority for the engineers he manages, policy-makers he advises, and other important stakeholders.
Materials mentioned in this episode:
Technology Primer: Social Media Recommendation Algorithms (Harvard Belfer Center)
Finding Solutions: Choice, Control, and Content Policies, a conversation between Karen Hao and Joaquin Quiñonero Candela hosted live by the Harvard Belfer Center
Nov 16, 2022 • 38min

Deputy Secretary Graves (DOC) answers the question: Can We Maintain Our AI Lead? (spoiler alert: We are AI Ready!)

The Department of Commerce plays a key role in the US government's leadership in AI, given the multiple ways AI is used, patented, and governed by the Department. In this special episode, Commerce Deputy Secretary Don Graves explains how the US intends to maintain its leadership in AI, including by creating standards for trustworthy AI, working with our allies, and ensuring an inclusive, AI-ready workforce.
Materials mentioned in this episode:
Proposed Law Enforcement Principles on the Responsible Use of Facial Recognition Technology, released from the World Economic Forum
Artificial Intelligence: Detecting Marine Animals with Satellites (NOAA Fisheries)
Nov 2, 2022 • 44min

Carl Hahn (NOC): When your AI reaches from the cosmos to the seafloor, and the universe in between, how can you ensure it is safe and trustworthy?

Carl Hahn, Vice President and Chief Compliance Officer at Northrop Grumman, one of the world's largest military technology providers, joins us on this episode to help answer a question he addresses daily. Carl shares his perspective on the impact of the DoD principles, how governments and companies need to align on the "how" of developing and using AI responsibly, and much more.
Materials mentioned in this episode:
NAIAC Field Hearing (NIST YouTube page)
"DOD Adopts 5 Principles of Artificial Intelligence Ethics" (Department of Defense)
"Defense AI Technology: Worlds Apart From Commercial AI" (Northrop Grumman)
Smart Toys (World Economic Forum): Smart Toy Awards
Oct 12, 2022 • 29min

Mark Brayan (Appen): For whom is your data performing?

In this episode, Mark Brayan focuses on a key ingredient for responsible AI: ethically sourced, inclusive data. Mark is the CEO and Managing Director of Appen, which provides training data for thousands of machine learning and AI initiatives. Good-quality data is imperative for responsible AI (garbage in, garbage out), and part of that equation is making sure the data is sourced inclusively, responsibly, and ethically. When developing and using responsible AI, it is critically important to get your data right by asking the right questions: for whom is your data performing, and for whom could it fail?
Subscribe to catch each new episode on Apple, Spotify, and all major platforms. To learn more about EqualAI, visit our website: https://www.equalai.org/ and follow us on Twitter: @ai_equal.
Sep 28, 2022 • 44min

Krishnaram Kenthapadi (Fiddler.ai): Citizen audits are coming; are you ready?

Krishnaram is the Chief Scientist of Fiddler AI, an enterprise startup building a responsible AI and machine learning monitoring platform. Prior to Fiddler AI, Krishnaram served as Principal Scientist at Amazon AWS AI, on the LinkedIn AI team, and on Microsoft's AI and Ethics in Engineering and Research (AETHER) Advisory Board. In this episode, Krishnaram stresses that validating a model before release is not enough: it must continue to be tested after deployment. He also highlights incentives to test your AI early and often: even without new laws in place, empowered and tech-savvy citizens are increasingly taking audits into their own hands.
Subscribe to catch each new episode on Apple, Spotify, and all major platforms. To learn more about EqualAI, visit our website: https://www.equalai.org/ and follow us on Twitter: @ai_equal.
Sep 14, 2022 • 46min

Dr. Edson Prestes: Can we ingrain empathy into our AI?

Dr. Prestes is Professor of Computer Science at the Institute of Informatics, Federal University of Rio Grande do Sul, and leader of the Phi Robotics Research Group. In this episode, Dr. Prestes shares his trailblazing work in international AI policy and standards, including the development of the first global AI ethics instrument. He discusses ethics in technology and the infusion of empathy, as well as his focus on establishing human rights for a digital world.
Subscribe to catch each new episode on Apple, Spotify, and all major platforms. To learn more about EqualAI, visit our website: https://www.equalai.org/ and follow us on Twitter: @ai_equal.
Aug 24, 2022 • 52min

Joe Bradley (LivePerson): How much 'rat poison' is in our AI and can AI be more "human"?

Joe Bradley is the Chief Scientist at LivePerson, a leading conversational AI company creating digital experiences that are "Curiously Human" and powering nearly a billion conversational interactions monthly in its Conversational Cloud. In this episode, Joe shares the broad lens he brings to his work in AI. He discusses the interconnectedness of AI and humanity, and his work at LivePerson developing "empathetic" AI systems that help brands better connect with their customers. Joe also reflects on his experience in the EqualAI Badge program and on fundamental challenges in reducing bias in AI, from deciding what to measure to deciding whom to consider when evaluating our systems, and asks how much "rat poison" we are willing to tolerate in our cereal (our AI systems).
Subscribe to catch each new episode on Apple, Spotify, and all major platforms. To learn more about EqualAI, visit our website: https://www.equalai.org/ and follow us on Twitter: @ai_equal.
Jun 15, 2022 • 59min

Dr. Richard Benjamins (Telefonica): What are the key ingredients for a successful Responsible AI Framework?

Dr. Richard Benjamins is Chief AI & Data Strategist at Telefonica, author of The Myth of the Algorithm and A Data-Driven Company, and co-founder of OdiseIA. In this week's episode, Richard offers his roadmap for trustworthy AI, including his company's "aspirational" approach to AI governance, its use of an ethics committee, how it uses the bottom line to reinforce its goals, and other best practices for designing responsible AI use.
May 27, 2022 • 52min

Beena Ammanath (Deloitte): What concrete steps companies can (must) take to achieve trustworthy AI

Beena Ammanath is Executive Director of the Global Deloitte AI Institute, author of Trustworthy AI: A Business Guide for Navigating Trust and Ethics in AI, and founder of Humans for AI, a nonprofit working to increase diversity in tech. In this episode, Beena explains where organizations (and others) can begin to embed AI ethics into their routine business practices, why it is important for policymakers and organizations alike to focus on use cases when building frameworks, and shares other lessons on how to ensure we create more inclusive, trustworthy AI.
May 10, 2022 • 57min

Dr. Margaret Mitchell: How can we ensure AI reflects our values – and why this matters to each of us?

Dr. Margaret Mitchell is a renowned researcher who has won numerous awards for her work developing practical tools to combine ethics and machine learning. Last fall, Dr. Mitchell joined the AI startup HuggingFace (whose mission is "to democratize good machine learning"); she previously held research positions at Google and Microsoft. In this episode, Dr. Mitchell articulates numerous challenges in the endeavor to create ethical AI. She also illuminates the distinction between ethical and responsible AI, the necessity of a human-centered, inclusive approach to AI development, and the need for policymakers to understand AI.
Subscribe to catch each new episode on Apple, Spotify, and all major platforms. To learn more about EqualAI, visit our website: https://www.equalai.org/ and follow us on Twitter: @ai_equal
