In AI We Trust? cover image

In AI We Trust?

Latest episodes

Sep 23, 2021 • 43min

Taka Ariga and Stephen Sanford: What is the U.S. GAO's AI Framework?

Taka Ariga is the first Chief Data Scientist and Director of the Innovation Lab at the U.S. Government Accountability Office (GAO). Stephen Sanford is the Managing Director of GAO's Strategic Planning and External Liaison team. Taka and Stephen are the authors of the GAO's recently released AI Framework, one of the first resources provided by the U.S. government to identify best practices and principles for deploying, monitoring, and evaluating AI responsibly. In this episode, we ask the AI Framework authors why they took on this initiative and what lessons they learned that apply broadly across industry. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal
Sep 16, 2021 • 41min

Vilas Dhar: How can civil society shape a positive, human-centric future for AI?

Vilas Dhar is a technologist, lawyer, and human rights advocate championing a new social compact for the digital age. As President and Trustee of the Patrick J. McGovern Foundation, he is a global leader in advancing artificial intelligence and data solutions to create a thriving, equitable, and sustainable future for all. In this episode we ask Vilas how he arrived at the intersection of AI and philanthropy, and how he thinks philanthropists and civil society can shape a more inclusive and societally beneficial future for AI.
Aug 24, 2021 • 36min

Steve Mills: How can companies walk the walk on responsible AI?

Steve Mills is a Partner at Boston Consulting Group (BCG), where he serves as Chief AI Ethics Officer and the Global Lead for Artificial Intelligence in the Public Sector. He has worked with dozens of leading companies and government agencies to improve their AI practices, and in this episode he shares some of the key lessons he has learned about how organizations can translate their ethical AI commitments into practical, meaningful actions.
Aug 18, 2021 • 43min

Julia Stoyanovich: Can AI systems operate fairly within complex, diverse societies?

Julia Stoyanovich is an Assistant Professor in the Department of Computer Science and Engineering at NYU's Tandon School of Engineering, where she is also the Director of the Center for Responsible AI. Her research focuses on responsible data management and analysis and on practical tools for operationalizing fairness, diversity, transparency, and data protection in all stages of data acquisition and processing. In addition to conducting field-leading research and teaching, Professor Stoyanovich has written several comics aimed at communicating complex AI issues to diverse audiences.
Aug 10, 2021 • 37min

Oren Etzioni: Why is the term "machine learning" a misnomer?

Dr. Oren Etzioni is Chief Executive Officer at AI2, the Allen Institute for AI, a non-profit that offers foundational research, applied research, and user-facing products. He is Professor Emeritus at the University of Washington and a Venture Partner at the Madrona Venture Group. He has won numerous awards, founded several companies, written over 100 technical papers, and provides commentary on AI for The New York Times, Wired, and Nature. In this episode, Oren explains why "machine learning" is a misnomer and describes some of the exciting AI innovations he is supporting that will result in greater inclusivity.
Aug 5, 2021 • 26min

Alexandra Givens: What makes tech the social justice issue of our time?

Alexandra Reeve Givens is the President & CEO of the Center for Democracy and Technology (CDT). She is an advocate for using technology to increase equality, amplify voices, and promote human rights. Previously, Alexandra was the founding Executive Director of the Institute for Technology Law & Policy at Georgetown Law, served as Chief Counsel for IP and Antitrust on the Senate Judiciary Committee, and began her career as a litigator at Cravath, Swaine & Moore. In this episode, Alexandra explains her unconventional path to the tech space as a lawyer and why she believes technology is the social justice issue of our time.
Jul 28, 2021 • 38min

Navrina Singh: How is AI a multi-stakeholder problem, and how do we solve for it? (Spoiler: it's all about trust.)

Navrina Singh is the Founder & CEO of Credo AI, whose mission is to empower organizations to deliver trustworthy and responsible AI through AI audit and governance products. Navrina serves on the Boards of Directors of Mozilla and Stella Labs. Previously, she led AI product development at Microsoft, where she was responsible for building and commercializing Enterprise Virtual Agents, and spent 12+ years at Qualcomm. In this episode, Navrina shares several insights into responsible AI, including the three key elements of building trust in AI and the four components of the "Ethical AI flywheel."
Jul 22, 2021 • 39min

Andrew Burt: How can lawyers be partners in the AI space?

Andrew Burt is a lawyer specializing in artificial intelligence, information security, and data privacy. He co-founded bnh.ai and serves as chief legal officer of Immuta. His work has been profiled by magazines like Fast Company, and his writing has appeared in Harvard Business Review, the New York Times, and the Financial Times. In this episode, we explore the 'hype cycle' of AI, in which risks are overlooked, and the appropriate role of a lawyer as a partner in this space.
Jun 30, 2021 • 40min

Anima Anandkumar: How can the intersection of academia and industry inform the next generation of AI?

Anima Anandkumar is an accomplished AI researcher in both academia and industry. She is the Bren Professor in Caltech's CMS department and director of machine learning research at NVIDIA. Previously, Anima was a principal scientist at Amazon Web Services, where she enabled machine learning on the cloud infrastructure. Anima is the recipient of numerous awards and honors and has been featured in documentaries and articles by PBS, Wired, MIT Technology Review, Forbes, and many others. In this episode we learn about the "trinity of the deep learning revolution," how the next generation of AI will bring the "mind & body" together, and the detrimental impacts fostered by a lack of diversity in tech.
Jun 24, 2021 • 58min

Vivienne Ming: How can we create AI that lifts society up rather than tearing it down?

Vivienne Ming is an internationally recognized neuroscientist and AI expert who has pushed the boundaries of AI in diverse areas including education, human resources, disability, and physical and mental health. In this episode, we ask Vivienne how we can ensure that society captures the benefits of AI technologies while mitigating their risks and avoiding harms to vulnerable populations.
