

In AI We Trust?
Miriam Vogel
In AI We Trust? is a podcast hosted by Miriam Vogel of EqualAI where we survey the global landscape for inspiration and best practices in the AI space with an eye toward responsible, trustworthy AI. Each episode aims to answer a ‘big question’ as we speak to leaders in government, tech and civil society.
Episodes

Oct 6, 2021 • 33min
Rep. Yvette Clarke: Why is AI regulation necessary during this time of racial reckoning?
Find out on this week's episode, as special guest Congresswoman Yvette Clarke (NY-9th) explains why she makes AI a top priority in her work to protect vulnerable populations.
-----
To learn more about EqualAI, visit our website: https://www.equalai.org/
You can also follow us on Twitter: @ai_equal

Sep 28, 2021 • 49min
Elham Tabassi of NIST: Who ensures the U.S. has strong metrics, tools, & standards for responsible AI?
Observers have been skeptical about the ability of the U.S. to lead in AI and establish the framework necessary to ensure its safe and effective development. NIST – the National Institute of Standards and Technology – is responding to that call. In this episode, we speak with Elham Tabassi, who is leading NIST's work to support safe and effective artificial intelligence. Elham is the Chief of Staff in the Information Technology Laboratory (ITL) and serves on the National AI Research Resource Task Force, announced by the White House and the National Science Foundation (NSF) last June. Learn the 'secret sauce' behind NIST's impactful work (spoiler: it involves you) and participate in the discussion through upcoming workshops and listening sessions: https://www.nist.gov/itl/ai-risk-management-framework/ai-rmf-development-request-information

Sep 23, 2021 • 43min
Taka Ariga and Stephen Sanford: What is the U.S. GAO's AI Framework?
Taka Ariga is the first Chief Data Scientist and Director of the Innovation Lab at the U.S. Government Accountability Office (GAO). Stephen Sanford is the Managing Director in GAO's Strategic Planning and External Liaison team. Taka and Stephen are the authors of the GAO's recently released AI Framework, one of the first resources provided by the U.S. government to help identify best practices and principles for deploying, monitoring, and evaluating AI responsibly. In this episode, we ask the AI Framework's authors why they took on this initiative and what lessons they learned that apply broadly across industry.

Sep 16, 2021 • 41min
Vilas Dhar: How can civil society shape a positive, human-centric future for AI?
Vilas Dhar is a technologist, lawyer, and human rights advocate championing a new social compact for the digital age. As President and Trustee of the Patrick J. McGovern Foundation, he is a global leader in advancing artificial intelligence and data solutions to create a thriving, equitable, and sustainable future for all. In this episode we ask Vilas how he arrived at the intersection of AI and philanthropy, and how he thinks philanthropists and civil society can shape a more inclusive and societally beneficial future for AI.

Aug 24, 2021 • 36min
Steve Mills: How can companies walk the walk on responsible AI?
Steve Mills is a Partner at Boston Consulting Group (BCG), where he serves as Chief AI Ethics Officer and the Global Lead for Artificial Intelligence in the Public Sector. He has worked with dozens of leading companies and government agencies to improve their AI practices, and in this episode he shares some of the key lessons he has learned about how organizations can translate their ethical AI commitments into practical, meaningful actions.

Aug 18, 2021 • 43min
Julia Stoyanovich: Can AI systems operate fairly within complex, diverse societies?
Julia Stoyanovich is an Assistant Professor in the Department of Computer Science and Engineering at NYU’s Tandon School of Engineering, where she is also the Director of the Center for Responsible AI. Her research focuses on responsible data management and analysis and on practical tools for operationalizing fairness, diversity, transparency, and data protection in all stages of data acquisition and processing. In addition to conducting field-leading research and teaching, Professor Stoyanovich has written several comics aimed at communicating complex AI issues to diverse audiences.

Aug 10, 2021 • 37min
Oren Etzioni: Why is the term "machine learning" a misnomer?
Dr. Oren Etzioni is Chief Executive Officer at AI2, the Allen Institute for AI, a non-profit that offers foundational research, applied research and user-facing products. He is Professor Emeritus at University of Washington and a Venture Partner at the Madrona Venture Group. He has won numerous awards and founded several companies, has written over 100 technical papers, and provides commentary on AI for The New York Times, Wired, and Nature. In this episode, Oren explains why “machine learning” is a misnomer and some of the exciting AI innovations he is supporting that will result in greater inclusivity.

Aug 5, 2021 • 26min
Alexandra Givens: What makes tech the social justice issue of our time?
Alexandra Reeve Givens is the President & CEO of the Center for Democracy and Technology (CDT). She is an advocate for using technology to increase equality, amplify voices, and promote human rights. Previously, Alexandra was the founding Executive Director of the Institute for Technology Law & Policy at Georgetown Law, served as Chief Counsel for IP and Antitrust on the Senate Judiciary Committee, and began her career as a litigator at Cravath, Swaine & Moore. In this episode, Alexandra explains her unconventional path to the tech space as a lawyer and why she believes technology is the social justice issue of our time.

Jul 28, 2021 • 38min
Navrina Singh: Why is AI a multi-stakeholder problem, and how do we solve for it? (Spoiler: it's all about trust.)
Navrina Singh is the Founder & CEO of Credo AI, whose mission is to empower organizations to deliver trustworthy and responsible AI through AI audit and governance products. Navrina serves on the Board of Directors of Mozilla and Stella Labs. Previously, she was the product leader focused on AI at Microsoft, where she was responsible for building and commercializing Enterprise Virtual Agents, and she spent 12+ years at Qualcomm. In this episode, Navrina shares several insights into responsible AI, including the 3 key elements to building trust in AI and the 4 components of the "Ethical AI flywheel."

Jul 22, 2021 • 39min
Andrew Burt: How can lawyers be partners in the AI space?
Andrew Burt is a lawyer specializing in artificial intelligence, information security and data privacy. He co-founded bnh.ai and serves as chief legal officer of Immuta. His work has been profiled by magazines like Fast Company and his writing has appeared in Harvard Business Review, the New York Times and the Financial Times. In this episode, we explore the 'hype cycle' of AI where risks are overlooked and the appropriate role of a lawyer as a partner in this space.