

Ethical Machines
Reid Blackman
I talk with the smartest people I can find working or researching anywhere near the intersection of emerging technologies and their ethical impacts.
From AI to social media to quantum computers and blockchain. From hallucinating chatbots to AI judges to who gets control over decentralized applications. If it’s coming down the tech pipeline (or it’s here already), we’ll pick it apart, figure out its implications, and break down what we should do about it.
Episodes

Jun 6, 2023 • 44min
Hiring AI to Hire People
I doubt there’s a large corporation out there that hasn’t been pitched at least a dozen or so AI tools for HR. From vetting resumes to hiring to promoting to firing to predicting the likelihood someone will quit, there’s an AI tool for that.
But HR usually doesn’t know how to vet these systems. Nor does the standard procurement process. And businesses almost never have a process by which HR or procurement can hand these things over to internal AI ethical risk experts.
What’s more, the idea that we can have independent parties “audit” the algorithms for bias is a gross oversimplification of what needs to happen.
I talk about all this and more with Hilke Schellmann and Mona Sloane, Ph.D., both of whom know way more than I do about the ways AI stands between people and the jobs they need.
Hilke Schellmann is an Emmy-award-winning journalism professor at New York University and a freelance reporter holding artificial intelligence accountable. Her work has been published in The Wall Street Journal, The Guardian, The New York Times, and MIT Technology Review, among others. She is currently writing a book on artificial intelligence and the future of work for Hachette.
Mona Sloane, Ph.D. is a sociologist working on design and inequality, specifically in the context of AI design and policy. She is a Research Assistant Professor at NYU’s Tandon School of Engineering, Senior Research Scientist at the NYU Center for Responsible AI, a Fellow with NYU’s Institute for Public Knowledge (IPK) and The GovLab, and the Director of the *This Is Not A Drill* program on technology, inequality and the climate emergency at NYU’s Tisch School of the Arts. She is the principal investigator on multiple research projects on AI and society, and holds an affiliation as postdoctoral scholar with the Tübingen AI Center at the University of Tübingen in Germany where she leads a 3-year federally funded research project on the operationalization of ethics in German AI startups. Mona founded and runs the IPK Co-Opting AI series at NYU and currently serves as editor of the technology section at Public Books. She holds a Ph.D. in Sociology from the London School of Economics and Political Science. Follow her on Twitter @mona_sloane.

May 30, 2023 • 36min
Manipulative AI
You think targeted marketing can manipulate users and populations? Just wait.
Imagine chatbots powered by LLMs at scale. We'll see chatbots trained to be the best salesperson, negotiator, and manipulator you've ever encountered.
All this and more in my conversation with Louis Rosenberg.
Dr. Louis Rosenberg is a longtime technologist in the fields of augmented reality, virtual reality and artificial intelligence. His work began over thirty years ago in labs at Stanford and NASA. In 1992 he developed the first mixed reality system at Air Force Research Laboratory. In 1993 he founded the early VR company Immersion Corporation which he brought public on NASDAQ. In 2004 he founded Outland Research to develop AR technology that was acquired by Google in 2011. And in 2014 he founded Unanimous AI to amplify the intelligence of human groups using the biological principle of Swarm Intelligence. Rosenberg received his PhD from Stanford University, was a tenured professor at California State University, and has been awarded over 300 patents for VR, AR, and AI technologies. He's currently CEO of Unanimous AI, the Chief Scientist of the Responsible Metaverse Alliance, and the Global Technology Advisor to the XR Safety Initiative.

May 23, 2023 • 47min
How Do We Audit AI?
Back in the day, Ryan Carrier of ForHumanity told me to stop saying I do AI “audits.”
I replied, “Why? What’s the difference between an audit and an assessment?” And then he showed me the way.
He’ll show you the way, too, in this episode, which I found particularly edifying. Most helpful, for me, was the explanation for how auditors address areas in which there are ethical disagreements.
Ryan founded ForHumanity after a 25-year career in finance. His global business experience, risk management expertise, and unique perspective on how to manage risk led him to personally launch the non-profit entity ForHumanity. Ryan focused on Independent Audit of AI Systems as one means to mitigate the risk associated with artificial intelligence, and began to build the business model associated with a first-of-its-kind process for auditing corporate AIs, using a global, open-source, crowd-sourced process to determine “best practices”. Ryan serves as ForHumanity’s Executive Director and Chairman of the Board of Directors; in these roles he is responsible for the day-to-day function of ForHumanity and the overall process of Independent Audit. Prior to founding ForHumanity, Ryan owned and operated Nautical Capital, a quantitative hedge fund that employed artificial intelligence algorithms. He was also responsible for Macquarie’s Investor Products business in the late 2000s. He worked at Standard & Poor’s in the Index business and for the International Finance Corporation’s Emerging Markets Database. Ryan has conducted business in over 55 countries and was a frequent speaker at industry conferences around the world. He is a graduate of the University of Michigan. Ryan became a Chartered Financial Analyst (CFA) in 2004.

May 9, 2023 • 37min
Benefits and Cost for Privacy
Join me and my go-to cybersecurity expert guy Matthew Rosenquist as we discuss the challenges and trade-offs in balancing privacy with safety and security.
Matthew Rosenquist is the Chief Information Security Officer (CISO) for Eclipz, the former Cybersecurity Strategist for Intel Corp, and benefits from over 30 diverse years in the fields of cyber, physical, and information security. Matthew specializes in security strategy, measuring value, developing best practices for cost-effective capabilities, and establishing organizations that deliver optimal levels of cybersecurity, privacy, governance, ethics, and safety. As a cybersecurity CISO and strategist, he identifies emerging risks and opportunities to help organizations balance threats, costs, and usability factors to achieve an optimal level of security. Matthew is very active in the industry. He is an experienced keynote speaker, collaborates with industry partners to tackle pressing problems, and has published acclaimed articles, white papers, blogs, and videos on a wide range of cybersecurity topics. Matthew is a member of multiple advisory boards and consults on best practices and emerging risks for academic, business, and government audiences across the globe.

Apr 25, 2023 • 50min
Transparency is Surveillance
Transparency for the sake of accountability is great, right? Well, not so fast.
I talk with C. Thi Nguyen about his thought-provoking argument that the pursuit of transparency can be counterproductive.
C. Thi Nguyen is an Associate Professor of Philosophy at the University of Utah. His research concerns the ways in which our social structures and technologies can shape our values, agency, and rationality. He has written on games, trust, art, echo chambers, cultural appropriation, monuments, and group intimacy. His book, Games: Agency as Art, was awarded the American Philosophical Association’s 2021 Book Prize.

Apr 11, 2023 • 48min
Did You Say "Quantum" Computer?
What in the world are quantum computers, what can they do, and what are the potential ethical implications of this new powerful tech?
Brian and I discuss these issues and more. And don’t worry! No knowledge of physics required.
Brian Lenahan is the Founder & Chair of the Quantum Strategy Institute, author of “Quantum Boost: Using Quantum Computing to Supercharge Your Business”, writes extensively on quantum computing and artificial intelligence, and is a quantum strategist, working with companies to design unique quantum roadmaps. He is a university instructor and former executive with a Top 10 North American Bank.

Mar 28, 2023 • 48min
ChatGPT Does Not Understand Anything
It looks like ChatGPT understands what you’re asking. It looks like ChatGPT understands what it’s saying in reply.
It does not.
Alex and I discuss what understanding is, for both people and machines, and what it would take for a machine to understand what it’s saying.
At the University of London, Alex Grzankowski is the Associate Director of the Institute of Philosophy and a Senior Lecturer at Birkbeck College. He researches and writes on issues in the philosophy of mind and the philosophy of language.

Mar 14, 2023 • 44min
Keeping Blockchain on the Rails
Plain talk, no jargon, explain blockchain to me in language a ten-year-old can understand. I’m joined by Ingrid Vasiliu-Feltes, an expert in risk, compliance, and innovation in healthcare and life sciences and beyond. Ingrid is my go-to person when I need an explanation for what’s happening in the world of tech.
Ingrid is a deep-tech, healthcare, and life sciences executive, who is highly dedicated to digital and ethics advocacy. She is a well-known futurist, globalist, digital strategist, passionate educator, and entrepreneurship ecosystem builder, known as a global thought leader for Blockchain, AI, Quantum Technology, Digital Twins, and Smart Cities. She serves on the Board of numerous organizations and held several leadership roles in the corporate, academic, and not-for-profit arenas throughout her career. She is the recipient of several awards and serves as an Expert Advisor to the EU Blockchain Observatory Forum, a Forbes Business Council member, and an Advisor to the UN Legal and Economic Empowerment Network. She continues to enjoy teaching Ethical Leadership, Innovation, and Digital Transformation at the WBAF Business School-Division of Entrepreneurship, and the University of Miami Business School, the Executive MBA Program.

Mar 14, 2023 • 46min
When Biased AI is Good
Everyone knows biased or discriminatory AI is bad and we need to get rid of it, right? Well, not so fast.
I talk to David Danks, a professor of data science and philosophy at UCSD. He and his research team argue that we need to reconceive how we think about biased AI. In some cases, David argues, biased systems can be beneficial. Good policy, both corporate and regulatory, needs to take this into account.
It was a great discussion and seemed like the perfect way to kick off Ethical Machines. I hope you enjoy it. More importantly, I hope you get something out of it.
David Danks is a Professor of Data Science & Philosophy and affiliate faculty in Computer Science & Engineering at University of California, San Diego. His research interests range widely across philosophy, cognitive science, and machine learning, including their intersection. Danks has examined the ethical, psychological, and policy issues around AI and robotics in transportation, healthcare, privacy, and security. He has also done significant research in computational cognitive science and developed multiple novel causal discovery algorithms for complex types of observational and experimental data. Danks is the recipient of a James S. McDonnell Foundation Scholar Award, as well as an Andrew Carnegie Fellowship. He currently serves on multiple advisory boards, including the National AI Advisory Committee.

Mar 14, 2023 • 3min
What Drives this Podcast
Three claims drive my new podcast, Ethical Machines.
1. New technologies are coming our way every day: AI, blockchain, quantum computers, AR/VR and more.
2. These technologies and their applications will have massive ethical implications.
3. We need to understand these technologies so we can stop our ethical nightmares from being realized.