
Ethical Machines
I talk with the smartest people I can find working or researching anywhere near the intersection of emerging technologies and their ethical impacts.
From AI to social media to quantum computers and blockchain. From hallucinating chatbots to AI judges to who gets control over decentralized applications. If it’s coming down the tech pipeline (or it’s here already), we’ll pick it apart, figure out its implications, and break down what we should do about it.
Latest episodes

Oct 26, 2023 • 48min
Creating Responsible AI in the Face of Our Ignorance
We want to create AI that makes accurate predictions. We want that not only because we want our products to work, but also because reliable products are, all else equal, ethically safe products.
But we can’t always know whether our AI is accurate. Our ignorance leaves us with a question: which of the various AI models that we’ve developed is the right one for this particular use case?
In some circumstances, we might decide that using AI isn’t the right call. We just don’t know enough. In other instances, we may know enough, but we also have to choose our model in light of the ethical values we’re trying to achieve.
Julia and I talk about this and a lot of other (ethical) problems that beset AI practitioners on the ground, and what can and cannot be done about them.
Dr. Julia Stoyanovich is Associate Professor of Computer Science & Engineering and of Data Science, and Director of the Center for Responsible AI at NYU. Her goal is to make “responsible AI” synonymous with “AI”. Julia has co-authored over 100 academic publications and has written for the New York Times, the Wall Street Journal, and Le Monde. She engages in technology policy, has been teaching responsible AI to students, practitioners, and the public, and has co-authored comic books on this topic. She received her Ph.D. in Computer Science from Columbia University.

Oct 12, 2023 • 47min
The Turing Test Is Not Intelligence (and what it would take for AI to understand)
If I looked inside your head while you’re talking, I’d see various neurons lighting up, probably in the prefrontal cortex, as you engage in the reasoning necessary to say whatever it is you’re saying. But if I opened your head and instead found a record player and no brain, I’d realize I was dealing with a puppet, not a person with a mind.
In both cases you’re saying the same things (let’s suppose). But because of what’s going on in the head, or “under the hood,” it’s clear there’s intelligence in the first case and not in the second.
Does an LLM (a large language model like GPT or Bard) have intelligence? Well, to know that, we need to look under the hood, as Lisa Titus argues. It’s not impossible that AI could be intelligent, she says, but judging by what’s going on under the hood at the moment, it isn’t.
Fascinating discussion about the nature of intelligence, why we attribute it to each other (mostly), and why we shouldn’t attribute it to AI.
Lisa Titus (née Lisa Miracchi) is a tenured Associate Professor of Philosophy at the University of Denver.
Previously, she was a tenured Associate Professor of Philosophy at the University of Pennsylvania, where she was also a General Robotics, Automation, Sensing, and Perception (GRASP) Lab affiliate and a MindCORE affiliate.
She works on issues regarding mind and intelligence. What makes intelligent systems different from other kinds of systems? What kinds of explanations of intelligent systems are possible, or most important? What are appropriate conceptions of real-world intelligent capacities like those for agency, knowledge, and rationality? How can conceptual clarity on these issues advance cognitive science and aid in the effective and ethical development and application of AI and robotic systems? Her work draws together diverse literatures in the cognitive sciences, AI, robotics, epistemology, ethics, law, and policy to systematically address these questions.

Sep 29, 2023 • 45min
Innovation Hype and Why We Should Wait on AI Regulation
Innovation is great…but hype is bad. Not only has all this talk of innovation failed to increase innovation, it also creates a bad environment for leaders trying to make reasoned judgments about where to devote resources. So says Lee Vinsel in my latest podcast episode.
ALSO: We want proactive regulations before the sh!t hits the fan, right? Not so fast, says Lee. Proactive regulations presuppose that we’re good at predicting how technologies will be applied, and we have a terrible track record on that front. Perhaps reactive regs are more appropriate (and we should focus on making government more agile).
Super interesting conversation that will push you to think differently about innovation and what appropriate regulation looks like.
Lee Vinsel is an Associate Professor of Science, Technology, and Society at Virginia Tech and host of Peoples & Things, a podcast about human life with technology. His work examines the social dimensions of technology with particular focus on the relationship between government and technological change. He is the author of Moving Violations: Automobiles, Experts, and Regulations in the United States and, with Andrew L. Russell, The Innovation Delusion: How Our Obsession with the New Has Disrupted the Work That Matters Most.

Aug 29, 2023 • 40min
Surprising Digital Twins Opportunities and Risks
Digital twins: they're not just sci-fi doppelgangers; they're a game-changing technology that can simulate real-world scenarios in real time. My latest chat with Ingrid Vasiliu-Feltes opened my eyes to the Pandora's box of ethics we're cracking open.
It's a moral labyrinth. I went from "Why should I care?" to "Oh, I really SHOULD care," and trust me, you will too.
Ingrid is a deep-tech, healthcare, and life sciences executive who is highly dedicated to digital and ethics advocacy. She is a well-known futurist, globalist, digital strategist, passionate educator, and entrepreneurship ecosystem builder, recognized as a global thought leader on blockchain, AI, quantum technology, digital twins, and smart cities. She serves on the boards of numerous organizations and has held several leadership roles in the corporate, academic, and not-for-profit arenas throughout her career. She is the recipient of several awards and serves as an Expert Advisor to the EU Blockchain Observatory Forum, a Forbes Business Council member, and an Advisor to the UN Legal and Economic Empowerment Network. She continues to enjoy teaching Ethical Leadership, Innovation, and Digital Transformation at the WBAF Business School’s Division of Entrepreneurship and in the Executive MBA Program at the University of Miami Business School.

Aug 10, 2023 • 48min
How Do We Distribute Responsibility When AI Goes Wrong?
One company builds the model. Another tweaks the model. Who’s responsible when things go sideways?
David Danks is a Professor of Data Science & Philosophy and affiliate faculty in Computer Science & Engineering at the University of California, San Diego. His research interests range widely across philosophy, cognitive science, and machine learning, including their intersection. Danks has examined the ethical, psychological, and policy issues around AI and robotics in transportation, healthcare, privacy, and security. He has also done significant research in computational cognitive science and developed multiple novel causal discovery algorithms for complex types of observational and experimental data. Danks is the recipient of a James S. McDonnell Foundation Scholar Award, as well as an Andrew Carnegie Fellowship. He currently serves on multiple advisory boards, including the National AI Advisory Committee.

Jul 27, 2023 • 47min
Should We Care About Data Privacy?
You might think it's outrageous that companies collect data about you and use it in various ways to drive profits. The business model of the "attention economy" is often objected to on just these grounds.
On the other hand, does it really matter if data about you is collected and no person ever looks at that data? Is that really an invasion of your privacy?
Carissa and I discuss all this and more. I push the skeptical line, trying on the position that it doesn't really matter all that much. Carissa has powerful arguments against me.
This conversation goes way deeper than the 'privacy good/data collection bad' statements we see all the time. I hope you enjoy!
Carissa Véliz is an Associate Professor in Philosophy at the Institute for Ethics in AI and a Fellow at Hertford College at the University of Oxford. She is the recipient of the 2021 Herbert A. Simon Award for Outstanding Research in Computing and Philosophy. She is the author of the highly acclaimed Privacy Is Power (an Economist book of the year, 2020) and the editor of the Oxford Handbook of Digital Ethics. She advises private and public organisations around the world on privacy and the ethics of AI.

Jul 20, 2023 • 35min
Does Generative AI Undermine Art Schools and Creativity?
Job automation, human creativity, and generative AI in higher education, all wrapped into one. Questions include:
Will there be fewer jobs for designers because gen AI will create marketing materials, websites, etc.?
Will cameras go the way of the darkroom?
What’s the role of gen AI in fine art?
What do art teachers in higher ed do about the new tool?
As an artist and faculty member at the School of Visual Arts, Eric is in a rare position to have insight into all of this. And he’s been my closest friend for the last 25 years :)
Eric Corriel is a multidisciplinary artist living in New York City. After graduating from Cornell University with a Bachelor of Arts in Philosophy, he went on to earn a Diplôme National d’Arts Plastiques from the École Régionale Supérieure d’Expression Plastique in Tourcoing, France. Eric takes the urban landscape as a medium in which to create site-specific installations. He also teaches Artist as Activist at the School of Visual Arts, where he is Digital Strategy Director.
Eric is a two-time New York State Council on the Arts grant recipient, a two-time Webby Award winner, and a New York Foundation for the Arts Fellow.

Jun 29, 2023 • 52min
Algorithmic Abolitionism
Humans are bad at making predictions, especially in a criminal justice setting. And it looks like AI can do better on both accuracy and bias. So let’s replace human judges with AI. So argues professor of law Peter Salib in our fascinating discussion.
Peter Salib is an Assistant Professor of Law at the University of Houston Law Center and Associated Faculty in the Hobby School of Public Affairs. He writes and teaches about law and artificial intelligence. His scholarly work has been published in, among others, The University of Chicago Law Review, Northwestern University Law Review, Texas Law Review, and the Duke Law Journal Online. Before joining the University of Houston Law Center, Peter was a Climenko Fellow at Harvard Law School and a judicial clerk for the Honorable Frank H. Easterbrook. Before that, he practiced law at Sidley Austin, LLP, specializing in appellate litigation.

Jun 20, 2023 • 47min
Choosing Who Should Benefit and Who Should Suffer with AI
I talk a lot about bias, black boxes, and privacy, but perhaps my focus is too narrow. In this conversation, Aimee and I discuss what she calls “sustainable AI.” We focus on the environmental impacts of AI, the ethical significance of those impacts, and who bears the social costs while others reap AI’s benefits.
Aimee van Wynsberghe is the Alexander von Humboldt Professor for Applied Ethics of Artificial Intelligence at the University of Bonn in Germany. Aimee is director of the Institute for Science and Ethics and the Bonn Sustainable AI Lab. She is co-director of the Foundation for Responsible Robotics and a member of the European Commission's High-Level Expert Group on AI. She is a founding editor of the international peer-reviewed journal AI & Ethics and a member of the World Economic Forum's Global Futures Council on Artificial Intelligence and Humanity. She is the author of the book Healthcare Robots: Ethics, Design, and Implementation and is regularly interviewed by media outlets. In each of her roles, Aimee works to uncover the ethical risks associated with emerging robotics and AI. Aimee’s current research, funded by the Alexander von Humboldt Foundation, brings attention to the sustainability of AI by studying the hidden environmental costs of developing and using AI.

Jun 14, 2023 • 45min
In Defense of Black Box AI
Is it better to have a high-performing black box AI or a lower-performing explainable AI?
Are the explanations of how an AI works actually true, or are they a distortion of what's going on inside the model?
How should we think about and operationalize the tradeoff between the benefits of explainability algos and the high cost and carbon footprint of running them?
These questions and more with Kristof. A really fascinating discussion that reveals the complexity behind simplistic calls for explainable AI.
Kristof is currently leading Responsible AI at JPMorgan Chase. Previously, he helped build out and subsequently led the Responsible AI effort at PayPal. He has held various other quantitative roles at major banks and holds a bachelor's in mathematics and a master's in applied mathematics from the Budapest University of Technology and Economics.