
Ethical Machines
I talk with the smartest people I can find working or researching anywhere near the intersection of emerging technologies and their ethical impacts.
From AI to social media to quantum computers and blockchain. From hallucinating chatbots to AI judges to who gets control over decentralized applications. If it’s coming down the tech pipeline (or it’s here already), we’ll pick it apart, figure out its implications, and break down what we should do about it.
Latest episodes

Mar 7, 2024 • 47min
We Need AI Regulations
Exploring the need for AI regulation: AI ethics for leaders, regulating social media platforms to address discrimination and the influence of content on elections, the challenges of regulating advanced AI such as self-driving cars, increased scrutiny of high-risk AI applications, addressing ethical risks in AI technologies, and navigating reputational risk at tech companies.

Feb 22, 2024 • 32min
AI Needs Historians
How can we solve AI’s problems if we don’t understand where they came from?
Jason Steinhauer is a public historian and bestselling author of History, Disrupted: How Social Media & the World Wide Web Have Changed the Past. He is the founder of the History Communication Institute; a Global Fellow at The Wilson Center; a Senior Fellow at the Foreign Policy Research Institute; an adjunct professor at the Maxwell School of Citizenship & Public Affairs; a contributor to TIME, CNN, and DEVEX; a past editorial board member of The Washington Post "Made By History" section; and a Presidential Counselor of the National WWII Museum. He previously worked for seven years at the U.S. Library of Congress.

Feb 8, 2024 • 44min
AI in Warfare
How much control should AI have when your enemy has AI too?
As Jeremy Kofsky, a member of the Marine Corps, explains, AI will be everywhere in military operations. That’s a bit frightening, given the speed at which AI operates and the stakes involved. My discussion with Jeremy covers a range of issues, including how and where a human should be in control, what needs to be done given that the enemy can use AI as well, and just how much responsibility lies not with military policy, but with individual commanders.
Jeremy Kofsky is a 20-year Marine with small-unit operational experience on five continents. Over his 12 deployments, he has conducted combat operations, provided tactical- to strategic-level intelligence, and seen the growth of artificial intelligence in the military sphere. He conducts artificial intelligence work for the Key Terrain Cyber Institute as a 2nd Lt J.P. Blecksmith Research Fellow. Recently, he became the first enlisted member ever to complete the year-long Brute Krulak Scholar Program.

Jan 31, 2024 • 38min
Sexy Cyber Threats of GenAI: How to Avoid Exposing Yourself
We're all familiar with cybersecurity threats. Stories of companies being hacked and data and secrets being stolen abound. Now we have generative AI to throw fuel on the fire.
I don't know much about cybersecurity, but Matthew does. In this conversation, he provides some fun and scary stories about how hackers have operated in the past, how they can leverage genAI to get access to things they shouldn't have access to, and what cybersecurity professionals are doing to slow them down.
Matthew Rosenquist is the Chief Information Security Officer (CISO) for Eclipz and the former Cybersecurity Strategist for Intel Corp, with more than 30 years of diverse experience in cyber, physical, and information security. Matthew specializes in security strategy, measuring value, developing best practices for cost-effective capabilities, and establishing organizations that deliver optimal levels of cybersecurity, privacy, governance, ethics, and safety. As a CISO and strategist, he identifies emerging risks and opportunities to help organizations balance threats, costs, and usability to achieve an optimal level of security. Matthew is very active in the industry: he is an experienced keynote speaker, collaborates with industry partners to tackle pressing problems, and has published acclaimed articles, white papers, blogs, and videos on a wide range of cybersecurity topics. He is a member of multiple advisory boards and consults on best practices and emerging risks for academic, business, and government audiences across the globe.

Jan 25, 2024 • 46min
Can AI in the Criminal Justice System Avoid the Minority Report?
When you think about AI in the criminal justice system, you probably think about either biased AI or mass surveillance. This episode focuses on the latter and takes up the following challenge: can we integrate AI into the criminal justice system without realizing the nightmarish picture painted by the film “Minority Report”?
Explaining what that vision is and why it’s important is the goal of my guest, law professor and good friend Guha Krishnamurthi.
Guha is an Associate Professor of Law at the University of Maryland Francis King Carey School of Law. His research interests are in criminal law, constitutional law, and antidiscrimination law. Prior to academia, Guha clerked on the California Supreme Court, U.S. District Court for the Northern District of Illinois, and the U.S. Court of Appeals for the Seventh Circuit. Between those clerkships he worked in private practice for five years in California.

Jan 11, 2024 • 41min
Showing Technologists the Power They Have
My conversation with Chris covered everything from government and corporate surveillance to why we should care about data privacy to the power technologists have and how they should wield it responsibly. Always great to chat with Chris (we’ve been talking about these issues for five years now), and nice to bring the conversation to a larger audience.
Chris Wiggins is an associate professor in the Department of Applied Physics and Applied Mathematics at Columbia University and the chief data scientist at The New York Times. He is a member of Columbia’s Institute for Data Sciences and Engineering, a founding member of the University’s Center for Computational Biology and Bioinformatics, and the co-founder of hackNY, a New York City-based initiative seeking “to create and empower a community of student-technologists.”

Dec 27, 2023 • 46min
Perils of Principles in AI Ethics
The podcast discusses the limitations of applying ethical principles in AI ethics and proposes a more practical and inclusive approach. It explores the concept of participatory deliberative conservatism as an alternative framework for AI ethics, emphasizing stakeholder involvement and identifying underlying values. It also highlights the importance of a participatory approach to decision-making about AI systems in healthcare, along with the role of volunteers and ethicists in quality improvement work in hospitals.

Dec 13, 2023 • 54min
Why Copyright Challenges to AI Learning Will Fail and the Ethical Reasons Why They Shouldn’t
Well, I didn’t see this coming. Talking about legal and philosophical conceptions of copyright turns out to be intellectually fascinating and challenging. It involves not only concepts of property and theft, but also of personhood and invasiveness. Could it be that training AI on an author’s or artist’s work violates their very self?
I talked about all this with Darren Hick, who has written a few books on the topic. I definitely didn’t think he was going to bring up Hegel.
Darren Hudson Hick is an assistant professor of philosophy at Furman University, specializing in philosophical issues in copyright, forgery, authorship, and related areas. He is the author of Artistic License: The Philosophical Problems of Copyright and Appropriation (Chicago, 2017) and Introducing Aesthetics and the Philosophy of Art (Bloomsbury, 2023), and the co-editor of The Aesthetics and Ethics of Copying (Bloomsbury, 2016). Dr. Hick gained significant media attention as one of the first professors to catch a student using ChatGPT to plagiarize an assignment.

Dec 5, 2023 • 46min
Tech Forward Conservatism vs Nature Leaning Liberalism, and Everything in Between
Are you on the political left or the political right? Ben Steyn wants to ask you an analogous question about nature and technology: do you lean tech or do you lean nature?
For instance, what do you think about growing human babies outside of a womb (aka ectogenesis)? Are you inclined to find it an affront to nature and want politicians to make it illegal? Or are you inclined to find it a technological wonder and want to make sure elected officials don’t ban it?
Ben claims that nature vs. tech leanings don’t map neatly onto the political left vs. right distinction. We need a new axis along which to evaluate our politicians.
Really thought-provoking conversation - enjoy!

Nov 2, 2023 • 55min
Morality of the Israel-Hamas War
Before I did AI ethics, I was a philosophy professor, specializing in ethics. One of my senior colleagues in the field was David Enoch, also an ethicist and philosopher of law. David is also Israeli and a long-time supporter of a two-state solution. In fact, he went to military jail for refusing to serve in Gaza for ethical reasons.
Given David’s rare, if not unique, combination of expertise and experience, I wanted to have a conversation with him about the Israel-Hamas war. In the face of the brutal Hamas attacks of October 7, what is it ethically permissible for Israel to do?
David rejects both extremes. It’s not the case that Israel should be pacifist. That would be for Israel to default on its obligations to safeguard its citizens. Nor should Israel bomb Gaza and its people out of existence; that would be to engage in genocide.
If you’re looking for an “Israel is the best and does nothing wrong” conversation, you won’t find it here. If you’re looking for “Israel is the worst and should drop their weapons and go home,” you won’t find that here, either. It’s a complex situation. David and I do our best to navigate it as best we can.
David Enoch studied law and philosophy at Tel Aviv University and then clerked for Justice Beinisch at the Israeli Supreme Court. He received his PhD in philosophy from NYU in 2003 and has been a professor of law and philosophy at the Hebrew University ever since. This year he started as the Professor of the Philosophy of Law at Oxford. He works mainly in moral, political, and legal philosophy.