Carl Feynman, an AI engineer with a rich background in philosophy and computer science, discusses the looming threat of superintelligent AI. Drawing on his four-decade career, he explains why he sees human extinction as a real possibility of continued AI development. The conversation dives into the history of AI doom arguments, the challenges of aligning AI with human values, and potential doom scenarios. Feynman also explores existential questions about AI's future role in society and the moral implications of technological advancement.