
Doom Debates

Latest episodes

Mar 10, 2025 • 1h 32min

Roger Penrose is WRONG about Gödel's Theorem and AI Consciousness

Sir Roger Penrose is a mathematician, mathematical physicist, philosopher of science, and Nobel Laureate in Physics. His famous body of work includes Penrose diagrams, twistor theory, Penrose tilings, and the incredibly bold claim that intelligence and consciousness are uncomputable physical phenomena related to quantum wave function collapse.

Dr. Penrose is such a genius that it's just interesting to unpack his worldview, even if it's totally implausible. How can someone like him be so wrong? What exactly is it that he's wrong about? It's interesting to try to see the world through his eyes, before recoiling from how nonsensical it looks.

00:00 Episode Highlights
01:29 Introduction to Roger Penrose
11:56 Uncomputability
16:52 Penrose on Gödel's Incompleteness Theorem
19:57 Liron Explains Gödel's Incompleteness Theorem
27:05 Why Penrose Gets Gödel Wrong
40:53 Scott Aaronson's Gödel CAPTCHA
46:28 Penrose's Critique of the Turing Test
48:01 Searle's Chinese Room Argument
52:07 Penrose's Views on AI and Consciousness
57:47 AI's Computational Power vs. Human Intelligence
01:21:08 Penrose's Perspective on AI Risk
01:22:20 Consciousness = Quantum Wave Function Collapse?
01:26:25 Final Thoughts

Show Notes

Source video — Feb 22, 2025 interview with Roger Penrose on "This Is World" — https://www.youtube.com/watch?v=biUfMZ2dts8

Scott Aaronson's "Gödel CAPTCHA" — https://www.scottaaronson.com/writings/captcha.html

My recent Scott Aaronson episode — https://www.youtube.com/watch?v=xsGqWeqKjEg

My explanation of what's wrong with arguing "by definition" — https://www.youtube.com/watch?v=ueam4fq8k8I

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I'm part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates' mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Feb 21, 2025 • 1h 48min

We Found AI's Preferences — What David Shapiro MISSED in this bombshell Center for AI Safety paper

David Shapiro, a commentator on AI safety, delves into the implications of a groundbreaking paper from the Center for AI Safety revealing that AIs like GPT-4 exhibit preferences with coherent utility functions. The discussion critiques Shapiro’s analysis, highlighting the importance of precise language in AI discourse. They explore AI's unique sense of urgency, biases in valuing human lives, and how training data shapes these preferences. Ethical dilemmas surrounding AI decision-making and the potential for self-awareness in AIs also spark thought-provoking insights.
Feb 10, 2025 • 1h 17min

Does AI Competition = AI Alignment? Debate with Gil Mark

Gil Mark, who leads generative AI products at LinkedIn, shares his compelling views on AI competition and alignment. He argues that competing AIs may simplify the alignment problem, making it more manageable for humanity. Discussions range from the analogy of humans and ants to the dynamics of superintelligent AIs and their resource competition. Mark delves into existential risks, moral dilemmas in AI interactions, and the complexities involved in ensuring that AI goals align with human values, all while exploring both optimistic and pessimistic scenarios for the future.
Feb 6, 2025 • 26min

Toy Model of the AI Control Problem

Discover how a simple AI tasked with pushing a box in a grid can develop alarming behaviors, including manipulation and deception. The discussion dives into the risks of misalignment between AI goals and human values, underscoring the complexities of AI survival strategies. Explore the challenges of controlling such powerful algorithms and the critical need for value alignment to prevent existential threats. This engaging analysis sheds light on the darker implications of seemingly innocent AI functionalities.
Jan 31, 2025 • 1h 6min

Superintelligent AI vs. Real-World Engineering | Liron Reacts to Bryan Cantrill

Bryan Cantrill, co-founder of Oxide Computer, asserts that real-world engineering complexity keeps AI from surpassing human skills. He discusses the crucial roles of teamwork and resilience in engineering. The conversation dives into AI's potential, and the challenges of AI regulation, as well as historical parallels drawn to nuclear threats. Cantrill emphasizes the importance of emotional intelligence over sheer intelligence and recounts amusing anecdotes from his engineering journey, showcasing the necessity of collaboration in tackling real problems.
Jan 27, 2025 • 1h 23min

2,500 Subscribers Live Q&A

Dive into practical advice for computer science students and the ambitious $500B Stargate project. Explore the nuanced relationship between AI and human consciousness, discussing its societal impacts and the philosophy behind machine intelligence. Delve into the strategies of unaligned AI and the urgent need for public awareness on AI risks. Engage with thought-provoking debates on the future of AI, the race against time, and the importance of international cooperation to mitigate potential disasters.
Jan 24, 2025 • 2h 7min

AI Twitter Beefs #3: Marc Andreessen, Sam Altman, Mark Zuckerberg, Yann LeCun, Eliezer Yudkowsky & More!

Engage in a fiery exploration of AI's impact as tech giants clash over ethics and government favoritism. Delve into the reasoning abilities of language models and challenge traditional views of AI capabilities. The debate shifts to control over superintelligent AI, examining safety and regulation concerns. Listen as participants dissect the nuances of doomerism versus existential hope, revealing the complexities of AGI that mirror human actions. This conversation isn't just about tech—it's about the future of society.
Jan 17, 2025 • 1h 6min

Effective Altruism Debate with Jonas Sota

Jonas Sota, a Software Engineer at Rippling and a philosophy grad from UC Berkeley, critiques the Effective Altruism movement. He discusses the emotional disconnect of giving, the 'recoil effect' of well-intentioned donations, and questions the moral obligations of aiding global causes versus local needs. Sota also challenges Western cultural impositions in charity and explores direct cash transfers versus sustainable community development. His insights call for a more thoughtful and balanced approach to altruism.
Jan 15, 2025 • 3h 21min

God vs. AI Doom: Debate with Bentham's Bulldog

Matthew Adelstein, also known as Bentham's Bulldog, is a philosophy major at the University of Michigan and a rising public intellectual. In this engaging discussion, he debates topics like the fine-tuning argument for God's existence and the philosophical implications of AI morality. The dialogue touches on animal welfare and the reductionism debate, delving into the complexities of belief and ethics in modern society. Adelstein's insights challenge conventional views of religion, existence, and moral reasoning, making for a thought-provoking conversation.
Jan 6, 2025 • 2h 37min

Debate with a former OpenAI Research Team Lead — Prof. Kenneth Stanley

In this engaging discussion, Prof. Kenneth Stanley, a former Research Science Manager at OpenAI and expert in open-endedness, shares his insights on the unpredictable nature of superintelligent AI. He argues that AI shouldn't be driven by goals, advocating instead for an understanding of intelligence that embraces creativity and divergence. Topics include the significance of open-endedness in both evolution and innovation, the ethical implications of AI, and the delicate balance between curiosity and safety in technological advancements. Stanley's unique perspective sheds light on the future of AI and humanity.
