The Valmy

Peter Hartree
Aug 13, 2025 • 2h 26min

Debate with Vitalik Buterin — Will “d/acc” Protect Humanity from Superintelligent AI?

Podcast: Doom Debates
Episode: Debate with Vitalik Buterin — Will “d/acc” Protect Humanity from Superintelligent AI?
Release date: 2025-08-12
Get Podcast Transcript → powered by Listen411 - fast audio-to-text and summarization

Vitalik Buterin is the founder of Ethereum, the world's second-largest cryptocurrency by market cap, currently valued at around $500 billion. But beyond revolutionizing blockchain technology, Vitalik has become one of the most thoughtful voices on AI safety and existential risk. He's donated over $665 million to pandemic prevention and other causes, and has a 12% P(Doom) – putting him squarely in what I consider the "sane zone" for AI risk assessment. What makes Vitalik particularly interesting is that he's both a hardcore techno-optimist who built one of the most successful decentralized systems ever created, and someone willing to seriously consider AI regulation and coordination mechanisms.

Vitalik coined the term "d/acc" – defensive, decentralized, democratic, differential acceleration – as a middle path between uncritical AI acceleration and total pause scenarios. He argues we need to make the world more like Switzerland (defensible, decentralized) and less like the Eurasian steppes (vulnerable to conquest).

We dive deep into the tractability of AI alignment, whether current approaches like d/acc can actually work when superintelligence arrives, and why he thinks a pluralistic world of competing AIs might be safer than a single aligned superintelligence. We also explore his vision for human-AI merger through brain-computer interfaces and uploading.

The crux of our disagreement is that I think we're heading for a "plants vs. animals" scenario where AI will simply operate on timescales we can't match, while Vitalik believes we can maintain agency through the right combination of defensive technologies and institutional design.

Finally, we tackle the discourse itself – I ask Vitalik to debunk the common ad hominem attacks against AI doomers, from "it's just a fringe position" to "no real builders believe in doom." His responses carry weight given his credibility as both a successful entrepreneur and someone who's maintained intellectual honesty throughout his career.

Timestamps
* 00:00:00 - Cold Open
* 00:00:37 - Introducing Vitalik Buterin
* 00:02:14 - Vitalik's altruism
* 00:04:36 - Rationalist community influence
* 00:06:30 - Opinion of Eliezer Yudkowsky and MIRI
* 00:09:00 - What’s Your P(Doom)™
* 00:24:42 - AI timelines
* 00:31:33 - AI consciousness
* 00:35:01 - Headroom above human intelligence
* 00:48:56 - Techno optimism discussion
* 00:58:38 - e/acc: Vibes-based ideology without deep arguments
* 01:02:49 - d/acc: Defensive, decentralized, democratic acceleration
* 01:11:37 - How plausible is d/acc?
* 01:20:53 - Why libertarian acceleration can paradoxically break decentralization
* 01:25:49 - Can we merge with AIs?
* 01:35:10 - Military AI concerns: How war accelerates dangerous development
* 01:42:26 - The intractability question
* 01:51:10 - Anthropic and tractability-washing the AI alignment problem
* 02:00:05 - The state of AI x-risk discourse
* 02:05:14 - Debunking ad hominem attacks against doomers
* 02:23:41 - Liron’s outro

Links
* Vitalik’s website: https://vitalik.eth.limo
* Vitalik’s Twitter: https://x.com/vitalikbuterin
* Eliezer Yudkowsky’s explanation of p-zombies: https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies

Doom Debates’ mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Jul 8, 2025 • 31min

Trump’s tech bros: The enigma of Peter Thiel

Join Tabby Kinder, FT's West Coast financial editor, and Gillian Tett, FT columnist, as they delve into the complex world of Peter Thiel. Kinder unpacks Thiel’s significant investments and his unique position in Silicon Valley, while Tett explores his controversial political philosophy and ties to Donald Trump. They discuss Thiel's influence over technology and politics, the intersection of libertarian ideals with his venture capitalism, and how his disruptive ideas continue to shape American political discourse.
Jun 5, 2025 • 39min

Ep 114: Flying Cars Are About to Change the World — Joby CEO JoeBen Bevirt

Podcast: Joe Lonsdale: American Optimist
Episode: Ep 114: Flying Cars Are About to Change the World — Joby CEO JoeBen Bevirt
Release date: 2025-06-04

JoeBen Bevirt has spent two decades building electric vertical take-off and landing (eVTOL) aircraft, and now he's on the cusp of commercial approval and rollout. Will flying cars be as transformational as the automobile? How will air taxis impact our cities and the way we live? And how did JoeBen achieve this feat of ingenuity?

This week we're joined by the co-founder and CEO of Joby Aviation, an American aviation company pioneering eVTOL aircraft for air taxi service. All-electric, virtually silent, and traveling up to 200mph with a pilot and four passengers, Joby is opening new possibilities in the skies above — starting at the price of an Uber Black. The implications for productivity and quality of life are massive: saving the average person an hour or two a day sitting in traffic and unlocking new swaths of land for development.

I'm proud that 8VC co-led Joby's first investment round about a decade ago, when many others, even flying enthusiasts, thought it was a pipe dream. Since then, Joby has single-handedly shaped an entire new industry, from engineering breakthroughs to regulatory pathways, ensuring that American aviation stays ahead of China. Joby expects its first passenger rides in Dubai within a year and is working closely with the Trump administration as it nears the final stages of FAA approval. Inspired by SpaceX, Joby is vertically integrated and plans to aggressively ramp manufacturing here in the U.S., backed by a $500 million investment from Toyota (bringing Toyota's total investment near $900 million). While we await the first passenger flights, Joby is also building out its infrastructure nationwide — and they're looking for real estate and partners! You can contact JoeBen and the team here: info@jobyaviation.com

Timestamps
* 00:00 Episode Intro
* 01:38 Flying cars are here
* 04:00 JoeBen's journey
* 05:48 Battery progress & hydrogen breakthroughs
* 08:50 Air taxi for the price of Uber Black
* 12:35 When will commercial flights start?
* 20:30 Why Joby is the industry leader
* 24:20 Why China is copying Joby
* 28:00 How air taxis will change your life
* 32:10 How Joby will transform real estate
* 35:45 Solving intractable problems

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit blog.joelonsdale.com
Apr 28, 2025 • 1h 46min

Richard Ngo - A State-Space of Positive Posthuman Futures [Worthy Successor, Episode 8]

Podcast: The Trajectory
Episode: Richard Ngo - A State-Space of Positive Posthuman Futures [Worthy Successor, Episode 8]
Release date: 2025-04-25

This is an interview with Richard Ngo, AGI researcher and thinker, with extensive stints at both OpenAI and DeepMind. It is an additional installment of our "Worthy Successor" series, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.

This episode referred to the following other essays and resources:
* A Worthy Successor - The Purpose of AGI: https://danfaggella.com/worthy
* Richard's exploratory fiction writing: https://narrativeark.xyz/

Watch this episode on The Trajectory YouTube channel: https://youtu.be/UQpds4PXMjQ
See the full article from this episode: https://danfaggella.com/ngo1

There are three main questions we cover here on The Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?

If this sounds like it's up your alley, then be sure to stick around and connect:
* Blog: danfaggella.com/trajectory
* X: x.com/danfaggella
* LinkedIn: linkedin.com/in/danfaggella
* Newsletter: bit.ly/TrajectoryTw
* Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Mar 11, 2025 • 1h 14min

AI, data centers, and power economics, with Azeem Azhar

Podcast: Complex Systems with Patrick McKenzie (patio11)
Episode: AI, data centers, and power economics, with Azeem Azhar
Release date: 2025-02-27

Patrick McKenzie (patio11) is joined by Azeem Azhar, writer of the Exponential View newsletter, to discuss the massive data center buildout powering AI and its implications for our energy infrastructure. The conversation covers the physical limitations of modern data centers, the challenges of electricity generation, the societal ripples from historical large-scale infrastructure investments like railways and telecommunications, and the future of energy, including solar, nuclear and geothermal power. Through their discussion, Patrick and Azeem explain why our mental models for both computing and energy systems need to be updated.

Full transcript available here: www.complexsystemspodcast.com/ai-llm-data-center-power-economics/

Sponsors: SafeBase | Check
Ready to save time and close deals faster? Inbound security reviews shouldn’t slow down your team or your sales cycle. Leading companies use SafeBase to eliminate up to 98% of inbound security questionnaires, automate workflows, and accelerate pipeline. Go to safebase.io/podcast
Check is the leading payroll infrastructure provider and pioneer of embedded payroll. Check makes it easy for any SaaS platform to build a payroll business, and already powers 60+ popular platforms. Head to checkhq.com/complex and tell them patio11 sent you.

Recommended in this episode:
* Azeem’s newsletter: https://www.exponentialview.co/
* Azeem Azhar’s guest essay, “The 19th-Century Technology That Threatens A.I.”: https://www.nytimes.com/2024/12/28/opinion/ai-electricity-power-plants.html
* Electric Twin: https://www.electrictwin.com/
* Video of Elon Musk’s Colossus: https://www.youtube.com/watch?v=Tw696JVSxJQ
* Complex Systems with Travis Dauwalter on the electrical grid: https://open.spotify.com/episode/5JY8e84sEXmHFlc8IR2kRb?si=35ymIC0UQ5SKdV8rrBcgIw
* Complex Systems with Austin Vernon on fracking: https://open.spotify.com/episode/0YDV1XyjUCM2RtuTcBGYH9?si=YshjUXPEQBiScNxrNaI-Gw
* Complex Systems with Casey Handmer on direct capture of CO2 to turn into hydrocarbons: https://open.spotify.com/episode/0GHegWgLSubYxvATmbWhQu?si=xNYBjn0ZTX2IT_pAZ5Ozsg

Twitter: @azeem, @patio11

Timestamps:
(00:00) Intro
(00:27) The power economics of data centers
(01:12) Historical infrastructure rollouts
(04:58) The telecoms bubble
(06:22) Unprecedented enterprise spend on AI capabilities
(11:12) Let's have your LLM talk to my LLM
(16:44) Is there a saturation point?
(19:25) Sponsors: SafeBase | Check
(21:55) What’s in a data center?
(24:52) The challenges of data centers
(29:40) Geographical considerations for data centers
(36:53) Energy consumption and future needs
(40:48) Challenges in building transmission lines
(41:35) The solar power learning curve
(43:51) Small modular nuclear reactors
(51:26) Geothermal energy and fracking
(01:01:34) The future of AI and energy systems
(01:12:57) Wrap
Feb 14, 2025 • 2h 44min

#212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway

Allan Dafoe, Director of Frontier Safety and Governance at Google DeepMind, dives into the unstoppable force of technology. He discusses how military and economic competition can push societies to adopt new technologies, often leading to a race against less cautious entities. Dafoe highlights the historical context of Japan's Meiji Restoration, demonstrating the urgency of technological adaptation. The conversation shifts to AI governance, stressing the need for collaboration to ensure safe AI advancements and addressing the complexities of AI alignment in our rapidly changing world.
Feb 14, 2025 • 1h 33min

Claude Cooperates! Exploring Cultural Evolution in LLM Societies, with Aron Vallinder & Edward Hughes

Edward Hughes from Google DeepMind and independent researcher Aron Vallinder dive into the cultural evolution of cooperation among AI agents. They discuss how different models, like Claude and GPT-4.0, exhibit unique cooperative behaviors in simulated environments. Their insights include the significance of communication in maintaining trust and the implications for societal impacts of autonomous AI. They also emphasize the importance of understanding externalities and the role of community engagement in shaping responsible AI development.
Jan 18, 2025 • 2h 2min

AI in 2030, Scaling Bottlenecks, and Explosive Growth

Podcast: Epoch After Hours
Episode: AI in 2030, Scaling Bottlenecks, and Explosive Growth
Release date: 2025-01-16

In our first episode of Epoch After Hours, Ege, Tamay and Jaime dig into what they expect AI to look like by 2030; why economists are underestimating the likelihood of explosive growth; the startling regularity in technological trends like Moore's Law; Moravec’s paradox, and how we might overcome it; and much more!
Jan 16, 2025 • 1h 13min

Ajeya Cotra on AI safety and the future of humanity

Podcast: AI Summer
Episode: Ajeya Cotra on AI safety and the future of humanity
Release date: 2025-01-16

Ajeya Cotra works at Open Philanthropy, a leading funder of efforts to combat existential risks from AI. She has led the foundation’s grantmaking on technical research to understand and reduce catastrophic risks from advanced AI. She is co-author of Planned Obsolescence, a newsletter about AI futurism and AI alignment.

Although a committed doomer herself, Cotra has worked hard to understand the perspectives of AI safety skeptics. In this episode, we asked her to guide us through the contentious debate over AI safety and—perhaps—explain why people with similar views on other issues frequently reach divergent views on this one. We spoke to Cotra on December 10.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aisummer.org
Nov 30, 2024 • 2h 30min

Nora Belrose - AI Development, Safety, and Meaning

Podcast: Machine Learning Street Talk (MLST)
Episode: Nora Belrose - AI Development, Safety, and Meaning
Release date: 2024-11-17

Nora Belrose, Head of Interpretability Research at EleutherAI, discusses critical challenges in AI safety and development. The conversation begins with her technical work on concept erasure in neural networks through LEACE (LEAst-squares Concept Erasure), while highlighting how neural networks' progression from simple to complex learning patterns could have important implications for AI safety.

Many fear that advanced AI will pose an existential threat, pursuing its own dangerous goals once it's powerful enough. But Belrose challenges this popular doomsday scenario with a fascinating breakdown of why it doesn't add up.

Belrose also provides a detailed critique of current AI alignment approaches, particularly examining "counting arguments" and their limitations when applied to AI safety. She argues that the Principle of Indifference may be insufficient for addressing existential risks from advanced AI systems. The discussion explores how emergent properties in complex AI systems could lead to unpredictable and potentially dangerous behaviors that simple reductionist approaches fail to capture.

The conversation concludes by exploring broader philosophical territory, where Belrose discusses her growing interest in Buddhism's potential relevance to a post-automation future. She connects concepts of moral anti-realism with Buddhist ideas about emptiness and non-attachment, suggesting these frameworks might help humans find meaning in a world where AI handles most practical tasks. Rather than viewing this automated future with alarm, she proposes that Zen Buddhism's emphasis on spontaneity and presence might complement a society freed from traditional labor.

Sponsor messages:
* CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/
* Tufa AI Labs is a brand-new research lab in Zurich started by Benjamin Crouzier, focused on ARC and AGI. They just acquired MindsAI, the current winners of the ARC challenge. Are you interested in working on ARC, or getting involved in their events? Go to https://tufalabs.ai/

Nora Belrose:
* https://norabelrose.com/
* https://scholar.google.com/citations?user=p_oBc64AAAAJ&hl=en
* https://x.com/norabelrose

Shownotes: https://www.dropbox.com/scl/fi/38fhsv2zh8gnubtjaoq4a/NORA_FINAL.pdf?rlkey=0e5r8rd261821g1em4dgv0k70&st=t5c9ckfb&dl=0

TOC:
1. Neural Network Foundations
[00:00:00] 1.1 Philosophical Foundations and Neural Network Simplicity Bias
[00:02:20] 1.2 LEACE and Concept Erasure Fundamentals
[00:13:16] 1.3 LISA Technical Implementation and Applications
[00:18:50] 1.4 Practical Implementation Challenges and Data Requirements
[00:22:13] 1.5 Performance Impact and Limitations of Concept Erasure
2. Machine Learning Theory
[00:32:23] 2.1 Neural Network Learning Progression and Simplicity Bias
[00:37:10] 2.2 Optimal Transport Theory and Image Statistics Manipulation
[00:43:05] 2.3 Grokking Phenomena and Training Dynamics
[00:44:50] 2.4 Texture vs Shape Bias in Computer Vision Models
[00:45:15] 2.5 CNN Architecture and Shape Recognition Limitations
3. AI Systems and Value Learning
[00:47:10] 3.1 Meaning, Value, and Consciousness in AI Systems
[00:53:06] 3.2 Global Connectivity vs Local Culture Preservation
[00:58:18] 3.3 AI Capabilities and Future Development Trajectory
4. Consciousness Theory
[01:03:03] 4.1 4E Cognition and Extended Mind Theory
[01:09:40] 4.2 Thompson's Views on Consciousness and Simulation
[01:12:46] 4.3 Phenomenology and Consciousness Theory
[01:15:43] 4.4 Critique of Illusionism and Embodied Experience
[01:23:16] 4.5 AI Alignment and Counting Arguments Debate
(TRUNCATED, TOC embedded in MP3 file with more information)
