
The Valmy

Latest episodes

Mar 11, 2025 • 1h 14min

AI, data centers, and power economics, with Azeem Azhar

Podcast: Complex Systems with Patrick McKenzie (patio11)
Episode: AI, data centers, and power economics, with Azeem Azhar
Release date: 2025-02-27
Get Podcast Transcript → powered by Listen411 (fast audio-to-text and summarization)

Patrick McKenzie (patio11) is joined by Azeem Azhar, writer of the Exponential View newsletter, to discuss the massive data center buildout powering AI and its implications for our energy infrastructure. The conversation covers the physical limitations of modern data centers, the challenges of electricity generation, the societal ripples from historical large-scale infrastructure investments like railways and telecommunications, and the future of energy, including solar, nuclear, and geothermal power. Through their discussion, Patrick and Azeem explain why our mental models for both computing and energy systems need to be updated.

Full transcript available here: www.complexsystemspodcast.com/ai-llm-data-center-power-economics/

Sponsors: SafeBase | Check
Ready to save time and close deals faster? Inbound security reviews shouldn't slow down your team or your sales cycle. Leading companies use SafeBase to eliminate up to 98% of inbound security questionnaires, automate workflows, and accelerate pipeline. Go to safebase.io/podcast
Check is the leading payroll infrastructure provider and pioneer of embedded payroll. Check makes it easy for any SaaS platform to build a payroll business, and already powers 60+ popular platforms. Head to checkhq.com/complex and tell them patio11 sent you.

Recommended in this episode:
Azeem's newsletter: https://www.exponentialview.co/
Azeem Azhar's guest essay, "The 19th-Century Technology That Threatens A.I.": https://www.nytimes.com/2024/12/28/opinion/ai-electricity-power-plants.html
Electric Twin: https://www.electrictwin.com/
Video of Elon Musk's Colossus: https://www.youtube.com/watch?v=Tw696JVSxJQ
Complex Systems with Travis Dauwalter on the electrical grid: https://open.spotify.com/episode/5JY8e84sEXmHFlc8IR2kRb?si=35ymIC0UQ5SKdV8rrBcgIw
Complex Systems with Austin Vernon on fracking: https://open.spotify.com/episode/0YDV1XyjUCM2RtuTcBGYH9?si=YshjUXPEQBiScNxrNaI-Gw
Complex Systems with Casey Handmer on direct capture of CO2 to turn into hydrocarbons: https://open.spotify.com/episode/0GHegWgLSubYxvATmbWhQu?si=xNYBjn0ZTX2IT_pAZ5Ozsg

Twitter: @azeem | @patio11

Timestamps:
(00:00) Intro
(00:27) The power economics of data centers
(01:12) Historical infrastructure rollouts
(04:58) The telecoms bubble
(06:22) Unprecedented enterprise spend on AI capabilities
(11:12) Let's have your LLM talk to my LLM
(16:44) Is there a saturation point?
(19:25) Sponsors: SafeBase | Check
(21:55) What's in a data center?
(24:52) The challenges of data centers
(29:40) Geographical considerations for data centers
(36:53) Energy consumption and future needs
(40:48) Challenges in building transmission lines
(41:35) The solar power learning curve
(43:51) Small modular nuclear reactors
(51:26) Geothermal energy and fracking
(01:01:34) The future of AI and energy systems
(01:12:57) Wrap
Feb 14, 2025 • 2h 44min

#212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway

Allan Dafoe, Director of Frontier Safety and Governance at Google DeepMind, dives into the unstoppable force of technology. He discusses how military and economic competition can push societies to adopt new technologies, often leading to a race against less cautious entities. Dafoe highlights the historical context of Japan's Meiji Restoration, demonstrating the urgency of technological adaptation. The conversation shifts to AI governance, stressing the need for collaboration to ensure safe AI advancements and addressing the complexities of AI alignment in our rapidly changing world.
Feb 14, 2025 • 1h 33min

Claude Cooperates! Exploring Cultural Evolution in LLM Societies, with Aron Vallinder & Edward Hughes

Podcast: "The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
Episode: Claude Cooperates! Exploring Cultural Evolution in LLM Societies, with Aron Vallinder & Edward Hughes
Release date: 2025-02-12

In this episode, Edward Hughes, researcher at Google DeepMind, and Aron Vallinder, an independent researcher and PIBBSS fellow, discuss their pioneering research on cultural evolution and cooperation among large language model agents. The conversation delves into the study's design, exploring how different AI models exhibit cooperative behavior in simulated environments, the implications of these findings for future AI development, and the potential societal impacts of autonomous AI agents. They elaborate on their experimental setup involving different LLMs like Claude, Gemini 1.5, and GPT-4o in a cooperative donor-recipient game, shedding light on how the various models handle cooperation. Key points include the importance of understanding externalities, the role of punishment and communication, and future research directions involving mixed-model societies and human-AI interactions. The episode invites listeners to engage in this fast-growing field, stressing the need for more hands-on research and empirical evidence to navigate the rapidly evolving AI landscape.

Link to Aron & Edward's research paper: "Cultural Evolution of Cooperation among LLM Agents"

Sponsors:
Oracle Cloud Infrastructure (OCI): Oracle's next-generation cloud platform delivers blazing-fast AI and ML performance, with 50% less for compute and 80% less for outbound networking compared to other cloud providers. OCI powers industry leaders like Vodafone and Thomson Reuters with secure infrastructure and application development capabilities. New U.S. customers can get their cloud bill cut in half by switching to OCI before March 31, 2024 at https://oracle.com/cognitive
NetSuite: Over 41,000 businesses trust NetSuite by Oracle, the #1 cloud ERP, to future-proof their operations. With a unified platform for accounting, financial management, inventory, and HR, NetSuite provides real-time insights and forecasting to help you make quick, informed decisions. Whether you're earning millions or hundreds of millions, NetSuite empowers you to tackle challenges and seize opportunities. Download the free CFO's guide to AI and machine learning at https://netsuite.com/cognitive
Shopify: Shopify is revolutionizing online selling with its market-leading checkout system and robust API ecosystem. Its exclusive library of cutting-edge AI apps empowers e-commerce businesses to thrive in a competitive market. Cognitive Revolution listeners can try Shopify for just $1 per month at https://shopify.com/cognitive

Chapters:
(00:00) Teaser
(00:42) About the Episode
(03:26) Introduction
(03:40) The Rapid Evolution of AI
(04:58) Human Cooperation and Society
(07:03) Cultural Evolution and Stories
(08:39) Mechanisms of Cultural Evolution (Part 1)
(20:56) Sponsors: Oracle Cloud Infrastructure (OCI) | NetSuite
(23:35) Mechanisms of Cultural Evolution (Part 2)
(27:07) Experimental Setup: Donor Game (Part 1)
(37:35) Sponsors: Shopify
(38:55) Experimental Setup: Donor Game (Part 2)
(44:32) Exploring AI Societies: Claude, Gemini, and GPT-4
(45:50) Striking Graphical Differences
(48:08) Experiment Results and Implications
(50:54) Prompt Engineering and Cooperation
(57:40) Mixed Model Societies
(01:00:35) Future Research Directions
(01:03:10) Human-AI Interaction and Influence
(01:05:20) Complexifying AI Games
(01:18:14) Evaluations and Feedback Loops
(01:20:50) Open Source and AI Safety
(01:23:23) Reflections and Future Work
(01:30:04) Outro
Jan 18, 2025 • 2h 2min

AI in 2030, Scaling Bottlenecks, and Explosive Growth

Podcast: Epoch After Hours
Episode: AI in 2030, Scaling Bottlenecks, and Explosive Growth
Release date: 2025-01-16

In our first episode of Epoch After Hours, Ege, Tamay, and Jaime dig into what they expect AI to look like by 2030; why economists are underestimating the likelihood of explosive growth; the startling regularity in technological trends like Moore's Law; Moravec's paradox, and how we might overcome it; and much more!
Jan 16, 2025 • 1h 13min

Ajeya Cotra on AI safety and the future of humanity

Podcast: AI Summer
Episode: Ajeya Cotra on AI safety and the future of humanity
Release date: 2025-01-16

Ajeya Cotra works at Open Philanthropy, a leading funder of efforts to combat existential risks from AI. She has led the foundation's grantmaking on technical research to understand and reduce catastrophic risks from advanced AI. She is co-author of Planned Obsolescence, a newsletter about AI futurism and AI alignment.

Although a committed doomer herself, Cotra has worked hard to understand the perspectives of AI safety skeptics. In this episode, we asked her to guide us through the contentious debate over AI safety and, perhaps, explain why people with similar views on other issues frequently reach divergent views on this one. We spoke to Cotra on December 10.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aisummer.org
Nov 30, 2024 • 2h 30min

Nora Belrose - AI Development, Safety, and Meaning

Podcast: Machine Learning Street Talk (MLST)
Episode: Nora Belrose - AI Development, Safety, and Meaning
Release date: 2024-11-17

Nora Belrose, Head of Interpretability Research at EleutherAI, discusses critical challenges in AI safety and development. The conversation begins with her technical work on concept erasure in neural networks through LEACE (LEAst-squares Concept Erasure), while highlighting how neural networks' progression from simple to complex learning patterns could have important implications for AI safety.

Many fear that advanced AI will pose an existential threat, pursuing its own dangerous goals once it's powerful enough. But Belrose challenges this popular doomsday scenario with a fascinating breakdown of why it doesn't add up.

Belrose also provides a detailed critique of current AI alignment approaches, particularly examining "counting arguments" and their limitations when applied to AI safety. She argues that the Principle of Indifference may be insufficient for addressing existential risks from advanced AI systems. The discussion explores how emergent properties in complex AI systems could lead to unpredictable and potentially dangerous behaviors that simple reductionist approaches fail to capture.

The conversation concludes by exploring broader philosophical territory, where Belrose discusses her growing interest in Buddhism's potential relevance to a post-automation future. She connects concepts of moral anti-realism with Buddhist ideas about emptiness and non-attachment, suggesting these frameworks might help humans find meaning in a world where AI handles most practical tasks. Rather than viewing this automated future with alarm, she proposes that Zen Buddhism's emphasis on spontaneity and presence might complement a society freed from traditional labor.

Sponsor messages:
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focused on ARC and AGI; they just acquired MindsAI, the current winners of the ARC challenge. Are you interested in working on ARC, or getting involved in their events? Go to https://tufalabs.ai/

Nora Belrose:
https://norabelrose.com/
https://scholar.google.com/citations?user=p_oBc64AAAAJ&hl=en
https://x.com/norabelrose

Shownotes: https://www.dropbox.com/scl/fi/38fhsv2zh8gnubtjaoq4a/NORA_FINAL.pdf?rlkey=0e5r8rd261821g1em4dgv0k70&st=t5c9ckfb&dl=0

TOC:
1. Neural Network Foundations
[00:00:00] 1.1 Philosophical Foundations and Neural Network Simplicity Bias
[00:02:20] 1.2 LEACE and Concept Erasure Fundamentals
[00:13:16] 1.3 LISA Technical Implementation and Applications
[00:18:50] 1.4 Practical Implementation Challenges and Data Requirements
[00:22:13] 1.5 Performance Impact and Limitations of Concept Erasure
2. Machine Learning Theory
[00:32:23] 2.1 Neural Network Learning Progression and Simplicity Bias
[00:37:10] 2.2 Optimal Transport Theory and Image Statistics Manipulation
[00:43:05] 2.3 Grokking Phenomena and Training Dynamics
[00:44:50] 2.4 Texture vs Shape Bias in Computer Vision Models
[00:45:15] 2.5 CNN Architecture and Shape Recognition Limitations
3. AI Systems and Value Learning
[00:47:10] 3.1 Meaning, Value, and Consciousness in AI Systems
[00:53:06] 3.2 Global Connectivity vs Local Culture Preservation
[00:58:18] 3.3 AI Capabilities and Future Development Trajectory
4. Consciousness Theory
[01:03:03] 4.1 4E Cognition and Extended Mind Theory
[01:09:40] 4.2 Thompson's Views on Consciousness and Simulation
[01:12:46] 4.3 Phenomenology and Consciousness Theory
[01:15:43] 4.4 Critique of Illusionism and Embodied Experience
[01:23:16] 4.5 AI Alignment and Counting Arguments Debate
(TRUNCATED; TOC embedded in MP3 file with more information)
Sep 5, 2024 • 44min

The Road to Autonomous Intelligence with Andrej Karpathy

Podcast: No Priors: Artificial Intelligence | Technology | Startups
Episode: The Road to Autonomous Intelligence with Andrej Karpathy
Release date: 2024-09-05

Andrej Karpathy joins Sarah and Elad in this week of No Priors. Andrej, who was a founding team member of OpenAI and former Senior Director of AI at Tesla, needs no introduction. In this episode, Andrej discusses the evolution of self-driving cars, comparing Tesla and Waymo's approaches, and the technical challenges ahead. They also cover Tesla's Optimus humanoid robot, the bottlenecks of AI development today, and how AI capabilities could be further integrated with human cognition. Andrej shares more about his new company Eureka Labs and his insights into AI-driven education, peer networks, and what young people should study to prepare for the reality ahead.

Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @Karpathy

Show Notes:
(0:00) Introduction
(0:33) Evolution of self-driving cars
(2:23) The Tesla vs. Waymo approach to self-driving
(6:32) Training Optimus with automotive models
(10:26) Reasoning behind the humanoid form factor
(13:22) Existing challenges in robotics
(16:12) Bottlenecks of AI progress
(20:27) Parallels between human cognition and AI models
(22:12) Merging human cognition with AI capabilities
(27:10) Building high performance small models
(30:33) Andrej's current work in AI-enabled education
(36:17) How AI-driven education reshapes knowledge networks and status
(41:26) Eureka Labs
(42:25) What young people should study to prepare for the future
Aug 21, 2024 • 57min

Joscha Bach - AGI24 Keynote (Cyberanimism)

Podcast: Machine Learning Street Talk (MLST)
Episode: Joscha Bach - AGI24 Keynote (Cyberanimism)
Release date: 2024-08-21

Dr. Joscha Bach introduces a surprising idea called "cyber animism" in his AGI-24 talk: the notion that nature might be full of self-organizing software agents, similar to the spirits in ancient belief systems. Bach suggests that consciousness could be a kind of software running on our brains, and wonders if similar "programs" might exist in plants or even entire ecosystems.

MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval-augmented generation. Try it now and get 2,000 free queries monthly at https://brave.com/api.

Joscha takes us on a tour de force through history, philosophy, and cutting-edge computer science, teasing us to rethink what we know about minds, machines, and the world around us. Joscha believes we should blur the lines between human, artificial, and natural intelligence, and argues that consciousness might be more widespread and interconnected than we ever thought possible.

Dr. Joscha Bach: https://x.com/Plinz

This is video 2/9 from our coverage of AGI-24 in Seattle: https://agi-conf.org/2024/
Watch the official MLST interview with Joscha, which we did right after this talk, now on early access on our Patreon: https://www.patreon.com/posts/joscha-bach-110199676 (you also get access to our private Discord and biweekly calls)

TOC:
00:00:00 Introduction: AGI and Cyberanimism
00:03:57 The Nature of Consciousness
00:08:46 Aristotle's Concepts of Mind and Consciousness
00:13:23 The Hard Problem of Consciousness
00:16:17 Functional Definition of Consciousness
00:20:24 Comparing LLMs and Human Consciousness
00:26:52 Testing for Consciousness in AI Systems
00:30:00 Animism and Software Agents in Nature
00:37:02 Plant Consciousness and Ecosystem Intelligence
00:40:36 The California Institute for Machine Consciousness
00:44:52 Ethics of Conscious AI and Suffering
00:46:29 Philosophical Perspectives on Consciousness
00:49:55 Q&A: Formalisms for Conscious Systems
00:53:27 Coherence, Self-Organization, and Compute Resources

YT version (very high quality, filmed by us live): https://youtu.be/34VOI_oo-qM

Refs:
Aristotle's work on the soul and consciousness
Richard Dawkins' work on genes and evolution
Gerald Edelman's concept of Neural Darwinism
Thomas Metzinger's book "Being No One"
Yoshua Bengio's concept of the "consciousness prior"
Stuart Hameroff's theories on microtubules and consciousness
Christof Koch's work on consciousness
Daniel Dennett's "Cartesian Theater" concept
Giulio Tononi's Integrated Information Theory
Mike Levin's work on organismal intelligence
The concept of animism in various cultures
Freud's model of the mind
Buddhist perspectives on consciousness and meditation
The Genesis creation narrative (for its metaphorical interpretation)
California Institute for Machine Consciousness
Jul 27, 2024 • 38min

Nick Bostrom - AGI That Saves Room for Us (Worthy Successor Series, Episode 1)

Podcast: The Trajectory
Episode: Nick Bostrom - AGI That Saves Room for Us (Worthy Successor Series, Episode 1)
Release date: 2024-07-26

This is an interview with Nick Bostrom, the Founding Director of the Future of Humanity Institute at Oxford. It is the first installment of The Worthy Successor series, where we unpack the preferable and non-preferable futures humanity might strive towards in the years ahead.

This episode referred to the following other essays and resources:
-- The Intelligence Trajectory Political Matrix: danfaggella.com/itpm
-- Natural Selection Favors AIs over Humans: https://arxiv.org/abs/2303.16200
-- The SDGs of Strong AGI: https://emerj.com/ai-power/sdgs-of-ai/

Watch this episode on The Trajectory YouTube channel: https://youtu.be/_ZCE4XZ9doc?si=RXptg0y6JcxelXkF
Read Nick Bostrom's episode highlight: danfaggella.com/bostrom1/

There are three main questions we cover here on The Trajectory:
1. Who are the power players in AGI, and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?

If this sounds like it's up your alley, then be sure to stick around and connect:
Blog: danfaggella.com/trajectory
X: x.com/danfaggella
LinkedIn: linkedin.com/in/danfaggella
Newsletter: bit.ly/TrajectoryTw
Jul 26, 2024 • 2h 2min

Patrick McKenzie - How a Discord Server Saved Thousands of Lives

Podcast: Dwarkesh Podcast
Episode: Patrick McKenzie - How a Discord Server Saved Thousands of Lives
Release date: 2024-07-24

I talked with Patrick McKenzie (known online as patio11) about how a small team he ran over a Discord server got vaccines into Americans' arms: a story of broken incentives, outrageous incompetence, and how a few individuals with high agency saved thousands of lives. Enjoy!

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Sponsor:
This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies from Anthropic to Amazon use Stripe to accept payments, automate financial processes, and grow their revenue.

Timestamps:
(00:00:00) – Why hackers on Discord had to save thousands of lives
(00:17:26) – How politics crippled vaccine distribution
(00:38:19) – Fundraising for VaccinateCA
(00:51:09) – Why tech needs to understand how government works
(00:58:58) – What is crypto good for?
(01:13:07) – How the US government leverages big tech to violate rights
(01:24:36) – Can the US have nice things like Japan?
(01:26:41) – Financial plumbing & money laundering: a how-not-to guide
(01:37:42) – Maximizing your value: why some people negotiate better
(01:42:14) – Are young people too busy playing Factorio to found startups?
(01:57:30) – The need for a post-mortem

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

