
The Valmy

Latest episodes

Nov 30, 2024 • 2h 30min

Nora Belrose - AI Development, Safety, and Meaning

Podcast: Machine Learning Street Talk (MLST)
Episode: Nora Belrose - AI Development, Safety, and Meaning
Release date: 2024-11-17

Nora Belrose, Head of Interpretability Research at EleutherAI, discusses critical challenges in AI safety and development. The conversation begins with her technical work on concept erasure in neural networks through LEACE (LEAst-squares Concept Erasure), and highlights how neural networks' progression from simple to complex learning patterns could have important implications for AI safety. Many fear that advanced AI will pose an existential threat by pursuing its own dangerous goals once it's powerful enough, but Belrose challenges this popular doomsday scenario with a fascinating breakdown of why it doesn't add up.

Belrose also provides a detailed critique of current AI alignment approaches, particularly examining "counting arguments" and their limitations when applied to AI safety. She argues that the Principle of Indifference may be insufficient for addressing existential risks from advanced AI systems. The discussion explores how emergent properties in complex AI systems could lead to unpredictable and potentially dangerous behaviors that simple reductionist approaches fail to capture.

The conversation concludes in broader philosophical territory, where Belrose discusses her growing interest in Buddhism's potential relevance to a post-automation future. She connects moral anti-realism with Buddhist ideas about emptiness and non-attachment, suggesting these frameworks might help humans find meaning in a world where AI handles most practical tasks. Rather than viewing this automated future with alarm, she proposes that Zen Buddhism's emphasis on spontaneity and presence might complement a society freed from traditional labor.

SPONSOR MESSAGES:
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focused on ARC and AGI. They just acquired MindsAI, the current winners of the ARC challenge. Interested in working on ARC, or getting involved in their events? Go to https://tufalabs.ai/

Nora Belrose:
https://norabelrose.com/
https://scholar.google.com/citations?user=p_oBc64AAAAJ&hl=en
https://x.com/norabelrose

SHOWNOTES:
https://www.dropbox.com/scl/fi/38fhsv2zh8gnubtjaoq4a/NORA_FINAL.pdf?rlkey=0e5r8rd261821g1em4dgv0k70&st=t5c9ckfb&dl=0

TOC:
1. Neural Network Foundations
[00:00:00] 1.1 Philosophical Foundations and Neural Network Simplicity Bias
[00:02:20] 1.2 LEACE and Concept Erasure Fundamentals
[00:13:16] 1.3 LISA Technical Implementation and Applications
[00:18:50] 1.4 Practical Implementation Challenges and Data Requirements
[00:22:13] 1.5 Performance Impact and Limitations of Concept Erasure
2. Machine Learning Theory
[00:32:23] 2.1 Neural Network Learning Progression and Simplicity Bias
[00:37:10] 2.2 Optimal Transport Theory and Image Statistics Manipulation
[00:43:05] 2.3 Grokking Phenomena and Training Dynamics
[00:44:50] 2.4 Texture vs Shape Bias in Computer Vision Models
[00:45:15] 2.5 CNN Architecture and Shape Recognition Limitations
3. AI Systems and Value Learning
[00:47:10] 3.1 Meaning, Value, and Consciousness in AI Systems
[00:53:06] 3.2 Global Connectivity vs Local Culture Preservation
[00:58:18] 3.3 AI Capabilities and Future Development Trajectory
4. Consciousness Theory
[01:03:03] 4.1 4E Cognition and Extended Mind Theory
[01:09:40] 4.2 Thompson's Views on Consciousness and Simulation
[01:12:46] 4.3 Phenomenology and Consciousness Theory
[01:15:43] 4.4 Critique of Illusionism and Embodied Experience
[01:23:16] 4.5 AI Alignment and Counting Arguments Debate
(TRUNCATED, TOC embedded in MP3 file with more information)
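For readers who want a concrete feel for the concept-erasure work covered in section 1.2, here is a minimal numpy sketch of linear concept erasure: it projects activations off the subspace in which they covary with a concept label, the zero-cross-covariance condition that the LEACE paper shows is what it takes to defeat linear probes. This is a simplified illustration, not the whitened, least-squares-optimal eraser from Belrose et al.; the function name and the toy data are hypothetical.

```python
import numpy as np

def erase_concept(X: np.ndarray, z: np.ndarray) -> np.ndarray:
    """Project activations off the subspace where they covary with the
    concept labels, so the erased features have zero cross-covariance
    with z. Simplified sketch, not the whitened least-squares-optimal
    eraser from the LEACE paper (Belrose et al., 2023)."""
    Xc = X - X.mean(axis=0)                 # center the features
    Zc = np.atleast_2d(z).T if z.ndim == 1 else z
    Zc = Zc - Zc.mean(axis=0)               # center the labels
    sigma_xz = Xc.T @ Zc                    # (d, k) unnormalized cross-covariance
    Q, _ = np.linalg.qr(sigma_xz)           # orthonormal basis of the concept subspace
    return X - (Xc @ Q) @ Q.T               # remove the concept-aligned component

# Toy usage: 1000 samples of 64-dim "activations" with a binary concept label.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))
z = rng.integers(0, 2, size=1000).astype(float)
X_erased = erase_concept(X, z)
Xe_c = X_erased - X_erased.mean(axis=0)
assert np.allclose(Xe_c.T @ (z - z.mean()), 0.0, atol=1e-8)  # no linear signal left
```

LEACE proper additionally whitens the features before projecting, which makes the edit the smallest possible in least-squares terms while giving the same guarantee.
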
Sep 5, 2024 • 44min

The Road to Autonomous Intelligence with Andrej Karpathy

Podcast: No Priors: Artificial Intelligence | Technology | Startups
Episode: The Road to Autonomous Intelligence with Andrej Karpathy
Release date: 2024-09-05

Andrej Karpathy joins Sarah and Elad in this week's No Priors. Andrej, who was a founding team member of OpenAI and former Senior Director of AI at Tesla, needs no introduction. In this episode, Andrej discusses the evolution of self-driving cars, comparing Tesla’s and Waymo’s approaches, and the technical challenges ahead. They also cover Tesla’s Optimus humanoid robot, the bottlenecks of AI development today, and how AI capabilities could be further integrated with human cognition. Andrej shares more about his new company Eureka Labs and his insights into AI-driven education, peer networks, and what young people should study to prepare for the reality ahead.

Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @Karpathy

Show Notes:
(0:00) Introduction
(0:33) Evolution of self-driving cars
(2:23) The Tesla vs. Waymo approach to self-driving
(6:32) Training Optimus with automotive models
(10:26) Reasoning behind the humanoid form factor
(13:22) Existing challenges in robotics
(16:12) Bottlenecks of AI progress
(20:27) Parallels between human cognition and AI models
(22:12) Merging human cognition with AI capabilities
(27:10) Building high performance small models
(30:33) Andrej’s current work in AI-enabled education
(36:17) How AI-driven education reshapes knowledge networks and status
(41:26) Eureka Labs
(42:25) What young people should study to prepare for the future
Aug 21, 2024 • 57min

Joscha Bach - AGI24 Keynote (Cyberanimism)

Podcast: Machine Learning Street Talk (MLST)
Episode: Joscha Bach - AGI24 Keynote (Cyberanimism)
Release date: 2024-08-21

Dr. Joscha Bach introduces a surprising idea called "cyber animism" in his AGI-24 talk: the notion that nature might be full of self-organizing software agents, similar to the spirits in ancient belief systems. Bach suggests that consciousness could be a kind of software running on our brains, and wonders if similar "programs" might exist in plants or even entire ecosystems.

MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at https://brave.com/api.

Joscha takes us on a tour de force through history, philosophy, and cutting-edge computer science, teasing us to rethink what we know about minds, machines, and the world around us. Joscha believes we should blur the lines between human, artificial, and natural intelligence, and argues that consciousness might be more widespread and interconnected than we ever thought possible.

Dr. Joscha Bach: https://x.com/Plinz

This is video 2/9 from our coverage of AGI-24 in Seattle: https://agi-conf.org/2024/

Watch the official MLST interview with Joscha, which we did right after this talk, now on early access on our Patreon: https://www.patreon.com/posts/joscha-bach-110199676 (you also get access to our private discord and biweekly calls)

TOC:
00:00:00 Introduction: AGI and Cyberanimism
00:03:57 The Nature of Consciousness
00:08:46 Aristotle's Concepts of Mind and Consciousness
00:13:23 The Hard Problem of Consciousness
00:16:17 Functional Definition of Consciousness
00:20:24 Comparing LLMs and Human Consciousness
00:26:52 Testing for Consciousness in AI Systems
00:30:00 Animism and Software Agents in Nature
00:37:02 Plant Consciousness and Ecosystem Intelligence
00:40:36 The California Institute for Machine Consciousness
00:44:52 Ethics of Conscious AI and Suffering
00:46:29 Philosophical Perspectives on Consciousness
00:49:55 Q&A: Formalisms for Conscious Systems
00:53:27 Coherence, Self-Organization, and Compute Resources

YT version (very high quality, filmed by us live): https://youtu.be/34VOI_oo-qM

Refs:
Aristotle's work on the soul and consciousness
Richard Dawkins' work on genes and evolution
Gerald Edelman's concept of Neural Darwinism
Thomas Metzinger's book "Being No One"
Yoshua Bengio's concept of the "consciousness prior"
Stuart Hameroff's theories on microtubules and consciousness
Christof Koch's work on consciousness
Daniel Dennett's "Cartesian Theater" concept
Giulio Tononi's Integrated Information Theory
Mike Levin's work on organismal intelligence
The concept of animism in various cultures
Freud's model of the mind
Buddhist perspectives on consciousness and meditation
The Genesis creation narrative (for its metaphorical interpretation)
California Institute for Machine Consciousness
Jul 27, 2024 • 38min

Nick Bostrom - AGI That Saves Room for Us (Worthy Successor Series, Episode 1)

Podcast: The Trajectory
Episode: Nick Bostrom - AGI That Saves Room for Us (Worthy Successor Series, Episode 1)
Release date: 2024-07-26

This is an interview with Nick Bostrom, the Founding Director of the Future of Humanity Institute at Oxford. It is the first installment of The Worthy Successor series, where we unpack the preferable and non-preferable futures humanity might strive towards in the years ahead.

This episode referred to the following other essays and resources:
-- The Intelligence Trajectory Political Matrix: danfaggella.com/itpm
-- Natural Selection Favors AIs over Humans: https://arxiv.org/abs/2303.16200
-- The SDGs of Strong AGI: https://emerj.com/ai-power/sdgs-of-ai/

Watch this episode on The Trajectory YouTube channel: https://youtu.be/_ZCE4XZ9doc?si=RXptg0y6JcxelXkF
Read Nick Bostrom's episode highlight: danfaggella.com/bostrom1/

There are three main questions we cover here on The Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?

If this sounds like it's up your alley, then be sure to stick around and connect:
Blog: danfaggella.com/trajectory
X: x.com/danfaggella
LinkedIn: linkedin.com/in/danfaggella
Newsletter: bit.ly/TrajectoryTw
Jul 26, 2024 • 2h 2min

Patrick McKenzie - How a Discord Server Saved Thousands of Lives

Podcast: Dwarkesh Podcast
Episode: Patrick McKenzie - How a Discord Server Saved Thousands of Lives
Release date: 2024-07-24

I talked with Patrick McKenzie (known online as patio11) about how a small team he ran over a Discord server got vaccines into Americans' arms: a story of broken incentives, outrageous incompetence, and how a few individuals with high agency saved thousands of lives. Enjoy!

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Sponsor
This episode is brought to you by Stripe, financial infrastructure for the internet. Millions of companies, from Anthropic to Amazon, use Stripe to accept payments, automate financial processes, and grow their revenue.

Timestamps
(00:00:00) – Why hackers on Discord had to save thousands of lives
(00:17:26) – How politics crippled vaccine distribution
(00:38:19) – Fundraising for VaccinateCA
(00:51:09) – Why tech needs to understand how government works
(00:58:58) – What is crypto good for?
(01:13:07) – How the US government leverages big tech to violate rights
(01:24:36) – Can the US have nice things like Japan?
(01:26:41) – Financial plumbing & money laundering: a how-not-to guide
(01:37:42) – Maximizing your value: why some people negotiate better
(01:42:14) – Are young people too busy playing Factorio to found startups?
(01:57:30) – The need for a post-mortem

Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe
May 31, 2024 • 2h 49min

#189 – Rachel Glennerster on how “market shaping” could help solve climate change, pandemics, and other global problems

Podcast: 80,000 Hours Podcast
Episode: #189 – Rachel Glennerster on how “market shaping” could help solve climate change, pandemics, and other global problems
Release date: 2024-05-29

"You can’t charge what something is worth during a pandemic. So we estimated that the value of one course of COVID vaccine in January 2021 was over $5,000. They were selling for between $6 and $40. So nothing like their social value. Now, don’t get me wrong. I don’t think that they should have charged $5,000 or $6,000. That’s not ethical. It’s also not economically efficient, because they didn’t cost $5,000 at the marginal cost. So you actually want low price, getting out to lots of people.

"But it shows you that the market is not going to reward people who do the investment in preparation for a pandemic — because when a pandemic hits, they’re not going to get the reward in line with the social value. They may even have to charge less than they would in a non-pandemic time. So prepping for a pandemic is not an efficient market strategy if I’m a firm, but it’s a very efficient strategy for society, and so we’ve got to bridge that gap." —Rachel Glennerster

In today’s episode, host Luisa Rodriguez speaks to Rachel Glennerster — associate professor of economics at the University of Chicago and a pioneer in the field of development economics — about how her team’s new Market Shaping Accelerator aims to leverage market forces to drive innovations that can solve pressing world problems.

Links to learn more, highlights, and full transcript.

They cover:
- How market failures and misaligned incentives stifle critical innovations for social goods like pandemic preparedness, climate change interventions, and vaccine development.
- How “pull mechanisms” like advance market commitments (AMCs) can help overcome these challenges — including concrete examples like how one AMC sped up the development of three vaccines that saved around 700,000 lives in low-income countries.
- The challenges in designing effective pull mechanisms, from design to implementation.
- Why it’s important to tie innovation incentives to real-world impact and uptake, not just the invention of a new technology.
- The massive benefits of accelerating vaccine development, in some cases even if it’s only by a few days or weeks.
- The case for a $6 billion advance market commitment to spur work on a universal COVID-19 vaccine.
- The shortlist of ideas from the Market Shaping Accelerator’s recent Innovation Challenge that use pull mechanisms to address market failures around improving indoor air quality, repurposing generic drugs for alternative uses, and developing eco-friendly air conditioners for a warming planet.
- “Best Buys” and “Bad Buys” for improving education systems in low- and middle-income countries, based on evidence from over 400 studies.
- Lessons from Rachel’s career at the forefront of global development, and how insights from economics can drive transformative change.
- And much more.

Chapters:
The Market Shaping Accelerator (00:03:33)
Pull mechanisms for innovation (00:13:10)
Accelerating the pneumococcal and COVID vaccines (00:19:05)
Advance market commitments (00:41:46)
Is this uncertainty hard for funders to plan around? (00:49:17)
The story of the malaria vaccine that wasn’t (00:57:15)
Challenges with designing and implementing AMCs and other pull mechanisms (01:01:40)
Universal COVID vaccine (01:18:14)
Climate-resilient crops (01:34:09)
The Market Shaping Accelerator’s Innovation Challenge (01:45:40)
Indoor air quality to reduce respiratory infections (01:49:09)
Repurposing generic drugs (01:55:50)
Clean air conditioning units (02:02:41)
Broad-spectrum antivirals for pandemic prevention (02:09:11)
Improving education in low- and middle-income countries (02:15:53)
What’s still weird for Rachel about living in the US? (02:45:06)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore
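As a toy illustration of the pull-mechanism logic Glennerster describes, the Python sketch below compares a firm's expected R&D payoff with and without an AMC-style price commitment. Every number in it is hypothetical except the $6-$40 pandemic price range quoted in the episode, so treat it as a sketch of the incentive flip rather than a model of any real AMC.

```python
# Toy model of an advance market commitment (AMC), illustrating the
# episode's point: pandemic-time prices sit far below social value, so
# preparedness R&D doesn't pay unless a buyer commits in advance.
# All parameters are hypothetical.

def expected_revenue(p_success: float, price: float, doses: float) -> float:
    """Expected revenue of a risky R&D program selling `doses` at `price`."""
    return p_success * price * doses

rnd_cost     = 1.5e9   # hypothetical up-front R&D cost ($)
p_success    = 0.10    # hypothetical chance the program succeeds
doses        = 200e6   # hypothetical doses sold on success
market_price = 20.0    # within the $6-$40 pandemic range quoted above
amc_topup    = 80.0    # hypothetical committed subsidy per dose

without_amc = expected_revenue(p_success, market_price, doses)
with_amc = expected_revenue(p_success, market_price + amc_topup, doses)

print(f"Without AMC: E[revenue] ${without_amc/1e9:.1f}B vs cost "
      f"${rnd_cost/1e9:.1f}B -> invest: {without_amc > rnd_cost}")
print(f"With AMC:    E[revenue] ${with_amc/1e9:.1f}B vs cost "
      f"${rnd_cost/1e9:.1f}B -> invest: {with_amc > rnd_cost}")
```

The point is the one in the quote at the top of these notes: because the market price never approaches social value during a pandemic, the commitment, not the market, is what makes preparedness investment rational for the firm.
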
May 18, 2024 • 1h 44min

THE POLITICAL RIGHT & EQUALITY With Matt McManus

Podcast: Political Philosophy Podcast
Episode: THE POLITICAL RIGHT & EQUALITY With Matt McManus
Release date: 2024-04-28

What defines the modern American right? Matt McManus argues that we should understand the movement as fundamentally about hierarchy. We then get into a general conversation about the Biden administration and the direction of the US Left.
May 13, 2024 • 2h 19min

David Thorstad: Bounded Rationality and the Case Against Longtermism

Podcast: The Gradient: Perspectives on AI
Episode: David Thorstad: Bounded Rationality and the Case Against Longtermism
Release date: 2024-05-02

Episode 122

I spoke with Professor David Thorstad about:
* The practical difficulties of doing interdisciplinary work
* Why theories of human rationality should account for boundedness, heuristics, and other cognitive limitations
* Why EA epistemics suck (ok, it’s a little more nuanced than that)

Professor Thorstad is an Assistant Professor of Philosophy at Vanderbilt University, a Senior Research Affiliate at the Global Priorities Institute at Oxford, and a Research Affiliate at the MINT Lab at Australian National University. One strand of his research asks how cognitively limited agents should decide what to do and believe. A second strand asks how altruists should use limited funds to do good effectively.

Reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (01:15) David’s interest in rationality
* (02:45) David’s crisis of confidence, models abstracted from psychology
* (05:00) Blending formal models with studies of the mind
* (06:25) Interaction between academic communities
* (08:24) Recognition of and incentives for interdisciplinary work
* (09:40) Movement towards interdisciplinary work
* (12:10) The Standard Picture of rationality
* (14:11) Why the Standard Picture was attractive
* (16:30) Violations of and rebellion against the Standard Picture
* (19:32) Mistakes made by critics of the Standard Picture
* (22:35) Other competing programs vs Standard Picture
* (26:27) Characterizing Bounded Rationality
* (27:00) A worry: faculties criticizing themselves
* (29:28) Self-improving critique and longtermism
* (30:25) Central claims in bounded rationality and controversies
* (32:33) Heuristics and formal theorizing
* (35:02) Violations of Standard Picture, vindicatory epistemology
* (37:03) The Reason Responsive Consequentialist View (RRCV)
* (38:30) Objective and subjective pictures
* (41:35) Reason responsiveness
* (43:37) There are no epistemic norms for inquiry
* (44:00) Norms vs reasons
* (45:15) Arguments against epistemic nihilism for belief
* (47:30) Norms and self-delusion
* (49:55) Difficulty of holding beliefs for pragmatic reasons
* (50:50) The Gibbardian picture, inquiry as an action
* (52:15) Thinking how to act and thinking how to live — the power of inquiry
* (53:55) Overthinking and conducting inquiry
* (56:30) Is thinking how to inquire an all-things-considered matter?
* (58:00) Arguments for the RRCV
* (1:00:40) Deciding on minimal criteria for the view, stereotyping
* (1:02:15) Eliminating stereotypes from the theory
* (1:04:20) Theory construction in epistemology and moral intuition
* (1:08:20) Refusing theories for moral reasons and disciplinary boundaries
* (1:10:30) The argument from minimal criteria, evaluating against competing views
* (1:13:45) Comparing to other theories
* (1:15:00) The explanatory argument
* (1:17:53) Parfit and Railton, norms of friendship vs utility
* (1:20:00) Should you call out your friend for being a womanizer?
* (1:22:00) Vindicatory Epistemology
* (1:23:05) Panglossianism and meliorative epistemology
* (1:24:42) Heuristics and recognition-driven investigation
* (1:26:33) Rational inquiry leading to irrational beliefs — metacognitive processing
* (1:29:08) Stakes of inquiry and costs of metacognitive processing
* (1:30:00) When agents are incoherent, focuses on inquiry
* (1:32:05) Indirect normative assessment and its consequences
* (1:37:47) Against the Singularity Hypothesis
* (1:39:00) Superintelligence and the ontological argument
* (1:41:50) Hardware growth and general intelligence growth, AGI definitions
* (1:43:55) Difficulties in arguing for hyperbolic growth
* (1:46:07) Chalmers and the proportionality argument
* (1:47:53) Arguments for/against diminishing growth, research productivity, Moore’s Law
* (1:50:08) On progress studies
* (1:52:40) Improving research productivity and technology growth
* (1:54:00) Mistakes in the moral mathematics of existential risk, longtermist epistemics
* (1:55:30) Cumulative and per-unit risk
* (1:57:37) Back and forth with longtermists, time of perils
* (1:59:05) Background risk — risks we can and can’t intervene on, total existential risk
* (2:00:56) The case for longtermism is inflated
* (2:01:40) Epistemic humility and longtermism
* (2:03:15) Knowledge production — reliable sources, blog posts vs peer review
* (2:04:50) Compounding potential errors in knowledge
* (2:06:38) Group deliberation dynamics, academic consensus
* (2:08:30) The scope of longtermism
* (2:08:30) Money in effective altruism and processes of inquiry
* (2:10:15) Swamping longtermist options
* (2:12:00) Washing out arguments and justified belief
* (2:13:50) The difficulty of long-term forecasting and interventions
* (2:15:50) Theory of change in the bounded rationality program
* (2:18:45) Outro

Links:
* David’s homepage and Twitter and blog
* Papers mentioned/read:
* Bounded rationality and inquiry
* Why bounded rationality (in epistemology)?
* Against the newer evidentialists
* The accuracy-coherence tradeoff in cognition
* There are no epistemic norms of inquiry
* Permissive metaepistemology
* Global priorities and effective altruism
* What David likes about EA
* Against the singularity hypothesis (+ blog posts)
* Three mistakes in the moral mathematics of existential risk (+ blog posts)
* The scope of longtermism
* Epistemics

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Apr 17, 2024 • 1h 15min

Peter Thiel on Political Theology

Podcast: Conversations with Tyler
Episode: Peter Thiel on Political Theology
Release date: 2024-04-17

In this conversation recorded live in Miami, Tyler and Peter Thiel dive deep into the complexities of political theology, including why it’s a concept we still need today, why Peter’s against Calvinism (and rationalism), whether the Old Testament should lead us to be woke, why Carl Schmitt is enjoying a resurgence, whether we’re entering a new age of millenarian thought, the one existential risk Peter thinks we’re overlooking, why everyone just muddling through leads to disaster, the role of the katechon, the political vision in Shakespeare, how AI will affect the influence of wordcels, Straussian messages in the Bible, what worries Peter about Miami, and more.

Read a full transcript enhanced with helpful links, or watch the full video. Recorded February 21st, 2024.

Other ways to connect:
Follow us on X and Instagram
Follow Tyler on X
Follow Peter on X
Sign up for our newsletter
Join our Discord
Email us: cowenconvos@mercatus.gmu.edu
Learn more about Conversations with Tyler and other Mercatus Center podcasts here.
Apr 2, 2024 • 1h 25min

#361 — Sam Bankman-Fried & Effective Altruism

Podcast: Making Sense with Sam Harris
Episode: #361 — Sam Bankman-Fried & Effective Altruism
Release date: 2024-04-01

Sam Harris speaks with William MacAskill about the implosion of FTX and the effect that it has had on the Effective Altruism movement. They discuss the logic of “earning to give,” the mind of SBF, his philanthropy, the character of the EA community, potential problems with focusing on long-term outcomes, AI risk, the effects of the FTX collapse on Will personally, and other topics.

If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.

Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.
