
Nick Bostrom

Philosopher and author known for his work on existential risk and the Vulnerable World Hypothesis.

Top 10 podcasts with Nick Bostrom

Ranked by the Snipd community
77 snips
Sep 30, 2024 • 39min

#385 — AI Utopia

Nick Bostrom, a prominent philosopher renowned for his insights into future technologies, joins the conversation on the profound implications of artificial intelligence. They delve into the existential risks posed by superintelligent AI and the challenges of aligning its values with human goals. Bostrom explores the philosophical dilemmas surrounding a technologically driven utopia, job automation, and the redefinition of meaning and purpose in human life. The discussion raises critical ethical questions about the future of humanity as we navigate a rapidly evolving landscape.
77 snips
Nov 22, 2022 • 1h 8min

Making Sense of Artificial Intelligence | Episode 1 of The Essential Sam Harris

Filmmaker Jay Shapiro has produced a new series of audio documentaries exploring the major topics that Sam has focused on over the course of his career. Each episode weaves together original analysis, critical perspective, and novel thought experiments with some of the most compelling exchanges from the Making Sense archive. Whether you are new to a particular topic, or think you have your mind made up about it, we think you’ll find this series fascinating.

In this episode, we explore the landscape of Artificial Intelligence. We’ll listen in on Sam’s conversation with decision theorist and artificial-intelligence researcher Eliezer Yudkowsky as we consider the potential dangers of AI – including the control problem and the value-alignment problem – as well as the concepts of Artificial General Intelligence, Narrow Artificial Intelligence, and Artificial Super Intelligence. We’ll then be introduced to philosopher Nick Bostrom’s “Genies, Sovereigns, Oracles, and Tools,” as physicist Max Tegmark outlines just how careful we need to be as we travel down the AI path. Computer scientist Stuart Russell will then dig deeper into the value-alignment problem and explain its importance.

We’ll hear from former Google CEO Eric Schmidt about the geopolitical realities of AI terrorism and weaponization. We’ll then touch on the topic of consciousness as Sam and psychologist Paul Bloom turn the conversation to the ethical and psychological complexities of living alongside humanlike AI. Psychologist Alison Gopnik then reframes the general concept of intelligence to help us wonder whether the kinds of systems we’re building with “deep learning” are really marching us toward our superintelligent overlords.

Finally, physicist David Deutsch will argue that many value-alignment fears about AI are based on a fundamental misunderstanding of how knowledge actually grows in this universe.
60 snips
Aug 6, 2024 • 1h 35min

Life Will Get Weird The Next 3 Years | Nick Bostrom

Nick Bostrom, a leading philosopher and AI expert, dives into the profound societal implications of advanced artificial intelligence. He raises critical questions about moral responsibilities towards AI and the potential for centralization of power. The conversation explores how automation may reshape human purpose, social status, and relationships. Bostrom discusses his book, 'Deep Utopia,' envisioning a future where technology complicates our understanding of fulfillment. This thought-provoking discussion navigates the ethical dilemmas of a rapidly evolving tech landscape.
43 snips
Jun 29, 2024 • 1h 33min

#803 - Nick Bostrom - Are We Headed For AI Utopia Or Disaster?

Nick Bostrom, a philosopher, discusses what it would mean to live in a perfectly solved world, AI safety, and the future of religion. The conversation weighs optimism against pessimism in AI development, envisions a post-work society, and examines the dynamics of a utopian society. It also considers whether life could stay interesting under extreme longevity, the impact of accelerating economic growth, and the precarious future of humanity alongside AI technology.
39 snips
May 29, 2024 • 49min

John Searle - Consciousness as a Problem in Philosophy and Neurobiology [Reupload]

John Searle, a leading philosopher of mind famous for his critique of machine intelligence, engages with Nick Bostrom, an AI safety expert. They dissect the nature of consciousness, rejecting fears of machines gaining self-awareness. Searle argues that machines lack the necessary semantics to possess true motivation or understanding. The conversation explores the distinctions between subjective and objective experiences, blindsight phenomena, and the complexities of visual perception. Their insights challenge contemporary misconceptions about AI and consciousness.
26 snips
Sep 1, 2024 • 1h 2min

#81 Nick Bostrom - How To Prevent AI Catastrophe

In this engaging conversation, philosopher Nick Bostrom, known for his insightful work on existential risks, dives deep into the future of AI. He discusses the balance between harnessing AI's potential and mitigating its dangers. The conversation explores how advanced AI might create a 'solved world' and the existential questions that arise from automation. Bostrom emphasizes ethical responsibilities towards AI consciousness and urges thoughtful design to ensure human values align with AI behavior, paving the way for responsible coexistence.
23 snips
Sep 30, 2024 • 1h 28min

#385 - AI Utopia

Nick Bostrom, a renowned philosopher and director of Oxford's Future of Humanity Institute, dives into the complex landscape of artificial intelligence. He and Sam Harris tackle the misalignment risks of superintelligent AI and the ethical challenges that arise in a tech-driven future. They explore the idea of a 'solved world,' where automation alters labor and leisure, causing us to rethink human fulfillment. Bostrom also raises concerns about digital isolation and the philosophical implications of pleasure in a world shaped by rapid advancements.
20 snips
Apr 12, 2023 • 50min

Making Sense of Existential Threat and Nuclear War | Episode 7 of The Essential Sam Harris

In this episode, we examine the topic of existential threat, focusing in particular on the subject of nuclear war. Sam opens the discussion by emphasizing the gravity of our ability to destroy life as we know it at any moment, and how shocking it is that nearly all of us perpetually ignore this fact. Philosopher Nick Bostrom expands on this idea by explaining how developing technologies like DNA synthesis could make humanity more vulnerable to malicious actors. Sam and historian Fred Kaplan then guide us through a hypothetical timeline of events following a nuclear first strike, highlighting the flaws in the concept of nuclear deterrence. Former Defense Secretary William J. Perry echoes these concerns, painting a grim picture of his "nuclear nightmare" scenario: a nuclear terrorist attack. Zooming out, Toby Ord outlines each potential extinction-level threat, and why he believes that, between all of them, we face a one in six chance of witnessing the downfall of our species. Our episode ends on a cautiously optimistic note, however, as Yuval Noah Harari shares his thoughts on "global myth-making" and its potential role in helping us navigate through these perilous times.

About the Series

Filmmaker Jay Shapiro has produced The Essential Sam Harris, a new series of audio documentaries exploring the major topics that Sam has focused on over the course of his career. Each episode weaves together original analysis, critical perspective, and novel thought experiments with some of the most compelling exchanges from the Making Sense archive. Whether you are new to a particular topic, or think you have your mind made up about it, we think you’ll find this series fascinating.
19 snips
Aug 24, 2024 • 1h 4min

The path to utopia (with Nick Bostrom)

Nick Bostrom, a renowned philosopher and founder of the Macrostrategy Research Initiative, discusses why dystopian scenarios dominate our imagination over utopias. He explores the challenges of defining ideal societies and the paradox of abundance amidst scarcity. The conversation includes the impact of AI on human purpose, how advanced technology might reshape emotional connections, and the ethical governance needed to navigate future risks. Bostrom also ponders profound questions like our potential future in a simulation and the quest for deeper fulfillment in life.
16 snips
Mar 26, 2020 • 1h 57min

#83 – Nick Bostrom: Simulation and Superintelligence

Nick Bostrom is a philosopher at the University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risk, the simulation hypothesis, human enhancement ethics, and the risks of superintelligent AI systems, including in his book Superintelligence. I can see talking to Nick multiple times on this podcast, many hours each time, but we have to start somewhere.

Support this podcast by signing up with these sponsors:
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
Nick’s website: https://nickbostrom.com/
Future of Humanity Institute:
– https://twitter.com/fhioxford
– https://www.fhi.ox.ac.uk/
Books:
– Superintelligence: https://amzn.to/2JckX83
Wikipedia:
– https://en.wikipedia.org/wiki/Simulation_hypothesis
– https://en.wikipedia.org/wiki/Principle_of_indifference
– https://en.wikipedia.org/wiki/Doomsday_argument
– https://en.wikipedia.org/wiki/Global_catastrophic_risk

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon. Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.
OUTLINE:
00:00 – Introduction
02:48 – Simulation hypothesis and simulation argument
12:17 – Technologically mature civilizations
15:30 – Case 1: if something kills all possible civilizations
19:08 – Case 2: if we lose interest in creating simulations
22:03 – Consciousness
26:27 – Immersive worlds
28:50 – Experience machine
41:10 – Intelligence and consciousness
48:58 – Weighing probabilities of the simulation argument
1:01:43 – Elaborating on Joe Rogan conversation
1:05:53 – Doomsday argument and anthropic reasoning
1:23:02 – Elon Musk
1:25:26 – What’s outside the simulation?
1:29:52 – Superintelligence
1:47:27 – AGI utopia
1:52:41 – Meaning of life