Nick Bostrom, a philosopher and founding director of the Future of Humanity Institute, dives deep into the complexities of artificial intelligence. He and host Sam Harris tackle the risks associated with superintelligent AI, including alignment failures and governance issues. Bostrom explores the paradox of striving for a 'solved world' and the need for ethical reflection as technology evolves. The conversation then turns to the future of work, asking how AI might redefine fulfillment and purpose in a life where traditional jobs have become obsolete.
INSIGHT
Internet Connection of AI Systems
Connecting advanced AIs to the internet is useful for their development.
Current AI systems do not pose the extreme risks that would necessitate air-gapping.
INSIGHT
Delayed Recognition of Alignment Problem
Some highly intelligent people in AI did not initially perceive the alignment problem.
Their delayed recognition highlights how difficult it is to change minds, especially as people grow older and more distinguished.
INSIGHT
Skepticism of AI Risk
Skeptics of the AI alignment problem often dismiss such concerns as a kind of religious faith.
They deny that humans could be cognitively closed to what a superintelligence understands.
In Superintelligence: Paths, Dangers, Strategies, Nick Bostrom delves into the implications of creating superintelligence, machine intelligence that could surpass human intelligence in all domains. He discusses the potential dangers, such as the loss of human control over such powerful entities, and presents strategies for ensuring that superintelligences align with human values. The book examines the 'AI control problem' and the need to endow future machine intelligence with positive values in order to prevent existential risks.
Sam Harris speaks with Nick Bostrom about ongoing progress in artificial intelligence. They discuss the twin concerns about the failure of alignment and the failure to make progress, why smart people don’t perceive the risk of superintelligent AI, the governance risk, path dependence and "knotty problems," the idea of a solved world, Keynes’s predictions about human productivity, the uncanny valley of utopia, the replacement of human labor and other activities, meaning and purpose, digital isolation and plugging into something like the Matrix, pure hedonism, the asymmetry between pleasure and pain, increasingly subtle distinctions in experience, artificial purpose, altering human values at the level of the brain, ethical changes in the absence of extreme suffering, our cosmic endowment, longtermism, problems with consequentialism, the ethical conundrum of dealing with small probabilities of large outcomes, and other topics.
If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe.
Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.