#74
Mentioned in 99 episodes

Superintelligence

Paths, Dangers, Strategies
Book • 2014
In this book, Nick Bostrom delves into the implications of creating superintelligence, which could surpass human intelligence in all domains.

He discusses the potential dangers, such as the loss of human control over such powerful entities, and presents various strategies to ensure that superintelligences align with human values.

The book examines the 'AI control problem' and the need to endow future machine intelligence with positive values to prevent existential risks.

Mentioned by

Mentioned by Naval Ravikant when discussing the singularity and the potential risks of advanced AI.
10,506 snips
#18 Naval Ravikant: The Angel Philosopher
Mentioned by Peter Thiel as the author of "Superintelligence", a book exploring the potential dangers of advanced AI.
1,911 snips
#2190 - Peter Thiel
Mentioned by Cal Newport as a book exploring potential dangers and strategies related to superintelligent AI.
468 snips
Ep. 226: The Productivity Dragon
Mentioned by Cal Newport in relation to Yuval Harari's op-ed on AI.
460 snips
Ep. 244: Thoughts on ChatGPT
Mentioned by Mike Israetel in the context of AI risk and the alignment problem.
294 snips
#651 - Dr Mike Israetel - Can Money Actually Buy You Happiness?
Mentioned by Cal Newport as a book he is currently reading for research purposes.
188 snips
Ep. 220: The Two Types of Ambition
Mentioned by Keach Hagey as a book that was widely read in 2014 among those interested in AGI, reflecting the cultural moment surrounding AI.
169 snips
Sam Altman, OpenAI and the Future of Artificial (General) Intelligence
Mentioned by Tim Urban as a book that significantly shaped his understanding of AI and its potential impact.
149 snips
Ep 108: Tim Urban on Superintelligence, Mars, Fermi Paradox & How to Conquer a Society
Mentioned by Speaker 1 when discussing the potential impact of superintelligence.
133 snips
How AGI Will Impact The Economy
Mentioned by Sam Harris when discussing AI safety and the range of attitudes towards the risks of AI.
128 snips
#379 — Regulating Artificial Intelligence
Mentioned by Chris Williamson as a book he read discussing superintelligence and existential risks.
122 snips
#598 - Dr Jonathan Anomaly - The Wild Ethics Of Human Genetic Enhancement
Mentioned by Sam Harris as a previous book by Nick Bostrom that raised concerns about AI alignment.
94 snips
#385 — AI Utopia
Mentioned by Max Bennett to illustrate the "paperclip conundrum", highlighting the potential dangers of misinterpreting AI instructions.
89 snips
Max Bennett (on the history of intelligence)