

#74
Mentioned in 99 episodes
Superintelligence
Paths, Dangers, Strategies
Book • 2014
In this book, Nick Bostrom delves into the implications of creating superintelligence, which could surpass human intelligence in all domains.
He discusses the potential dangers, such as the loss of human control over such powerful entities, and presents various strategies to ensure that superintelligences align with human values.
The book examines the 'AI control problem' and the need to endow future machine intelligence with positive values to prevent existential risks.
Mentioned by
Mentioned by Naval Ravikant when discussing the singularity and the potential risks of advanced AI.

10,506 snips
#18 Naval Ravikant: The Angel Philosopher
Mentioned by Peter Thiel as the author of "Superintelligence", a book exploring the potential dangers of advanced AI.


1,911 snips
#2190 - Peter Thiel
Mentioned by Cal Newport as a book exploring potential dangers and strategies related to superintelligent AI.


468 snips
Ep. 226: The Productivity Dragon
Mentioned by Keach Hagey as a book that was widely read in 2014 among those interested in AGI, reflecting the cultural moment surrounding AI.


169 snips
Sam Altman, OpenAI and the Future of Artificial (General) Intelligence
Mentioned by Tim Urban as a book that significantly shaped his understanding of AI and its potential impact.


149 snips
Ep 108: Tim Urban on Superintelligence, Mars, Fermi Paradox & How to Conquer a Society
Mentioned by Speaker 1 when discussing the potential impact of superintelligence.

133 snips
How AGI Will Impact The Economy
Mentioned by Sam Harris when discussing AI safety and the range of attitudes towards the risks of AI.


128 snips
#379 — Regulating Artificial Intelligence
Recommended by Wietse Hage as a good introduction to superintelligence for the general public.


107 snips
Zo gebruik je de nieuwe ChatGPT agents + AI superintelligentie nadert, wereldmachten vechten om controle | Poki S03E19
Mentioned by Max Bennett to illustrate the "paperclip maximizer" thought experiment, highlighting the potential dangers of misinterpreting AI instructions.

89 snips
Max Bennett (on the history of intelligence)