#2017
Mentioned in 15 episodes
AI 2027
Book • 2025
Mentioned by Ben Mann as forecasting that a recursive self-improvement loop will lead to superhuman AI by 2028.

614 snips
Will we have Superintelligence by 2028? With Anthropic’s Ben Mann
Mentioned by the hosts while discussing scenarios in which AI could use diplomacy.

121 snips
OpenAI greift Msft Office / Google Workspace an & Digitale Bücherverbrennung bei xAI #469
Mentioned by Chris Best as a near-future science fiction story about the US-China race pulling us into AGI.

111 snips
What Replaces Twitter? With Noah Smith & Chris Best, CEO of Substack
Mentioned when Daniel Kokotajlo explains that AI 2027 illustrates a situation where all the important decisions are made behind closed doors.

95 snips
Why the AI Race Ends in Disaster (with Daniel Kokotajlo)
Mentioned as a scenario that, at this level of detail, represents Daniel Kokotajlo's best guess.

95 snips
Why the AI Race Ends in Disaster (with Daniel Kokotajlo)
Mentioned by Nathan Labenz, who announced Daniel Kokotajlo's $100,000 donation to support the AI Village.

91 snips
The AI Village: Previewing the Giga-Agent Future with Adam Binksmith, Founder of AI Digest
Mentioned by Daniel Kokotajlo as a forecast of where AI development might take us in the near future.

60 snips
Daniel Kokotajlo Forecasts the End of Human Dominance
Mentioned by Paul Smith as a report that has caused a stir in Silicon Valley, describing a fictional scenario about AI surpassing human intelligence.

33 snips
Apocalypse or a four-day week? What AI might mean for you
Mentioned by Chris Best as a near-future science fiction story about the next couple of years and how the US-China race pulls us into AGI.

17 snips
The Future of Media with Noah Smith and Chris Best, CEO of Substack
Mentioned during a discussion of a New York Times article.

11 snips
Digitalia #768 - Retrogaming per antichi Romani
Mentioned by the speaker when discussing the plausibility of AGI being developed before the end of the decade.

“Training AGI in Secret would be Unsafe and Unethical” by Daniel Kokotajlo

Carl Feynman, AI Engineer & Son of Richard Feynman, Says Building AGI Likely Means Human EXTINCTION!