In this discussion, Steven Byrnes, an author and AI researcher, dives into the provocative ideas surrounding AI's potential for explosive growth. He elaborates on the concept of ‘foom’, in which an AI could rocket from unimpressive capabilities to superintelligence within a very short window, potentially built by a small team with very little compute. Byrnes critiques mainstream AI safety researchers' dismissal of this rapid-takeoff scenario and lays out increasingly radical perspectives on how AI capability acquisition may unfold. He also addresses strategic risks, including the dangers of egregiously misaligned AI and the importance of proactive safety measures, worked out before the new paradigm even seems relevant, to mitigate potential disaster.
INSIGHT
Support for Fast 'Foom' Scenario
Steven Byrnes supports a fast 'foom' scenario where a small team rapidly creates ASI with little compute via a new AI paradigm.
He disagrees strongly with mainstream AI safety researchers who mostly dismiss this rapid takeoff possibility.
INSIGHT
ASI From Different Paradigm Not LLMs
Byrnes rejects the idea that LLMs will scale smoothly to ASI.
He expects ASI to emerge from a different, brain-like AI paradigm rather than incremental progress from LLMs.
INSIGHT
Human Brain as AI Existence Proof
The human brain offers an existence proof of a yet-to-be-discovered, simple-ish core of intelligence.
Current LLMs lack this core of intelligence, which explains why brain-like AGI remains elusive.
This is a two-post series on AI “foom” (this post) and “doom” (next post).
A decade or two ago, it was pretty common to discuss “foom & doom” scenarios, as advocated especially by Eliezer Yudkowsky. In a typical such scenario, a small team would build a system that would rocket (“foom”) from “unimpressive” to “Artificial Superintelligence” (ASI) within a very short time window (days, weeks, maybe months), involving very little compute (e.g. “brain in a box in a basement”), via recursive self-improvement. Absent some future technical breakthrough, the ASI would definitely be egregiously misaligned, without the slightest intrinsic interest in whether humans live or die. The ASI would be born into a world generally much like today's, a world utterly unprepared for this new mega-mind. The extinction of humans (and every other species) would rapidly follow (“doom”). The ASI would then spend [...]
---
Outline:
(00:11) 1.1 Series summary and Table of Contents
(02:35) 1.1.2 Should I stop reading if I expect LLMs to scale to ASI?
(04:50) 1.2 Post summary and Table of Contents
(07:40) 1.3 A far-more-powerful, yet-to-be-discovered, simple(ish) core of intelligence
(10:08) 1.3.1 Existence proof: the human cortex
(12:13) 1.3.2 Three increasingly-radical perspectives on what AI capability acquisition will look like
(14:18) 1.4 Counter-arguments to there being a far-more-powerful future AI paradigm, and my responses
(14:26) 1.4.1 Possible counter: If a different, much more powerful, AI paradigm existed, then someone would have already found it.
(16:33) 1.4.2 Possible counter: But LLMs will have already reached ASI before any other paradigm can even put its shoes on
(17:14) 1.4.3 Possible counter: If ASI will be part of a different paradigm, who cares? It's just gonna be a different flavor of ML.
(17:49) 1.4.4 Possible counter: If ASI will be part of a different paradigm, the new paradigm will be discovered by LLM agents, not humans, so this is just part of the continuous 'AIs-doing-AI-R&D' story like I've been saying
(18:54) 1.5 Training compute requirements: Frighteningly little
(20:34) 1.6 Downstream consequences of new paradigm with frighteningly little training compute
(20:42) 1.6.1 I'm broadly pessimistic about existing efforts to delay AGI
(23:18) 1.6.2 I'm broadly pessimistic about existing efforts towards regulating AGI
(24:09) 1.6.3 I expect that, almost as soon as we have AGI at all, we will have AGI that could survive indefinitely without humans
(25:46) 1.7 Very little R&D separating 'seemingly irrelevant' from 'ASI'
(26:34) 1.7.1 For a non-imitation-learning paradigm, getting to 'relevant at all' is only slightly easier than getting to superintelligence
(31:05) 1.7.2 Plenty of room at the top
(31:47) 1.7.3 What's the rate-limiter?
(33:22) 1.8 Downstream consequences of very little R&D separating 'seemingly irrelevant' from 'ASI'
(33:30) 1.8.1 Very sharp takeoff in wall-clock time
(35:34) 1.8.1.1 But what about training time?
(36:26) 1.8.1.2 But what if we try to make takeoff smoother?
(37:18) 1.8.2 Sharp takeoff even without recursive self-improvement
(38:22) 1.8.2.1 ...But recursive self-improvement could also happen
(40:12) 1.8.3 Next-paradigm AI probably won't be deployed at all, and ASI will probably show up in a world not wildly different from today's
(42:55) 1.8.4 We better sort out technical alignment, sandbox test protocols, etc., before the new paradigm seems even relevant at all, let alone scary
(43:40) 1.8.5 AI-assisted alignment research seems pretty doomed