
13 - First Principles of AGI Safety with Richard Ngo
AXRP - the AI X-risk Research Podcast
How to Automate Alignment Research
I don't think we have particularly strong candidates right now for ways in which you can use an AGI to prevent scaling up to dangerous regimes. I feel uncertain about how difficult or extreme governance interventions would need to be in order to actually get the world to think, "hey, let's slow down a bit, let's be much more careful." But it still feels plausible that "pivotal act" is a little bit of a misnomer, as the world sort of wakes up to the scale and scope of the problem.
How should we think about artificial general intelligence (AGI), and the risks it might pose? What constraints exist on technical solutions to the problem of aligning superhuman AI systems with human intentions? In this episode, I talk to Richard Ngo about his report analyzing AGI safety from first principles, and recent conversations he had with Eliezer Yudkowsky about the difficulty of AI alignment.
Topics we discuss, and timestamps:
- 00:00:40 - The nature of intelligence and AGI
- 00:01:18 - The nature of intelligence
- 00:06:09 - AGI: what and how
- 00:13:30 - Single vs collective AI minds
- 00:18:57 - AGI in practice
- 00:18:57 - Impact
- 00:20:49 - Timing
- 00:25:38 - Creation
- 00:28:45 - Risks and benefits
- 00:35:54 - Making AGI safe
- 00:35:54 - Robustness of the agency abstraction
- 00:43:15 - Pivotal acts
- 00:50:05 - AGI safety concepts
- 00:50:05 - Alignment
- 00:56:14 - Transparency
- 00:59:25 - Cooperation
- 01:01:40 - Optima and selection processes
- 01:13:33 - The AI alignment research community
- 01:13:33 - Updates from the Yudkowsky conversation
- 01:17:18 - Corrections to the community
- 01:23:57 - Why others don't join
- 01:26:38 - Richard Ngo as a researcher
- 01:28:26 - The world approaching AGI
- 01:30:41 - Following Richard's work
The transcript: axrp.net/episode/2022/03/31/episode-13-first-principles-agi-safety-richard-ngo.html
Richard on the Alignment Forum: alignmentforum.org/users/ricraz
Richard on Twitter: twitter.com/RichardMCNgo
The AGI Safety Fundamentals course: eacambridge.org/agi-safety-fundamentals
Materials that we mention:
- AGI Safety from First Principles: alignmentforum.org/s/mzgtmmTKKn5MuCzFJ
- Conversations with Eliezer Yudkowsky: alignmentforum.org/s/n945eovrA3oDueqtq
- The Bitter Lesson: incompleteideas.net/IncIdeas/BitterLesson.html
- Metaphors We Live By: en.wikipedia.org/wiki/Metaphors_We_Live_By
- The Enigma of Reason: hup.harvard.edu/catalog.php?isbn=9780674237827
- Draft report on AI timelines, by Ajeya Cotra: alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines
- More is Different for AI: bounded-regret.ghost.io/more-is-different-for-ai/
- The Windfall Clause: fhi.ox.ac.uk/windfallclause
- Cooperative Inverse Reinforcement Learning: arxiv.org/abs/1606.03137
- Imitative Generalisation: alignmentforum.org/posts/JKj5Krff5oKMb8TjT/imitative-generalisation-aka-learning-the-prior-1
- Eliciting Latent Knowledge: docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit
- Draft report on existential risk from power-seeking AI, by Joseph Carlsmith: alignmentforum.org/posts/HduCjmXTBD4xYTegv/draft-report-on-existential-risk-from-power-seeking-ai
- The Most Important Century: cold-takes.com/most-important-century