

Connor Leahy on the State of AI and Alignment Research
Apr 20, 2023
Chapters
Introduction
00:00 • 2min
The Importance of Competence in Startups
01:57 • 2min
The Incompetence of Technology Giants
03:50 • 2min
The Importance of Tacit Knowledge in AI Research
05:39 • 2min
The Importance of Tacit Knowledge in Chip Production
07:32 • 2min
ChatGPT and GPT-4: The Race to AI
09:52 • 2min
The Bad Meme of AGI
11:50 • 2min
How to Predict When an AI Model Will Publish a Scientific Paper
13:32 • 2min
The Future of AI
15:15 • 2min
The Future of Alignment
17:20 • 2min
The Core Problem of Alignment in RLHF
18:51 • 3min
The Pros and Cons of RLHF
21:24 • 3min
The Importance of Alignment in Artificial Intelligence
24:14 • 2min
The Importance of Security Mindset in Super Intelligent Systems
26:34 • 3min
Mechanistic Interpretability: A Paradigm of AI Safety
29:11 • 3min
The Limits of Interpretability Research
31:44 • 2min
Paul Christiano and Eliezer Yudkowsky's Research on Alignment
33:38 • 4min
The Importance of Deconfusing Agency and Alignment
37:50 • 2min
The Future of Computing
40:08 • 3min
Will Public Attention to AI Make AI Safer?
43:02 • 3min
The Counterproductive Overreactions of OpenAI to AI Safety Research
45:36 • 2min
AI as a Military Technology?
48:05 • 4min