Future of Life Institute Podcast

Connor Leahy on the State of AI and Alignment Research

Apr 20, 2023
Chapters
1. Introduction (00:00 • 2min)
2. The Importance of Competence in Startups (01:57 • 2min)
3. The Incompetence of Technology Giants (03:50 • 2min)
4. The Importance of Tacit Knowledge in AI Research (05:39 • 2min)
5. The Importance of Tacit Knowledge in Chip Production (07:32 • 2min)
6. ChatGPT and GPT-4: The Race to AI (09:52 • 2min)
7. The Bad Meme of AGI (11:50 • 2min)
8. How to Predict When an AI Model Will Publish a Scientific Paper (13:32 • 2min)
9. The Future of AI (15:15 • 2min)
10. The Future of Alignment (17:20 • 2min)
11. The Core Problem of Alignment in RLHF (18:51 • 3min)
12. The Pros and Cons of RLHF (21:24 • 3min)
13. The Importance of Alignment in Artificial Intelligence (24:14 • 2min)
14. The Importance of a Security Mindset in Superintelligent Systems (26:34 • 3min)
15. Mechanistic Interpretability: A Paradigm of AI Safety (29:11 • 3min)
16. The Limits of Interpretability Research (31:44 • 2min)
17. Paul Christiano and Eliezer Yudkowsky's Research on Alignment (33:38 • 4min)
18. The Importance of Deconfusing Agency and Alignment (37:50 • 2min)
19. The Future of Computing (40:08 • 3min)
20. Will Public Attention to AI Make AI Safer? (43:02 • 3min)
21. The Counterproductive Overreactions of OpenAI to AI Safety Research (45:36 • 2min)
22. AI as a Military Technology? (48:05 • 4min)