LessWrong (Curated & Popular)

Counterarguments to the basic AI x-risk case

Nov 4, 2022
Chapters
1. Introduction • 00:00 • 3min
2. Arguments for AI Risk • 02:53 • 3min
3. Utility Maximization: A Concept of Goal-Directedness • 05:52 • 3min
4. The Evolution of Utility Maximizers in AI • 08:56 • 3min
5. The Importance of Coherence in AI • 11:39 • 4min
6. The Incoherence Gap in AI Systems • 15:31 • 2min
7. The Importance of Utility Maximization in the AI Economy • 17:40 • 4min
8. How to Align AI to Human-Like Goals • 21:26 • 2min
9. The Disadvantages of AI Utility Functions • 23:19 • 3min
10. The Importance of Fragile Values in AI • 26:00 • 3min
11. AI and the Future of the Universe • 28:39 • 3min
12. The Provenance of Human Triumph • 31:28 • 4min
13. AI Systems and the Individual Intelligence Model • 35:26 • 2min
14. The Importance of Sharing Technology • 37:35 • 3min
15. The Importance of Functional Connection in AI Success • 40:07 • 3min
16. The Disadvantages of AI Systems • 42:47 • 4min
17. The Differences Between Humans With Tools and Agentic AI • 46:28 • 2min
18. The Importance of Trustworthiness in AI • 48:12 • 5min
19. The Future of AI • 53:41 • 3min
20. The Importance of Intelligence in Decision Making • 56:41 • 3min
21. The Importance of Intelligence in Goals • 59:47 • 3min
22. The Importance of New Cognitive Labour in AI • 01:02:48 • 3min
23. The Importance of Feedback Loops in AI Performance • 01:06:02 • 2min
24. The Argument for AI Safety • 01:08:08 • 3min
25. The Future of AI • 01:11:09 • 4min