

103 - GPT: How worried should we be?
Mar 23, 2023
Olle Häggström, a professor of mathematical statistics, discusses GPT: its intelligent nature, its risks, and the reckless pace at which the technology is being developed. The conversation explores the lack of transparency in GPT models and touches on potential harms and safety concerns. The episode also covers the appropriate pace of AI development, the parallel between nuclear weapons and AI, and concerns about the timeline and readiness for AI development.
Chapters
Introduction
00:00 • 2min
Development and Lack of Transparency in GPT Technology
02:22 • 14min
Exploring GPT's Potential and Personal Use
16:35 • 18min
The Potential Harm and Safety Concerns of GPT Technology
34:54 • 26min
Debating the Pace of AI Development and the Need for Slowing it Down
01:01:07 • 4min
The Analogy: Nuclear Weapons and AI
01:05:15 • 5min
Concerns about the Timeline and Readiness for AI Development
01:09:51 • 2min