

AI Godfather Geoffrey Hinton Warns We're Creating 'Alien Beings' That Could 'Take Over'
Aug 13, 2025
Geoffrey Hinton, the godfather of AI and a 2024 Nobel laureate, voices deep concern about the risks of AI, including what he estimates is a 10-20% chance that it leads to human extinction. He distinguishes short-term threats, such as cyber attacks, from the long-term dangers of superintelligent AI, and proposes building 'AI mothers' with protective instincts toward humans as one possible safeguard. Throughout, he grapples with the uncertainty surrounding AI's evolution and stresses the critical need for responsible development.
AI Snips
Uncertain But Nontrivial Extinction Risk
- Geoffrey Hinton emphasizes extreme uncertainty about AI extinction probabilities and warns against overconfidence in precise numbers.
- He aims to communicate only that the risk is likely above 1% and below 99%, a range reflecting experts' gut feelings rather than precise calculation.
Treat Short And Long Risks Differently
- Hinton divides AI risks into urgent short-term misuse and longer-term existential threats, urging a different response to each.
- He recommends rapid action on misuse while researching how to prevent superintelligent AI from outgrowing its need for humans.
Urgent Cybersecurity Action Needed
- Hinton warns AI will make cyber attacks far more effective and expects novel attacks soon.
- He suggests urgent defensive work because such attacks could take down major banks.