

Why Do People Who Think AI Could Kill Us All Still Work on AI?
Aug 5, 2023
The discussion examines why AI researchers continue their work despite fears of potential existential threats. It explores their conflicting motivations, including the belief that artificial general intelligence is inevitable. Ethical dilemmas, personal ambitions, and the perception that one's own research is low-risk also play pivotal roles. Listeners will gain insight into the mindset of those advancing AI technology while grappling with its risks.
Narrow vs. General AI
- AI researchers working on narrow AI, like self-driving cars, are not necessarily contributing to AGI.
- Some AI research, like interpretability, aims to make potential AGI safer.
Influencing AGI Development
- Some AI researchers believe AGI development is inevitable and hope to influence its direction.
- They might choose to work at a safety-conscious lab, believing this increases the odds of a safer AGI.
Distant AGI Risk
- Some researchers believe AGI, and the risks it poses, are far off, so they see current development as acceptable.
- Their focus may instead be on supporting alignment research in preparation for when AGI becomes a more imminent threat.