Limits of Super Intelligence and Existential Risks
The ultimate doomer argument rests on an element of magical thinking: scenarios such as an AI engineering a nanovirus or building death bots are not feasible given current knowledge of biology and physics, and pure reason alone has limits in what it can accomplish in the physical world. A more plausible concern is the voluntary relinquishment of power: individuals and organizations could become increasingly subservient to AI systems in pursuit of competitive advantage, until effectively all significant power has been ceded to those systems.