Shut it down?

Black Box

Limits of Superintelligence and Existential Risks

The ultimate doomer argument relies on an element of magical thinking: scenarios such as an AI engineering a nanovirus or deploying death bots are not feasible given current knowledge of biology and physics, because pure reason alone cannot achieve certain outcomes. A more grounded concern is the voluntary relinquishing of power to AI systems, in which individuals and organizations become subservient to AI for competitive advantage, until all significant power has been ceded to AI systems.

