Many arguments for AI x-risk are wrong

LessWrong (Curated & Popular)

Challenging Arguments on AI Existential Risks

Challenging the narrative on AI existential risk by scrutinizing flawed arguments and advocating evidence-based evaluation, with a focus on debunking fears about large language models and responding to an influential post on AGI.
