“Do confident short timelines make sense?” by TsviBT, abramdemski

LessWrong (Curated & Popular)

Intro

This episode discusses timelines for artificial general intelligence (AGI) and the associated existential risks. The authors present differing views on when AGI is likely to arrive and argue that these disagreements matter for how resources in AI development and safety should be allocated.
