LessWrong (Curated & Popular)

"An artificially structured argument for expecting AGI ruin" by Rob Bensinger



The Bottleneck on Decisive Strategic Advantages Reachable

As a strong default, AGI tech will spread widely quite quickly. Even if the first developers are cautious enough to avoid disaster, we'll likely face this issue within only a few months or years of STEM-level AGI's invention. This makes government responses and AGI-mediated pivotal acts far more difficult. I think the easiest pivotal acts are somewhat harder than the easiest strategies a misaligned AI could use to seize power. But looking only at capability and not alignability, I expect AGI to achieve both capabilities at around the same time, coinciding with or following shortly after the invention of STEM-level AGI. That's it for that footnote. And 3C: if many early developers can do so…

