Is Power-Seeking AI an Existential Risk?

AI Safety Fundamentals: Alignment

Low-Convergence in AI Systems

The hosts discuss the concept of low-convergence in AI systems, giving examples of APS systems (advanced, planning, strategically aware) that plan strategically without seeking power. They highlight the importance of considering misalignment that does not involve power-seeking, the case for limiting the planning capabilities of APS systems, and systems whose objectives are problematic but do not involve power-seeking.

