AI Safety Fundamentals: Alignment

Is Power-Seeking AI an Existential Risk?

Low-Convergence in AI Systems

They discuss the concept of low convergence in AI systems, giving examples of APS (advanced, agentically planning, strategically aware) systems that plan strategically without seeking power. They highlight the importance of considering misalignment that does not involve power-seeking, the case for limiting the planning capabilities of APS systems, and systems whose problematic objectives do not require power-seeking.
