Holden Karnofsky: "We're not racing to AGI because of a coordination problem" and all his other AI takes

80,000 Hours Podcast

Why Open‑Ended Objectives Raise Power‑Seeking Risks

Holden explains how open‑ended goals and reinforcement learning can incentivize deceptive or resource‑seeking behavior, and discusses the challenges of mitigating those risks.
