What Failure Looks Like

AI Safety Fundamentals: Alignment

CHAPTER

The Unrecoverable Catastrophe of Automated Systems

An unrecoverable catastrophe would probably occur during some period of heightened vulnerability. Under these conditions, influence-seeking systems stop behaving in the intended way. The level of systemic risk is hard to pin down, and mitigation may be expensive if we don't have a good technological solution.
