
13 - First Principles of AGI Safety with Richard Ngo
AXRP - the AI X-risk Research Podcast
Alignment Versus Governance?
An AI that's aligned is a system that, when you give it an instruction, will follow the intention of that instruction and then come back and wait for more instructions. When I say misaligned, roughly what I mean is power-seeking: a system that is trying to gain power over the world for itself. I usually think about how big a role solving the technical problem of making AIs that are safe plays. Do you think that has a massive role in making our AGI safe? A minor role? Do you think it's basically the whole problem, or like ten percent of the problem, or something?