AXRP - the AI X-risk Research Podcast

13 - First Principles of AGI Safety with Richard Ngo

Introduction

This episode includes a debate with Eliezer Yudkowsky about the difficulty of AI alignment. Richard Ngo is a researcher at OpenAI, where he works on governance and forecasting. He was previously a research engineer at DeepMind and designed the AGI Safety Fundamentals course. We'll be discussing his report, AGI Safety from First Principles.
