AXRP - the AI X-risk Research Podcast

13 - First Principles of AGI Safety with Richard Ngo

Introduction

This episode includes a debate with Eliezer Yudkowsky about the difficulty of AI alignment. Richard Ngo is a researcher at OpenAI, where he works on governance and forecasting. He was previously a research engineer at DeepMind and designed the AI Safety Fundamentals course. We'll be discussing his report, "AGI Safety from First Principles".

Transcript
