20 - 'Reform' AI Alignment with Scott Aaronson

AXRP - the AI X-risk Research Podcast

The Future of AI Alignment

OpenAI approached me last spring with the proposal to take a year off and think about, you know, the foundations of AI safety for them. I was very skeptical at first: why on earth do you want me, right? I'm a quantum computing theorist; there are people who are so much more knowledgeable about AI than I am. So now, after 20 years out of AI, I'm sort of dipping my foot back into it.

