
20 - 'Reform' AI Alignment with Scott Aaronson

AXRP - the AI X-risk Research Podcast


The Irony of AI Safety

"I knew it was going to be an extremely exciting year for AI," he says. "We still don't have a mathematical theory, but we can at least formulate theories and see which ones are useful" He adds that Ellie Ezer has switched positions about the value of AI safety research. 'He spent decades saying that everyone should be working on it is, you know, the most important thing in the world'

