Ajeya Cotra on how Artificial Intelligence Could Cause Catastrophe

Future of Life Institute Podcast

CHAPTER

AI Safety Research: A Superset of AI Alignment Research

Sometimes I've heard from machine learning people that they object to this whole project of AI safety because it seems implausible to them that AI development could go radically wrong. So if we're talking about AI, for example, beginning to have a larger influence over the future of what happens on Earth than humans do, that just seems implausible to them. People discount arguments that AI systems could pose risks by discounting the possibility that AI systems would ever become collectively more powerful than humans. They disagree about the possibility or plausibility of training AI systems that powerful so soon.
