Connor Leahy on the State of AI and Alignment Research

Future of Life Institute Podcast

CHAPTER

The Future of Alignment

I don't see any reason to expect this. My general prediction method is: if you don't know what the future is going to hold, predict that what just happened will happen again. And this is what I'm seeing. We're now in takeoff. Exponentials are happening. And will this flatten off at some point? Yeah, sure. I expect that to be post-apocalypse.

Do you think that reinforcement learning from human feedback could take us to at least somewhat safe systems?

No. There's no chance of this paradigm working.
