
Connor Leahy on the State of AI and Alignment Research

Future of Life Institute Podcast

CHAPTER

The Importance of Alignment in Artificial Intelligence

I do not expect that if I had an unaligned, existential-risk AGI and I did the ChatGPT equivalent to it, that that saves you. That gives you nothing. No part of this addresses the problem. It's like you can play shell games with where the difficulty is until you confuse yourself sufficiently into thinking it's fine. This is a very common thing. This happens in science all the time. Especially in cryptography, there's the saying that everyone can create a code complex enough that they themselves cannot break it. And this is a similar thing here, where everybody can create an alignment scheme sufficiently complicated that they themselves think it's safe. But so what? Like, if you just confuse where
