
#122 Connor Leahy: Unveiling the Darker Side of AI

The Impossibility of Alignment in Systems

Leahy: We want to build systems where we focus on a simpler property than alignment. Alignment would mean the system knows what you want, wants that too, and does everything in its power to get you what you truly want. And then I want these systems to reason like humans. This is absurdly, hilariously, impossibly hard. It's just extremely hard, especially on the first try.

Leahy: The core problem, the reason GPT-like systems are and will be very dangerous, is that their cognition is not human. There is no reason to believe it is.
