
Big needs in longtermism and how to help with them | Holden Karnofsky | EA Global: SF 22

EA Talks


AI Alignment and Interpretability

I think there's an increasing interest in interpretability. If you could kind of look inside an AI's digital brain and see what it's thinking, that could reduce a number of the risks we're worried about. You might have two AI systems taking different sides of a recommendation and, between themselves, kind of like two lawyers in a courtroom, they surface everything that's going on for a human to think about and consider. So those are some examples. Do you wish there was kind of more research in this area or?

