
Priorities in AGI governance research | Jade Leung | EA Global: SF 22

EA Talks


Ensure That Things Go Well

Jade Leung: I take a very existential-risk-focused perspective in this talk and in my work. When I say "ensuring things go well," I basically mean avoiding these kinds of high-stakes risks. If we don't navigate this period well, we could be locked into a world with misaligned and unsafe AGI. The two threat models that loom largest are unsafe systems being deployed accidentally and misuse by malicious actors. Beyond those, there could be a whole host of other problems with large AI systems, depending on how you think things will play out.
