
The AI suicide race, with Jaan Tallinn

London Futurists

How to Persuade People of the Timeline of AI

There is this discussion about whether we can get safety from uncertainty. It feels like we can, but not really. In order to be strategically consistent in an uncertain situation, you also need to take into account how much time you need to prepare. Is it better to be early than late? But I guess they would argue that they're not uncertain. Yann LeCun would say that large language models are actually an off-ramp on the road to AGI, that they're not going to get there, and he's really confident of that, so he would say he's not uncertain. At that point you basically need some kind of expert discussion, and I expect Yann LeCun would lose that discussion.
