#368 – Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization

Lex Fridman Podcast

Ask Gwern about dangers of GPT-4

How can we make better predictions about GPT-4, 5, and 6, and avoid being predictably wrong? Being wrong one time in ten is acceptable, but being wrong in the same direction repeatedly is not. Yudkowsky admits to having previously been wrong about neural networks and is hesitant to make predictions about GPT-4, though he notes that the correct view is not necessarily the opposite of his earlier one. He suggests asking Gwern Branwen, who is more knowledgeable about the topic.

