My current estimate for, for kind of a life-ending disaster is basically 1 to 50% per generation of, like, 10x-ing of compute that's being thrown at these experiments. At some point, we're going to run out of compute, because there are only so many 10x-ings you can do. So I think it would have been unreasonable for me to be less than 1% confident.

From GPT-4, it was an example of, you know, an inverse scaling law, where the behavior is getting worse with bigger models. And then all of a sudden with GPT-4, that problem is totally fixed, and there is no hindsight bias.
