I love how we went from Hitler and Stalin 20, 30 minutes ago to GPT-3 generating programs, doing program synthesis. The argument was about the morality of AI versus humans, specifically in the context of writing programs that can be destructive, like running nuclear power plants or autonomous weapon systems, for example. I'm not talking about self-directed systems that are making their own goals at a global scale. If you just talk about the deployment of technological systems that are able to see order and patterns and use this as control models, they will act on the goals that we give them. If we manage to set the correct incentives for these systems, I'm quite optimistic, so I think it's
