
Roman Yampolskiy on Objections to AI Safety

Future of Life Institute Podcast

CHAPTER

AI Safety in the Machine Learning Community

I'd like to see more ambitious and larger theories being published, where the claim is that this is actually a way of aligning superintelligence. I remember, maybe even before my time, Minsky published a paper showing that there are strong limitations to neural networks: a perceptron can never recognize certain shapes. Maybe something similar would not be the worst thing, if you can show: okay, this is definitely not possible, safety cannot be achieved using the transformer architecture. Evolutionary algorithms don't appear much safer, uploads don't seem much safer, but I would like to have time to look at that.
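
A minimal sketch (mine, not from the episode) of the limitation being referred to: the classic Minsky and Papert result is that a single-layer perceptron cannot compute XOR, because XOR is not linearly separable. The brute-force search below is illustrative rather than a proof; the impossibility itself is their theoretical result.

    # Illustrative only: no linear threshold unit w.x + b > 0 classifies
    # all four XOR points correctly, since XOR is not linearly separable.
    import itertools

    # XOR truth table: ((x1, x2), target)
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    def predict(w, b, x):
        # Single threshold unit: fires iff the weighted sum exceeds -b.
        return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

    # Search a grid of weights and biases in [-2, 2]; none solves XOR.
    grid = [i / 4 for i in range(-8, 9)]
    solvable = any(
        all(predict((w0, w1), b, x) == y for x, y in data)
        for w0, w1, b in itertools.product(grid, repeat=3)
    )
    print("XOR solvable by a single perceptron on this grid:", solvable)  # False

The analogy in the quote: a proof of this kind for safety, showing that alignment cannot be achieved within a given architecture, would rule out whole research directions the way Minsky's result did for single-layer perceptrons.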
