Roman Yampolskiy on Objections to AI Safety

Future of Life Institute Podcast

AI Safety in the Machine Learning Community

You'd like to see more ambitious and larger theories being published, where the claim is that this is actually a way of aligning superintelligence. I remember, maybe even before my time, Minsky published a paper showing that there are strong limitations to neural networks: a perceptron can never recognize certain shapes. Maybe something similar would not be the worst thing, if you can show, okay, this is definitely not possible, safety cannot be achieved using the transformer architecture. Evolutionary algorithms don't appear much safer, uploads don't seem much safer, but I would like to have time to look at that.
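
The Minsky reference is to the classic result that a single-layer perceptron cannot represent functions that are not linearly separable, XOR being the standard example. The sketch below is not from the episode; it is a minimal plain-Python illustration (the `train_perceptron` helper is our own) showing the perceptron rule converging on AND but never reaching zero errors on XOR.

```python
# Illustrative sketch of the Minsky-Papert style limitation mentioned above:
# a single-layer perceptron learns a linearly separable function like AND,
# but no weights exist that represent XOR, so training never reaches zero errors.

def train_perceptron(samples, epochs=100, lr=0.1):
    """Classic perceptron rule on 2-input binary data; returns final error count."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in samples:
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            delta = target - pred
            if delta != 0:
                errors += 1
                w[0] += lr * delta * x1
                w[1] += lr * delta * x2
                b += lr * delta
        if errors == 0:  # converged: the target function is linearly separable
            break
    return errors

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print("AND errors after training:", train_perceptron(AND))  # 0: learnable
print("XOR errors after training:", train_perceptron(XOR))  # > 0: not linearly separable
```

The point of the analogy in the quote is that a comparably crisp impossibility proof for a given architecture (e.g., "safety cannot be achieved with transformers") would at least settle what is off the table.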
