Roman Yampolskiy on Objections to AI Safety

Future of Life Institute Podcast

CHAPTER

The Exponent Curve of AI Accidents

As the number of devices increased, with different smart programs running on them, we obviously get more exposure, more users, and more impact when something goes wrong. The same exponential curve Kurzweil talks about in terms of benefits, we've had with problems. The earliest examples were false alarms for nuclear response, where a human in the loop said, "No, no, we're not deploying based on this alarm." So that was good, they stopped it, but it was already somewhat significant. If you design an AI to do X, it will fail to do X sooner or later; that's just what happens. The pattern is: if you go general, it can fail in all of those ways.
