5min chapter

"Moment of Zen"  cover image

E12: Effective Accelerationism and the AI Safety Debate with Bayeslord, Beff Jezoz, and Nathan Labenz

"Moment of Zen"

CHAPTER

Is There a Risk to Scaling AI?

The number of humans we can support in our civilization depends on how much energy we have. So I think when you start to allow that safetyism culture to take hold in a centralized, bureaucratic way, civilizational progress is going to be vastly constrained. There are huge opportunity costs to leaving certain high-potential technologies behind or decelerating their adoption, because there's huge upside being left on the table.

00:00
