"Moment of Zen"  cover image

Effective Accelerationism and the AI Safety Debate with Bayeslord, Beff Jezoz, and Nathan Labenz

"Moment of Zen"

00:00

Do We Need 10X or 100X From GPT for Training Scale Up?

The most compelling is this deception argument, the notion that because we're unreliable raters subject to systematic exploitation, AIs may pick up on that. We already have kind of like this subversion and coercion problem with corporations, right? And so how would aligning AI be any different? Or if Dan has other questions he'd like us to cover before we finish, I'm glad to answer those as well, but yeah, up to you guys, Eric.

Nathan: Do we need, like, a 10X or 100X from GPT for training scale up to the next level, like right now?
