
17 - Training for Very High Reliability with Daniel Ziegler

AXRP - the AI X-risk Research Podcast


The Contribution of Adversarial Training

Daniel Ziegler: This paper is sort of focusing on, like, lowering the chance of catastrophic failure. We're interested in this kind of unrestricted adversarial examples setting. The other part is that we want to aim for a degree of robustness much higher than what you normally see in academia.

