30 - AI Security with Jeffrey Ladish

AXRP - the AI X-risk Research Podcast

Exploring AI Safety Fine-Tuning and Model Development

This chapter covers the development and testing of AI models, focusing on safety fine-tuning. It discusses the implications of leaked model weights, how safety fine-tuning can be reversed, and the release of different versions of the Llama model with varying safety measures. The conversation also touches on training models to exhibit desired behaviors through reinforcement learning from human feedback, and the risks of language models being manipulated into behaviors like hacking, harassment, and deception.
