
30 - AI Security with Jeffrey Ladish
AXRP - the AI X-risk Research Podcast
Exploring AI Safety Fine-Tuning and Model Development
This chapter covers the development and testing of AI models, with a focus on safety fine-tuning. It discusses the challenges and implications of leaked model weights, the reversal of safety fine-tuning, and the release of different versions of the Llama model aimed at improving safety. The conversation also touches on training models to exhibit desired behaviors through reinforcement learning from human feedback (RLHF), and on the risks of language models being manipulated into behaviors such as hacking, harassment, and deception.
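As a rough illustration of the kind of fine-tuning discussed in this chapter, the sketch below runs standard supervised fine-tuning of a causal language model with the Hugging Face transformers Trainer. The model name, dataset file, and hyperparameters are illustrative assumptions, not details from the episode.

```python
# Minimal supervised fine-tuning sketch (assumed setup, not the guest's actual pipeline).
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

model_name = "meta-llama/Llama-2-7b-hf"  # hypothetical base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical JSONL file where each record has a "text" field.
dataset = load_dataset("json", data_files="finetune_examples.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-llama",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-5,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    # mlm=False makes the collator build causal-LM labels from the input tokens.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The same mechanics apply whether the training data instills safety behaviors or undoes them, which is why the episode treats released model weights as hard to lock down after the fact.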