
"LoRA Fine-tuning Efficiently Undoes Safety Training from Llama 2-Chat 70B" by Simon Lermen & Jeffrey Ladish.
LessWrong (Curated & Popular)
Introduction
This episode examines the effectiveness of safety training and presents a method of adversarially fine-tuning models to undo refusals of harmful requests. It discusses the dangers of publicly releasing model weights, weighing those risks against the benefits that open weights offer for alignment research.