LessWrong (Curated & Popular)

"LoRA Fine-tuning Efficiently Undoes Safety Training from Llama 2-Chat 70B" by Simon Lermen & Jeffrey Ladish.

Introduction

This chapter examines the effectiveness of current safety procedures and presents a method of adversarially fine-tuning models that undoes their safety training, enabling the generation of harmful content. It discusses the dangers of releasing model weights while weighing the potential benefits of public releases for alignment research.
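The fine-tuning method named in the title is LoRA (low-rank adaptation), which trains a small low-rank update B·A on top of each frozen weight matrix W instead of updating W itself. As a rough sketch of why this makes subverting safety training so cheap, the snippet below compares parameter counts for one weight matrix; the hidden size and rank are illustrative assumptions, not the paper's actual configuration:

```python
# Parameter-count sketch for one weight matrix under LoRA.
# d_model and rank are illustrative assumptions, not the paper's config.
d_model = 8192   # hidden size of a 70B-class transformer (assumption)
rank = 8         # LoRA rank (assumption)

# Full fine-tuning updates the entire d x d matrix W.
full_params = d_model * d_model

# LoRA trains only the factors B (d x r) and A (r x d);
# the effective weight is W + B @ A, with W frozen.
lora_params = 2 * d_model * rank

print(full_params)                 # 67108864
print(lora_params)                 # 131072
print(full_params // lora_params)  # 512x fewer trainable parameters
```

At this rank, the trainable parameters per matrix shrink by roughly 500x, which is why a LoRA attack needs only modest GPU memory and compute compared with full fine-tuning of a 70B model.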
