LLMs for Alignment Research: a safety priority?

LessWrong (Curated & Popular)

Introduction

This chapter discusses prioritizing safety over capabilities when using Large Language Models (LLMs) for programming tasks and technical AI safety work. The speaker emphasizes the need to make LLMs more useful for safety research and shares their experiences with different models.

Transcript