
LLMs for Alignment Research: a safety priority?

LessWrong (Curated & Popular)


Exploring the Usefulness of Programming and Philosophy for Safety Research with Modern LLMs

An exploration of how programming and philosophy figure in safety research with modern LLMs, noting their current limitations and arguing for cautious use of AI to improve safety research without introducing additional risk.
