LLMs for Alignment Research: a safety priority?

LessWrong (Curated & Popular)

CHAPTER

Exploring the Usefulness of Programming and Philosophy for Safety Research with Modern LLMs

Explores the role of programming and philosophy in safety research with modern LLMs, highlighting their current limitations and suggesting cautious use of AI to improve safety without introducing additional risks.
