Highlights: #197 – Nick Joseph on whether Anthropic’s AI safety policy is up to the task

Chapter: Intro

This chapter explores Anthropic's responsible scaling policy, which defines AI safety levels and evaluations for gauging the risks posed by AI models. It underscores the need for safety measures against hazardous capabilities, and discusses how the policy aims to align Anthropic's commercial incentives with the goal of deploying AI safely.
