Highlights: #197 – Nick Joseph on whether Anthropic’s AI safety policy is up to the task

80k After Hours

Intro

This chapter explores Anthropic's responsible scaling policy, which defines AI safety levels and evaluations for gauging the potential risks of AI models. It underscores the need for safety measures around hazardous capabilities and explains how the policy aligns commercial incentives with the goal of deploying AI safely.
