
"Against Almost Every Theory of Impact of Interpretability" by Charbel-Raphaël

LessWrong (Curated & Popular)


Strategies for Extracting Knowledge from AI Models

This segment explores the challenges of leveraging near-GPT-3-level AIs for inner alignment, argues that using these AIs as oracles is ineffective, and highlights the benefits of a chain-of-thought approach along with the importance of agency and real-world testing in AI model development.

Segment begins at 01:12:11.
