35 - Peter Hase on LLM Beliefs and Easy-to-Hard Generalization

AXRP - the AI X-risk Research Podcast

Navigating Supervision Gaps in AI Training

This chapter explores reinforcement learning from human feedback (RLHF) in training models like 'CEO bot', and how to calibrate expectations about model performance under weak supervision. It examines the performance gap between supervised and unsupervised approaches on hard tasks, and methodologies for evaluating model effectiveness across varying levels of supervision. The discussion also covers language models' latent knowledge and the challenge of keeping their answers truthful as task complexity increases.
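
As a rough illustration of the supervised/unsupervised comparison described above, here is a minimal Python sketch of a "supervision gap recovered"-style metric used in easy-to-hard generalization work: how much of the gap between an unsupervised baseline and full hard-task supervision is closed by training on easy data alone. The function name and the accuracy numbers are hypothetical, not from the episode.

```python
def supervision_gap_recovered(acc_easy_to_hard: float,
                              acc_unsupervised: float,
                              acc_hard_supervised: float) -> float:
    """Fraction of the supervision gap on hard test data recovered by
    easy-only training. 1.0 means easy supervision matches full hard
    supervision; 0.0 means it does no better than the unsupervised baseline.
    """
    gap = acc_hard_supervised - acc_unsupervised
    if gap == 0:
        raise ValueError("no gap between unsupervised and hard-supervised baselines")
    return (acc_easy_to_hard - acc_unsupervised) / gap

# Hypothetical accuracies on a hard test set:
# unsupervised baseline 60%, trained on easy data 78%, trained on hard data 82%.
print(supervision_gap_recovered(0.78, 0.60, 0.82))  # -> ~0.82
```

A value near 1.0 on this metric would suggest that cheap, easy-to-label data recovers most of the benefit of expensive hard-task supervision, which is the kind of comparison the chapter's discussion of supervision levels turns on.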
