
#434 – Aravind Srinivas: Perplexity CEO on Future of AI, Search & the Internet

Lex Fridman Podcast


*Importance of RLHF in Pre-Training and Post-Training of AI Models*

RLHF, or Reinforcement Learning from Human Feedback, is crucial in the development of AI models. Though often dismissed as a finishing touch, RLHF plays a significant role in making systems controllable and well-behaved. Training splits into two phases: pre-training, which relies on raw scaling of compute to build common sense into the model, and post-training, where RLHF, supervised fine-tuning, and other techniques refine the model's behavior. Without effective pre-training, the post-training phase lacks the foundation needed to improve the model's capabilities. By integrating RLHF into the training process, models become both more intelligent and more user-friendly, driving advances in product development and user interaction. Additionally, the RAG (Retrieval-Augmented Generation) architecture prompts a reevaluation of how much knowledge must be baked in during pre-training: rather than brute-force memorization, the goal is a system that learns as in an open-book exam, retrieving information when it needs it.
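The "open-book exam" idea behind RAG can be sketched in a few lines: first retrieve the most relevant documents, then hand them to a generator as context. This is a minimal toy illustration, not any real retrieval library; the function names (`retrieve`, `build_prompt`) and the word-overlap scoring are assumptions made for the sketch, standing in for a real dense or keyword retriever and a language model.

```python
# Toy sketch of Retrieval-Augmented Generation (RAG): answer "open book"
# by retrieving context instead of relying only on pre-trained memory.
# The retriever here is a naive word-overlap ranker, a stand-in for a
# real dense/keyword retriever; names are illustrative, not a real API.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query and return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble the augmented prompt a generator model would receive."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "RLHF aligns a pre-trained model with human preferences.",
    "Pre-training builds common sense by scaling compute over raw text.",
    "Retrieval lets a model consult sources at inference time.",
]
prompt = build_prompt("How does pre-training build common sense?", docs)
print(prompt)
```

In a real pipeline the retrieved context would come from an index over a document corpus and the prompt would be passed to a language model; the point of the sketch is only the retrieve-then-generate control flow.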
