3min snip

#434 – Aravind Srinivas: Perplexity CEO on Future of AI, Search & the Internet

Lex Fridman Podcast

NOTE

**Importance of RLHF in Developing AI Models: Pre-Training and Post-Training**

RLHF, or Reinforcement Learning from Human Feedback, is crucial to the development of AI models. Though often dismissed as a finishing touch, RLHF plays a significant role in making systems controllable and well-behaved. Development proceeds in two phases: pre-training, which scales raw compute to build common sense into the model, and post-training, where RLHF, supervised fine-tuning, and related techniques sharpen the model's behavior. Without effective pre-training, the post-training phase lacks the foundation needed to improve the model's capabilities. By integrating RLHF into the training process, models become both more intelligent and more user-friendly, driving advances in product development and user interaction. Additionally, the RAG architecture (Retrieval-Augmented Generation) prompts a reevaluation of how much knowledge pre-training must instill: the aim is systems that learn as if taking an open-book exam, retrieving information when needed, rather than memorizing everything by brute force.
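The "open book exam" idea behind RAG can be sketched in a few lines: retrieve the most relevant documents for a query, then hand them to a generative model as context. This is a minimal illustration only; the function names are hypothetical, and the word-overlap scorer stands in for a real dense or sparse retriever.

```python
# Minimal sketch of the Retrieval-Augmented Generation (RAG) idea discussed
# in the episode: instead of relying only on knowledge memorized during
# pre-training, the model answers "open book" by consulting retrieved
# documents. All names here are illustrative, not a real library API.

def retrieve(query, documents, k=2):
    """Rank documents by naive word overlap with the query (a stand-in
    for a real retriever) and return the top-k."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Assemble retrieved context plus the question into the prompt a
    generative model would receive."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "RLHF aligns model behavior using human preference data.",
    "Pre-training builds broad knowledge via next-token prediction.",
    "Retrieval lets a model consult documents at inference time.",
]
print(build_prompt("How does retrieval help a model?", docs))
```

In a production system the overlap scorer would be replaced by embedding similarity over a vector index, but the overall shape (retrieve, then generate with context) is the same.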
