
Scale AI and Fine-Tuning LLMs

Tech Disruptors


Training Language Models: Tagging Data and Human Expertise

This chapter discusses how Scale AI ingests and tags data to train large language models (LLMs). It highlights the role of reinforcement learning from human feedback (RLHF) in fine-tuning models to produce creative and thoughtful outputs, explores the difference between tagging and annotating data for supervised learning, and emphasizes how human experience is incorporated into a model's responses.
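The distinction the chapter draws between supervised annotation and human preference feedback can be illustrated with a minimal sketch. All class names and example records here are hypothetical illustrations, not Scale AI's actual data schema:

```python
# Hypothetical sketch of the two data formats discussed in the chapter:
# supervised fine-tuning examples (a prompt paired with a reference response)
# versus RLHF preference data (two candidate responses ranked by a human).
from dataclasses import dataclass

@dataclass
class SFTExample:
    prompt: str
    response: str  # human-written or human-approved target output

@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # response the human annotator preferred
    rejected: str  # response the annotator ranked lower

sft = SFTExample(
    prompt="Summarize photosynthesis in one sentence.",
    response="Plants convert sunlight, water, and CO2 into sugar and oxygen.",
)

pref = PreferencePair(
    prompt="Summarize photosynthesis in one sentence.",
    chosen="Plants convert sunlight, water, and CO2 into sugar and oxygen.",
    rejected="Photosynthesis is a thing plants do.",
)

# An SFT dataset teaches the model what to say; a preference dataset
# trains a reward model on which of two outputs humans like better.
print(type(sft).__name__, type(pref).__name__)
```

In practice, supervised fine-tuning data directly supplies target outputs, while RLHF preference pairs are used to train a reward model that then guides further fine-tuning.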

