18min chapter


The Future is Fine Tuned (with Dev Rishi, Predibase)

Thinking Machines: AI & Philosophy

CHAPTER

Fine-Tuning Models for Optimal Performance

The chapter explores the challenges and benefits of fine-tuning models for specific tasks, discussing methods such as LoRA fine-tuning and LoRAX for serving many fine-tuned adapters efficiently. It delves into managing GPU resources efficiently, competition among model providers such as OpenAI, and the debate over whether smaller models are needed for different applications. The conversation also touches on OpenAI's strategic direction in model development and the business models surrounding hosting AI models.
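The LoRA fine-tuning mentioned in the summary freezes the base model weights and learns a small low-rank update instead. The core update rule is W' = W + (alpha / r) * B @ A, where A and B are small trainable matrices of rank r. As a minimal sketch of that idea only (toy dimensions, plain Python, not the Predibase or LoRAX implementation):

```python
# Toy illustration of the LoRA update rule: W' = W + (alpha / r) * B @ A.
# W (d x k) is the frozen base weight; A (r x k) and B (d x r) are the
# small trainable adapter matrices. All values here are placeholders.

def matmul(B, A):
    """Multiply a d x r matrix by an r x k matrix (plain nested lists)."""
    r, k = len(A), len(A[0])
    return [[sum(B[i][t] * A[t][j] for t in range(r)) for j in range(k)]
            for i in range(len(B))]

def lora_merge(W, A, B, alpha, r):
    """Return the effective weights W + (alpha / r) * (B @ A)."""
    BA = matmul(B, A)
    scale = alpha / r
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Frozen 2x2 base weights and a rank-1 adapter (r = 1).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]           # r x k = 1 x 2
B = [[0.5], [0.25]]        # d x r = 2 x 1
merged = lora_merge(W, A, B, alpha=2.0, r=1)
```

Because only A and B are trained, an adapter is a tiny fraction of the full model's size, which is what lets a serving layer like LoRAX pack many fine-tuned variants onto one GPU with a single shared base model.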

00:00

