4min snip


AI Fundamentals: Datasets 101

Latent Space: The AI Engineer Podcast — Practitioners talking LLMs, CodeGen, Agents, Multimodality, AI UX, GPU Infra and all things Software 3.0

NOTE

The Role of Data Size and Training Parameters in Language Model Optimization

DeepMind created GPT-3-style models called Gopher and Chinchilla, which matched or beat GPT-3 despite being 10x smaller.

GPT-3's 175 billion parameters were excessive; under Chinchilla scaling, training a model that size would call for a huge amount of data (3.5 trillion tokens).

The foundation model space is rapidly evolving, with new papers being published every 18-24 months.

The Llama-optimal approach aims for smaller models trained on far more data (roughly 200 tokens per parameter) to improve inference time.

As AI transitions from research to practical applications, inference time, cost, and memory are becoming important considerations.

The optimal training strategy depends on the specific use case and the lifetime cost of the model (a rough sketch of this trade-off follows below).

Language models (LMs) can be viewed as databases, and training can be viewed as a way of compressing a dataset.
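To make the lifetime-cost point concrete, here is a minimal back-of-the-envelope sketch (not from the episode) comparing a Chinchilla-optimal model (~20 tokens per parameter) with a smaller, heavily over-trained Llama-style model (~200 tokens per parameter). It uses the common approximations of ~6·N·D FLOPs for training and ~2·N FLOPs per generated token at inference; the model sizes and lifetime serving volume are assumptions for illustration only.

```python
# Rough comparison of "Chinchilla-optimal" vs. Llama-style over-training.
# Approximations (standard back-of-the-envelope estimates):
#   training FLOPs  ~= 6 * N * D        (N = parameters, D = training tokens)
#   inference FLOPs ~= 2 * N per generated token

def lifetime_flops(params: float, tokens_per_param: float, served_tokens: float) -> float:
    """Total compute to train the model and then serve `served_tokens` at inference."""
    training = 6 * params * (tokens_per_param * params)
    inference = 2 * params * served_tokens
    return training + inference

SERVED_TOKENS = 1e13  # assumed lifetime serving volume, for illustration

# Larger model near Chinchilla-optimal vs. smaller, heavily over-trained model.
big = lifetime_flops(params=70e9, tokens_per_param=20, served_tokens=SERVED_TOKENS)
small = lifetime_flops(params=7e9, tokens_per_param=200, served_tokens=SERVED_TOKENS)

print(f"70B @  20 tok/param: {big:.2e} lifetime FLOPs")
print(f" 7B @ 200 tok/param: {small:.2e} lifetime FLOPs")
```

With a large enough serving volume, the smaller over-trained model comes out far cheaper over its lifetime even though it is less compute-optimal at training time, which is the trade-off the takeaways describe.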
