Latent Space: The AI Engineer Podcast

Commoditizing the Petaflop — with George Hotz of the tiny corp

NOTE

Improving Models: Training Time vs Size

A model can be improved by training it for longer or by making it bigger, but neither option is always interesting or necessary. Very large models are valuable for research labs, but startups and individual users do not need that many weights. Focusing on inference rather than training can yield better results, and the world should have a much higher ratio of inference to training compute. Blurring the line between inference and training is a positive development, and on-device fine-tuning is desirable. Comma is exploring parameter-efficient fine-tuning, accounting for real-world conditions like flat tires; being parameter-efficient is crucial for running models in cars.
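The episode does not specify Comma's method, but one common form of parameter-efficient fine-tuning is a LoRA-style low-rank update: instead of retraining a full d × d weight matrix, you learn two thin matrices A (d × r) and B (r × d) with r ≪ d, and apply W + A·B at inference. A minimal sketch in plain Python (all names here are illustrative, not Comma's code):

```python
# LoRA-style low-rank update: a sketch of why parameter-efficient
# fine-tuning is attractive on-device. Pure Python, no frameworks.

def lora_param_counts(d: int, r: int) -> tuple[int, int]:
    """Trainable parameters: full fine-tuning of a d x d layer
    vs. a rank-r update (two thin matrices A: d x r and B: r x d)."""
    full = d * d
    low_rank = d * r + r * d
    return full, low_rank

def apply_low_rank_update(W, A, B):
    """Return W + A @ B using nested lists (d x d result)."""
    d = len(W)
    r = len(A[0])
    out = [row[:] for row in W]
    for i in range(d):
        for j in range(d):
            out[i][j] += sum(A[i][k] * B[k][j] for k in range(r))
    return out

full, low = lora_param_counts(d=4096, r=8)
print(full, low)  # 16777216 full params vs 65536 trainable
```

For a 4096-wide layer at rank 8, the trainable parameter count drops by roughly 256×, which is the kind of saving that makes fine-tuning plausible on in-car hardware.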
