
Unsupervised Learning
Ep 47: Chief AI Scientist of Databricks Jonathan Frankle on Why New Model Architectures are Unlikely, When to Pre-Train or Fine Tune, and Hopes for Future AI Policy
Nov 12, 2024
Jonathan Frankle, Chief AI Scientist at Databricks, brings deep insight into the fast-paced world of AI. He discusses the evolution of AI models, explaining why transformers won out over LSTMs, and shares strategic lessons from Databricks' acquisition of MosaicML. Frankle emphasizes the importance of effective AI evaluation benchmarks and close customer collaboration when developing AI solutions. Ethical considerations and responsible AI policy also take center stage, as he highlights the need for transparency and community engagement in a rapidly evolving landscape.
01:04:25
Podcast summary created with Snipd AI
Quick takeaways
- Jonathan Frankle highlights the need for companies to experiment with different AI approaches, starting with simple tasks before making significant investments.
- Robust evaluation methods are crucial, with Frankle emphasizing the importance of real-world benchmarks to effectively gauge AI performance.
Deep dives
Navigating AI Model Implementation
Enterprises often struggle to decide when to train their own AI models and when to leverage existing ones. Jonathan Frankle emphasizes keeping options open and experimenting with a range of approaches, from prompt engineering to full-scale pre-training. He stresses that starting small is essential: companies should begin with simple AI tasks to gauge effectiveness before making significant investments. This iterative approach helps organizations understand their needs and the potential value of AI solutions, as in the sketch below.
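The episode doesn't prescribe an implementation, but the "start small" idea can be sketched roughly in code: evaluate a cheap prompt-engineering baseline on a handful of real examples before deciding whether fine-tuning or pre-training is worth the cost. The `call_model` function, the eval examples, and the prompt template below are illustrative placeholders, not anything discussed on the show.

```python
# Minimal sketch (not from the episode): score a prompt-engineering baseline on a
# small, representative task set before committing to fine-tuning or pre-training.
# `call_model` is a hypothetical stand-in for whatever model endpoint is already
# available (hosted API or self-hosted model).

from typing import Callable

# A handful of real examples from the target workload, with expected answers.
EVAL_SET = [
    {"input": "Summarize: The invoice is overdue by 30 days.", "expected": "overdue"},
    {"input": "Summarize: Payment was received on time.", "expected": "on time"},
]

PROMPT_TEMPLATE = "You are a billing assistant. {task}\nAnswer briefly."


def evaluate_baseline(call_model: Callable[[str], str]) -> float:
    """Return the fraction of eval examples the prompted model handles acceptably."""
    hits = 0
    for example in EVAL_SET:
        prompt = PROMPT_TEMPLATE.format(task=example["input"])
        output = call_model(prompt)
        # Crude containment check; a real evaluation would use task-specific scoring.
        if example["expected"].lower() in output.lower():
            hits += 1
    return hits / len(EVAL_SET)
```

If the baseline score is already acceptable, heavier investment may be unnecessary; if not, the same evaluation set becomes the yardstick for comparing fine-tuned or pre-trained alternatives.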