Donny Greenberg, Co-founder and CEO of Runhouse and former product lead for PyTorch at Meta, shares insights on simplifying machine learning infrastructure. He discusses the challenges of traditional MLOps tools and presents Runhouse's serverless approach, which reduces the complexity of moving from development to production. Greenberg emphasizes the importance of flexible, collaborative environments and innovative fault tolerance in ML workflows. He also touches on the need for integration with existing DevOps practices to meet the evolving demands of AI and ML.
Quick takeaways
The evolution of ML infrastructure emphasizes the need for unopinionated tools that allow teams to choose their preferred methods and resources.
Organizations must adapt and integrate ML operations with traditional data engineering practices to address the unique challenges of modern machine learning workflows.
Democratizing access to sophisticated ML infrastructure fosters collaboration between teams, ultimately enhancing operational efficiencies and enabling more effective use of first-party data.
Deep dives
Understanding AI and ML Infrastructure Trends
The landscape of machine learning (ML) and artificial intelligence (AI) infrastructure has evolved significantly, with different waves reflecting changing needs and technologies. Initially, infrastructure solutions appeared opinionated, mimicking certain existing practices without considering the unique requirements of AI workflows. Over time, a more mature understanding emerged, emphasizing the necessity of handling large-scale data and computations that individual devices cannot manage. This shift led to the adoption of platforms that can efficiently orchestrate tasks across multiple compute environments, accommodating the diverse needs of modern AI systems.
Diverse Requirements in ML Workflows
The distinction between AI and ML becomes apparent when examining how organizations mobilize data for strategic decision-making versus product improvement. ML often involves optimizing and retraining models based on first-party data to enhance products, creating a need for robust infrastructure that supports frequent updates and iterations. As companies increasingly adopt transformer models for their AI applications, understanding the balance between traditional ML methods and AI innovations becomes crucial. This evolution requires not only technical adjustments but also an organizational shift to accommodate the complexity of employing both paradigms effectively.
Challenges in MLOps and Data Engineering
The rise of MLOps has highlighted significant distinctions between ML infrastructure and conventional data engineering practices. While ML necessitates continuous model training with higher fault tolerance and diverse workflows, traditional data processes prioritize standardized operations and less frequent changes. The demands of ML often lead to a disconnect as teams struggle to adapt existing tools previously designed for BI systems to the rapidly evolving requirements of machine learning. This fragmentation underscores the need for solutions that integrate seamlessly within an organization’s established data platforms while addressing the unique challenges posed by ML.
The Need for Unopinionated ML Solutions
There is a growing emphasis on making ML infrastructure less opinionated, fostering an environment where teams can freely choose their preferred tools and methods. This flexibility allows teams to leverage existing compute resources without being constrained by rigid frameworks that dictate how their workflows should be structured. By creating an unopinionated platform that works with the diverse tools and infrastructure teams already have, organizations can achieve better performance and scalability. As teams seek out sophisticated solutions that eliminate the frustrations of translating research activities into production-ready systems, such adaptability will become increasingly valuable.
Future Trends and Directions in ML and AI
The future of ML and AI hinges on organizations' ability to democratize access to sophisticated infrastructure, allowing more practitioners to engage with these technologies. As initial excitement around generative AI evolves, there is a renewed focus on mobilizing first-party data and refining ML processes within organizations. This transition emphasizes the necessity for extensive collaboration between data engineering and ML teams, recognizing that enhanced integration can lead to overall operational improvements. By driving changes that embrace shared services and reduce the complexity around ML workflows, businesses can unlock previously inaccessible opportunities and efficiencies.
Summary
Machine learning workflows have long been complex and difficult to operationalize. They are often characterized by a period of research, resulting in an artifact that gets passed to another engineer or team to prepare for running in production. The MLOps category of tools has tried to build a new set of utilities to reduce that friction, but has instead introduced a new barrier at the team and organizational level. Donny Greenberg took the lessons that he learned on the PyTorch team at Meta and created Runhouse. In this episode he explains how, by reducing the number of opinions in the framework, he has also reduced the complexity of moving from development to production for ML systems.
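To make the "fewer opinions" idea concrete, the sketch below shows the style of workflow discussed in the episode: taking an ordinary Python function and dispatching it to remote compute without repackaging it into a DAG or rewriting it for a deployment framework. It is loosely modeled on the hello-world examples in Runhouse's public documentation; the cluster name, instance type, and exact call signatures are illustrative assumptions and may differ from the library's current API.

```python
# Illustrative sketch only -- loosely based on Runhouse's public examples.
# Names, arguments, and behavior are assumptions, not a definitive API reference.
import runhouse as rh

def embed_batch(texts):
    # Ordinary Python: no DAG definition or packaging step required.
    return [len(t) for t in texts]  # stand-in for real model inference

# Describe (or reuse) compute in your own cloud account.
gpu = rh.cluster(name="rh-a10g", instance_type="A10G:1", provider="aws")

# Send the local function to that cluster and call it as if it were local.
remote_embed = rh.function(embed_batch).to(gpu)

print(remote_embed(["hello", "world"]))
```

Under this model, the same function can be iterated on from a laptop or notebook and then called from a pipeline or shared across a team, which is the dev-to-prod and collaboration story the interview digs into.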
Announcements
Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
Your host is Tobias Macey and today I'm interviewing Donny Greenberg about Runhouse and the current state of ML infrastructure
Interview
Introduction
How did you get involved in machine learning?
What are the core elements of infrastructure for ML and AI?
How has that changed over the past ~5 years?
For the past few years the MLOps and data engineering stacks were built and managed separately. How does the current generation of tools and product requirements influence the present and future approach to those domains?
There are numerous projects that aim to bridge the complexity gap in running Python and ML code from your laptop up to distributed compute on clouds (e.g. Ray, Metaflow, Dask, Modin, etc.). How do you view the decision process for teams trying to understand which tool(s) to use for managing their ML/AI developer experience?
Can you describe what Runhouse is and the story behind it?
What are the core problems that you are working to solve?
What are the main personas that you are focusing on? (e.g. data scientists, DevOps, data engineers, etc.)
How does Runhouse factor into collaboration across skill sets and teams?
Can you describe how Runhouse is implemented?
How has the focus on developer experience informed the way that you think about the features and interfaces that you include in Runhouse?
How do you think about the role of Runhouse in the integration with the AI/ML and data ecosystem?
What does the workflow look like for someone building with Runhouse?
What is involved in managing the coordination of compute and data locality to reduce networking costs and latencies?
What are the most interesting, innovative, or unexpected ways that you have seen Runhouse used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Runhouse?
When is Runhouse the wrong choice?
What do you have planned for the future of Runhouse?
What is your vision for the future of infrastructure and developer experience in ML/AI?
From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.