
MLOps.community

Latest episodes

Oct 1, 2021 • 55min

The Future of ML and Data Platforms // Michael Del Balso - Erik Bernhardsson // Coffee Sessions #57

Coffee Sessions #57 with Michael Del Balso and Erik Bernhardsson, The Future of ML and Data Platforms. // Abstract Machine learning, data analytics, and software engineering are converging as data-intensive systems become more ubiquitous. Erik Bernhardsson, ex-CTO at Better and former Spotify machine learning lead, and Mike Del Balso, CEO at Tecton, former Uber machine learning lead, and co-creator of Michelangelo, sit down to chat with us today. These two jammed with us about building machine learning platform systems and teams, the modern operational data stack and how it allows more machine learning applications to thrive, and how to successfully take advantage of data in the process of building products and companies. // Bio Michael Del Balso Mike is the co-founder of Tecton, where he is focused on building next-generation data infrastructure for Operational ML. Before Tecton, Mike was the PM lead for the Uber Michelangelo ML platform. He was also a product manager at Google, where he managed the core ML systems that power Google’s Search Ads business. Prior to that, he worked on Google Maps. He holds a BSc in Electrical and Computer Engineering summa cum laude from the University of Toronto. Erik Bernhardsson Erik has been working on some crazy data stuff since early 2021 but previously spent 6 years as the CTO of Better.com, growing the tech team from 1 to 300. Before Better, Erik spent 6 years at Spotify, building the music recommendation system and managing a team focused on machine learning. // Relevant Links Building a Data Team at a Mid-stage Startup: A Short Story https://erikbern.com/2021/07/07/the-data-team-a-short-story.html --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Vishnu on LinkedIn: https://www.linkedin.com/in/vrachakonda/ Connect with Mike on LinkedIn: https://www.linkedin.com/in/michaeldelbalso/ Connect with Erik on LinkedIn: https://www.linkedin.com/in/erikbern Timestamps: [01:12] Introduction to Michael Del Balso and Erik Bernhardsson [03:23] High-level space in data [07:25] Complexity in the data world [09:13] Data lake + Databricks [15:20] Platform strategy [16:05] "Platform is when the economic value of everybody that uses this exceeds the value of the company that creates it." - Bill Gates [18:17] Centralizing platforms [21:06] Team spin-up: centralization or decentralization [27:18] Manifestations of being too far from a centralized and decentralized platform [29:24] Centralized vs Decentralized [33:33] Platform value and appropriate sizing [35:43] Building a Data Team at a Mid-stage Startup: A Short Story blog post by Erik Bernhardsson [38:51] Machine Learning as a sub-problem of Data [42:16] Operational ML [46:30] Spotify recommendations [47:13] Real-time data flows at Spotify [49:40] Data stack, Machine Learning stack, and Back-end stack reusability [51:40] Container management
Sep 27, 2021 • 52min

A Few Learnings from Building a Bootstrapped MLOps Services Startup // Soumanta Das // Coffee Sessions #56

Coffee Sessions #56 with Soumanta Das, A Few Learnings from Building a Bootstrapped MLOps Services Startup. // Abstract Determining Minimum Achievable Goals helps Yugen.ai ensure a significant amount of focus on value-add and impact before diving deep into solutions & building ML systems. In this episode, Soumanta discusses balancing ML development vs. Ops and monitoring efforts while scaling, plus their focus on improvements in small sprints. Soumanta wouldn't claim they've reached where they want to and they're still learning, so he's happy sharing successes as well as failures at Yugen.ai. // Bio Soumanta is a Co-founder at Yugen.ai, an early-stage startup in the Data Science and MLOps space. We imagine the future to be shaped by the convergence and simultaneous adoption of Algorithms, Engineering and Ops, and Responsible AI. Our mission is to help effectuate and expedite the same for our client partners by creating large-scale, reliable, and personalized ML Systems. // Relevant Links A blog Soumanta wrote when Yugen turned one: https://medium.com/swlh/yugen-ai-turns-one-1089f3bf169 Presentation, ML REPA 2021 - Title of the Talk: Reducing the distance between Prototyping and Production, Why obsessing over experimentation and iteration compounds ROIs. Slides - https://drive.google.com/file/d/1J9Cv6IPPkGpOTq8Xl_AQCKaR0-pKMUmA/view?usp=sharing Video - https://youtu.be/4PEbgQTw1W0 --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Vishnu on LinkedIn: https://www.linkedin.com/in/vrachakonda/ Connect with Soumanta on LinkedIn: www.linkedin.com/in/soumanta-das/ Timestamps: [00:00] Introduction to Soumanta Das [00:24] What's Yugen.ai's name all about? [02:02] Starting during the pandemic [05:13] Determination to continue during the pandemic [08:02] State of the art in Yugen.ai and its future [11:32] Time to value defining ML to a business [13:01] Building a strong ML engineering culture [19:06] Data scientist patterns [20:00] Helper functions [22:45] Code review [25:32] Repeatable use cases [27:48] Minimum achievable goals [30:30] Production management goals [34:30] Use cases and System design document [36:20] Practices that helped Yugen.ai build ML systems [40:05] Growing pains in the scaling process [43:54] Yugen.ai war stories [46:50] Not realizing there's something wrong when there actually is [48:10] Data observability tools [49:42] All hands on deck
Sep 21, 2021 • 48min

Learning and Teaching MLOps Applications // Salwa Muhammad // MLOps Coffee Sessions #55

Coffee Sessions #55 with Salwa Muhammad, Learning and Teaching MLOps Applications. // Abstract Salwa shared her perspective on how FourthBrain and all learners can keep their education strategy fresh enough for the current zeitgeist. Furthermore, Salwa, Demetrios, and Vishnu talked about principles of effective learning that are important to keep in mind while embarking on any educational journey. This was a great conversation with a lot of practical tips that we hope you all listen to! // Bio Salwa Nur Muhammad is the Founder/CEO of FourthBrain, an AI/ML education startup backed by Andrew Ng's AI Fund. FourthBrain trains Machine Learning engineers through hybrid 2-3 month cohort-based programs that combine the accountability of weekly instructor-led live sessions with the flexibility of online content. Salwa founded FourthBrain after executive leadership roles at Udacity and Trilogy Education Services (acquired by 2U Inc). She has over 10 years of experience leveraging technology to develop scalable education programs at higher-ed institutions and ed-tech companies, building new business units, launching international programs, and hiring and training cross-functional teams. // Relevant Links https://www.fourthbrain.ai/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Vishnu on LinkedIn: https://www.linkedin.com/in/vrachakonda/ Connect with Salwa on LinkedIn: https://www.linkedin.com/in/salwanur/ Timestamps: [00:00] Introduction to Salwa Muhammad [01:20] Salwa's journey in tech [05:30] Advice to new ML engineers [10:21] Curriculum development process [17:36] FourthBrain's current status and what's next [21:53] Hardest piece in the course [24:49] Knowing the right job in a role-confused world [30:05] Needing to upskill without going insane [35:10] Generalist vs Specialist on the T-shaped analogy [41:15] Counseling learners in terms of long-term progression [43:00] MLOps trajectories recommendation
Sep 10, 2021 • 49min

Machine Learning SRE // Niall Murphy // MLOps Coffee Sessions #54

Coffee Sessions #54 with Niall Murphy, Machine Learning SRE. // Abstract SRE is making its way into the machine learning world. Software engineering for machine learning requires reliability, performance, and maintainability. Site reliability engineering is the field that deals with reliability and ensuring constant, real-time performance. Niall Murphy, most recently Global Head of SRE at Microsoft Azure, helps us understand what SRE can do for modern ML products and teams. Building machine learning teams requires a diverse set of technical experiences, and Niall shares his thoughts on how to do that most effectively. Machine learning organizations need to start to take advantage of SRE best practices like SLOs, which Niall walks through. Production machine learning depends on high-quality software engineering, and we get Niall's take on how to ensure that in a machine learning context. // Bio Niall Murphy has been interested in Internet infrastructure since the mid-1990s. He has worked with all of the major cloud providers from their Dublin, Ireland offices - most recently at Microsoft, where he was global head of Azure Site Reliability Engineering (SRE). His books have sold approximately a quarter of a million copies worldwide, most notably the award-winning Site Reliability Engineering, and he is probably one of the few people in the world to hold degrees in Computer Science, Mathematics, and Poetry Studies. He lives in Dublin, Ireland, with his wife and two children. --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with David on LinkedIn: https://www.linkedin.com/in/aponteanalytics/ Connect with Vishnu on LinkedIn: https://www.linkedin.com/in/vrachakonda/ Connect with Niall on LinkedIn: https://www.linkedin.com/in/niallm/ Timestamps: [00:00] Introduction to Niall Murphy [00:36] SRE background to Machine Learning space transition [07:10] SLOs being a challenge in the ML space [09:42] SRE Hiring Investments [15:10] Behavior of teams concept [17:45] Challenges dealing with ML production [18:27] Update on the Reliable Machine Learning book [22:46] Monitoring [25:05] Difference between ML and SRE [29:18] Incident response in Machine Learning [34:46] Rollbacks [35:50] Machine Learning burden over time [42:42] Niall's journey to the SRE space and focus to develop himself
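For context on the SLO discussion, here is a minimal sketch (an illustration only, not something from the episode) of the error-budget arithmetic behind a request-based SLO for an ML service; the target, window, and traffic numbers are assumptions made up for the example.

```python
# Hedged sketch: error budget for a request-based SLO on an ML prediction service.
# All names and numbers below are illustrative assumptions, not from the episode.

def error_budget(slo_target: float, total_events: int) -> float:
    """How many 'bad' events (failed, slow, or low-quality predictions) the SLO allows."""
    return (1.0 - slo_target) * total_events

monthly_predictions = 10_000_000   # assumed traffic over the SLO window
slo = 0.999                        # e.g. 99.9% of predictions meet latency/quality bounds

budget = error_budget(slo, monthly_predictions)
print(f"Error budget: {budget:,.0f} bad predictions per month")  # -> 10,000
```

The point of the exercise is that once the budget is explicit, a team can decide how much of it to spend on risky model rollouts versus ordinary operational noise.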
Sep 7, 2021 • 38min

MLOps Insights // David Aponte - Demetrios Brinkmann - Vishnu Rachakonda // MLOps Coffee Sessions #53

Coffee Sessions #53 with David Aponte, Demetrios Brinkmann, and Vishnu Rachakonda, MLOps Insights. //Abstract MLOps Insights from MLOps community core organizers Demetrios Brinkmann, Vishnu Rachakonda, and David Aponte. In this conversation the guys do a deep dive on testing with respect to MLOps, talk about what they have learned recently around the ML field, and what new things are happening with the MLOps community. //Bio David Aponte David is one of the organizers of the MLOps Community. He is an engineer, teacher, and lifelong student. He loves to build solutions to tough problems and share his learnings with others. He works out of NYC and loves to hike and box for fun. He enjoys meeting new people so feel free to reach out to him! Demetrios Brinkmann At the moment Demetrios is immersing himself in Machine Learning by interviewing experts from around the world in the weekly MLOps.community meetups. Demetrios is constantly learning and engaging in new activities to get uncomfortable and learn from his mistakes. He tries to bring creativity into every aspect of his life, whether that be analyzing the best paths forward, overcoming obstacles, or building lego houses with his daughter. Vishnu Rachakonda Vishnu is the operations lead for the MLOps Community and co-hosts the MLOps Coffee Sessions podcast. He is a machine learning engineer at Tesseract Health, a 4Catalyzer company focused on retinal imaging. In this role, he builds machine learning models for clinical workflow augmentation and diagnostics in on-device and cloud use cases. Since studying bioengineering at Penn, Vishnu has been actively working in the fields of computational biomedicine and MLOps. In his spare time, Vishnu enjoys suspending all logic to watch Indian action movies, playing chess, and writing. Other Links: Continuous Delivery for Machine Learning article by Martin Fowler: https://martinfowler.com/articles/cd4ml.html To Engineer Is Human book by Henry Petroski: https://www.amazon.com/Engineer-Human-Failure-Successful-Design/dp/0679734163 ----------- Connect With Us ✌️-------------    Join our Slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with David on LinkedIn: https://www.linkedin.com/in/aponteanalytics/ Connect with Vishnu on LinkedIn: https://www.linkedin.com/in/vrachakonda/ Timestamps: [00:14] Tests and how to do tests in MLOps [09:10] Learning from Vishnu and David's new job [12:42] How will it change? [19:48] Forcing to do the right thing vs allowing to do the wrong thing [21:54] Dealing with Machine Learning Models and Data [25:10] Feature store and monitoring compare page
Aug 31, 2021 • 50min

Vector Similarity Search at Scale // Dave Bergstein // MLOps Coffee Sessions #52

Coffee Sessions #52 with Dave Bergstein, Vector Similarity Search at Scale. // Abstract Ever wonder how Facebook and Spotify now seem to know you better than your friends? Or why the search feature in some products really “gets” you while in other products it feels stuck in the '90s? The difference is vector search: a method of indexing and searching through large volumes of vector embeddings to find more relevant search results and recommendations. Dave Bergstein, the Director of Product at Pinecone, joins us to describe how vector search is used by companies today, what the challenges of deploying vector search to production applications are, and how teams can overcome those challenges even without the engineering resources of Facebook or Spotify. // Bio Dave Bergstein is Director of Product at Pinecone. Dave previously held senior product roles at Tesseract Health and MathWorks, where he was deeply involved with productionalizing AI. Dave holds a Ph.D. in Electrical Engineering from Boston University, where he studied photonics. When not helping customers solve their AI challenges, Dave enjoys walking his dog Zeus and CrossFit. --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Vishnu on LinkedIn: https://www.linkedin.com/in/vrachakonda/ Connect with Dave on LinkedIn: https://www.linkedin.com/company/pinecone-io/mycompany/
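To make the idea in the abstract concrete, here is a brute-force sketch of vector similarity search in plain NumPy; the embeddings and names are made-up assumptions, and production systems like those discussed in the episode typically use approximate nearest-neighbor indexes rather than an exact scan.

```python
# Hedged sketch: exact (brute-force) vector similarity search with cosine similarity.
# Real deployments replace this exact scan with an approximate nearest-neighbor index.
import numpy as np

def cosine_top_k(query: np.ndarray, index: np.ndarray, k: int = 5) -> np.ndarray:
    """Return the indices of the k rows of `index` most similar to `query`."""
    q = query / np.linalg.norm(query)                         # normalize the query
    m = index / np.linalg.norm(index, axis=1, keepdims=True)  # normalize each item
    scores = m @ q                                            # cosine similarities
    return np.argsort(-scores)[:k]                            # best scores first

# Toy data: 1,000 item embeddings of dimension 64 (e.g. produced by a trained model).
rng = np.random.default_rng(0)
item_embeddings = rng.normal(size=(1000, 64))
query_embedding = rng.normal(size=64)

print(cosine_top_k(query_embedding, item_embeddings))
```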
Aug 17, 2021 • 53min

ML Security: Why should you care? // Sahbi Chaieb // MLOps Coffee Sessions #51

Coffee Sessions #51 with Sahbi Chaieb, ML security: Why should you care? // Abstract Sahbi, a senior data scientist at SAS, joined us to discuss the various security challenges in MLOps. We went deep into the research he found describing various threats as part of a recent paper he wrote. We also discussed tooling options for this problem that are emerging from companies like Microsoft and Google. // Bio Sahbi Chaieb is a Senior Data Scientist at SAS. He has been working on designing, implementing, and deploying Machine Learning solutions in various industries for the past 5 years. Sahbi graduated with an Engineering degree from Supélec, France, and holds an MS in Computer Science, specialized in Machine Learning, from Georgia Tech. --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Vishnu on LinkedIn: https://www.linkedin.com/in/vrachakonda/ Connect with Sahbi on LinkedIn: https://www.linkedin.com/in/sahbichaieb/ Timestamps: [00:00] Introduction to Sahbi Chaieb [01:25] Sahbi's background in tech [02:57] Inspiration for the article [09:40] Why should you care about keeping your models secure? [12:53] Model stealing [14:16] Development practices [17:24] Other tools in the toolbox covered in the article [21:29] Stories/occurrences where data was leaked [24:45] EU regulations on robustness [26:49] Dangers of federated learning [31:50] Tooling status on model security [33:58] AI Red Teams [36:42] ML Security best practices [38:26] AI + Cyber Security [39:26] Synthetic Data [42:51] Prescription on ML Security in 5-10 years [46:37] Pain points encountered
Aug 12, 2021 • 48min

Creating MLOps Standards // Alex Chung and Srivathsan Canchi // MLOps Coffee Sessions #50

Coffee Sessions #50 with Alex Chung and Srivathsan Canchi, Creating MLOps Standards. // Abstract With the explosion in tools and opinionated frameworks for machine learning, it's very hard to define standards and best practices for MLOps and ML platforms. Drawing on their experience building AWS SageMaker and Intuit's ML platform, respectively, Alex Chung and Srivathsan Canchi talk with Demetrios and Vishnu about their experience navigating "tooling sprawl". They discuss their efforts to solve this problem organizationally with Social Good Technologies and technically with mlctl, the control plane for MLOps. // Bio Alex Chung Alex is a former Senior Product Manager at AWS Sagemaker and an ML Data Strategy and Ops lead at Facebook. He's passionate about the interoperability of MLOps tooling for enterprises as an avenue to accelerate the industry. Srivathsan Canchi Srivathsan leads the machine learning platform engineering team at Intuit. The ML platform includes real-time distributed featurization, scoring, and feedback loops. He has a breadth of experience building high-scale, mission-critical platforms. Srivathsan also has extensive experience with K8s at Intuit and previously at eBay, where his team was responsible for building a PaaS on top of K8s and OpenStack. --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Vishnu on LinkedIn: https://www.linkedin.com/in/vrachakonda/ Connect with Alex on LinkedIn: https://linkedin.com/in/alex-chung-gsd Connect with Sri on LinkedIn: https://www.linkedin.com/in/srivathsancanchi/ Timestamps: [00:00] Introduction to Alex Chung and Srivathsan Canchi [01:36] Alex's background in tech [03:07] Srivathsan's background in tech [04:36] What is SGT? [05:53] 3 categories of SGT: 1. Education, 2. Standardization, 3. Orchestration [07:00] Standardization is desirable [13:03] Perspective from both sides [13:39] Profile breakdown of Standardization [17:20] Importance of Standardization in enterprise [21:02] Tooling sprawl [24:04] Standardizing the different interfaces between MLOps tools [31:54] mlctl [33:35] mlctl's future [38:38] How mlctl helps the workflow of Intuit [41:00] CIGS evolve the different spaces
Aug 10, 2021 • 52min

Aggressively Helpful Platform Teams // Stefan Krawczyk // MLOps Coffee Sessions #49

Coffee Sessions #49 with Stefan Krawczyk, Aggressively Helpful Platform Teams. // Abstract At Stitch Fix there are 130+ “Full Stack Data Scientists” who, in addition to doing data science work, are also expected to engineer and own data pipelines for their production models. One data science team, the Forecasting, Estimation, and Demand team, was in a bind. Their data generation process was causing them iteration and operational frustrations in delivering time-series forecasts for the business. The solution? Hamilton, a novel Python micro-framework, solved their pain points by changing their working paradigm. Much of the work on Hamilton comes from a dedicated engineering team called Data Platform. Data Platform builds services, tools, and abstractions that enable data scientists to operate in a full-stack manner and avoid hand-offs. In the beginning, this meant data scientists built the web apps that served their model predictions; now, as layers of abstraction have been built up over time, they still dictate what is deployed but write much less code. // Bio Stefan loves the stimulus of working at the intersection of design, engineering, and data. He grew up in New Zealand, speaks Polish, and spent formative years at Stanford, LinkedIn, Nextdoor & Idibon. Outside of work in pre-covid times Stefan liked to 🏊, 🌮, 🍺, and ✈. // Other Links https://www.youtube.com/watch?v=B5Zp_30Knoo https://www.slideshare.net/StefanKrawczyk/hamilton-a-micro-framework-for-creating-dataframes https://www.slideshare.net/StefanKrawczyk/deployment-for-free-removing-the-need-to-write-model-deployment-code-at-stitch-fix --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Vishnu on LinkedIn: https://www.linkedin.com/in/vrachakonda/ Connect with Stefan on LinkedIn: https://linkedin.com/in/skrawczyk Timestamps: [00:00] Introduction to Stefan Krawczyk [00:37] Why Hamilton? [01:50] Stefan's background in tech [04:15] Model Life Cycle Team [06:48] Managing outcomes generated by data scientists [09:04] Teams doing the same thing [12:41] Vision of getting code down to zero [18:40] Freedom and autonomy went wrong [21:17] Sub-teams [24:00] Create and deploy models easily [24:28] Interesting challenge to define [25:15] Stitch Fix model productionization to be proud of [26:23] Hamilton to open-source [28:45] Model Envelope [31:45] Deployment for free [34:53] Use of Model Envelope in Model Artifact [37:16] Extending API definition in a model envelope for the model [39:00] Dependencies [40:08] Monitoring at scale [43:43] Advice in terms of neat abstraction [46:19] Envelope vs Container [47:33] Time frame of Hamilton's development and its benefits
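As a rough illustration of the paradigm shift described above, here is a small sketch written against the open-source Hamilton package (released after this recording; distributed as `sf-hamilton`); the feature names, data, and driver usage are assumptions based on Hamilton's public examples, not code from the episode.

```python
# Hedged sketch of Hamilton's declarative style: each function names the column it
# produces, and its parameter names declare the inputs it depends on. Hamilton
# stitches these functions into a DAG and materializes the requested dataframe.
import sys
import pandas as pd
from hamilton import driver  # assumes the open-source `sf-hamilton` package

def spend_per_signup(spend: pd.Series, signups: pd.Series) -> pd.Series:
    """Marketing spend per signup (illustrative feature)."""
    return spend / signups

def spend_mean(spend: pd.Series) -> float:
    """Average spend, reused by downstream features."""
    return spend.mean()

def spend_zero_mean(spend: pd.Series, spend_mean: float) -> pd.Series:
    """Spend with the mean removed."""
    return spend - spend_mean

if __name__ == "__main__":
    # Inputs are supplied alongside config; requested outputs are resolved by
    # walking the function graph defined in this module.
    config_and_inputs = {
        "spend": pd.Series([10.0, 20.0, 30.0]),
        "signups": pd.Series([1, 2, 4]),
    }
    dr = driver.Driver(config_and_inputs, sys.modules[__name__])
    print(dr.execute(["spend_per_signup", "spend_zero_mean"]))
```

The appeal of the style is that the dependency structure lives in plain function signatures, which is what lets a platform team change how the DAG is executed without data scientists rewriting their feature code.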
Jul 27, 2021 • 52min

Tour of Upcoming Features on the Hugging Face Model Hub // Julien Chaumond // MLOps Coffee Sessions #48

Coffee Sessions #48 with Julien Chaumond, Tour of Upcoming Features on the Hugging Face Model Hub. // Abstract Our MLOps community guest in this episode is Julien Chaumond, the CTO of Hugging Face: every data scientist’s favorite NLP Swiss army knife. Julien, David, and Demetrios spoke about many topics, including: infra for hosting models and the Model Hub; inference widgets for companies with CPUs & GPUs; AutoNLP, which trains models; and "infrastructure as a service". // Bio Julien Chaumond is Chief Technical Officer at Hugging Face, a Brooklyn- and Paris-based startup working on Machine Learning and Natural Language Processing, and is passionate about democratizing state-of-the-art AI/ML for everyone. --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with David on LinkedIn: https://www.linkedin.com/in/aponteanalytics/ Connect with Julien on LinkedIn: https://www.linkedin.com/in/julienchaumond/ Timestamps: [00:00] Introduction to Julien Chaumond [01:57] Julien's background in tech [04:35] "I have this vision of building a community where the greatest people in AI can come together and basically invent the future of Machine Learning together." [04:55] What is Hugging Face? [06:17] "We have the goal of bridging the gap between research and production on actual production use cases." [06:45] Start of open-source at Hugging Face [07:50] Chatbot experiment (coreference resolution system) - linking pronouns to the subjects of sentences [10:20] From a project to a company [11:46] "The goal was to explore in the beginning." [11:57] Importance of platform [14:25] "Transfer learning is an efficient way of doing Machine Learning. Providing your platform around that change, where people want to start from a pre-trained model and fine-tune it to their specific use case, is something that can be big, so we built some stuff to help people do that." [15:35] Narrowing down the scope of service to provide [16:27] "We have some vision of what we want to build but a lot of it is the small incremental improvements that we bring to the platform. I think it's the natural way of building stuff nowadays because Machine Learning is moving so fast." [20:00] Model Hubs [22:37] "We're guaranteeing that we don't build anything that introduces any lagging to Hugging Face because we're using GitHub. You'll have that peace of mind." [26:31] Storing model artifacts [27:00] AWS - cache - stored to edge locations all around the globe [28:39] Inference widgets powering [27:17] "For each model on the Model Hub we try to ensure that we have the metadata about the model to be able to actually run it." [32:11] Deploying infra function [32:38] "Depending on the model and library, we optimize the custom containers to make sure that they run as fast as possible on the target hardware that we have." [34:59] "Machine Learning is still pretty much hardware dependent." [36:11] Hardware usage [39:04] "CPU is super cheap. If you are able to run BERT served with a 1-millisecond [latency] on CPU because you have powerful optimizations, you don't really need GPUs anymore. It's cost-efficient and energy-efficient."
[40:30] Challenges of Hugging Face and what you learned [41:10] "It may sound like a super cliche but the team that you assembled is everything." [43:22] War stories in Hugging Face [44:12] "Our goal is more forward-looking to be helpful as much as we can to the community." [48:25] Hugging Face accessibility
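For readers who have not used the Model Hub, here is a minimal example of pulling a hosted, pretrained model and running inference locally with the transformers library; the model id is a well-known public example downloaded on first use, and this is meant only as an illustration of the Hub workflow discussed, not of the upcoming features themselves.

```python
# Minimal example: load a pretrained model from the Hugging Face Model Hub and run
# local inference. The model weights are fetched from the Hub (and cached) on first use.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("The Model Hub makes sharing pretrained models easy."))
# -> [{'label': 'POSITIVE', 'score': 0.99...}]
```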
