
AI Engineering Podcast

Latest episodes

Mar 9, 2023 • 35min

Real-Time Machine Learning Has Entered The Realm Of The Possible

Summary

Machine learning models have predominantly been built and updated in a batch modality. While this is operationally simpler, it doesn't always provide the best experience or capabilities for end users of the model. Tecton has been investing in the infrastructure and workflows that enable building and updating ML models with real-time data to allow you to react to real-world events as they happen. In this episode CTO Kevin Stumpf explores the benefits of real-time machine learning and the systems that are necessary to support the development and maintenance of those models. (A short illustrative code sketch follows the links at the end of this entry.)

Announcements

- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Your host is Tobias Macey and today I'm interviewing Kevin Stumpf about the challenges and promise of real-time ML applications.

Interview

- Introduction
- How did you get involved in machine learning?
- Can you describe what real-time ML is and some examples of where it might be applied?
- What are the operational and organizational requirements for being able to adopt real-time approaches for ML projects?
- What are some of the ways that real-time requirements influence the scale/scope/architecture of an ML model?
- What are some of the failure modes for real-time vs. analytical or operational ML?
- Given the low latency between source/input data being generated or received and a prediction being generated, how does that influence susceptibility to e.g. data drift? Data quality and accuracy also become more critical. What are some of the validation strategies that teams need to consider as they move to real-time?
- What are the most interesting, innovative, or unexpected ways that you have seen real-time ML applied?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on real-time ML systems?
- When is real-time the wrong choice for ML?
- What do you have planned for the future of real-time support for ML in Tecton?

Contact Info

- LinkedIn
- @kevinmstumpf on Twitter

Parting Question

- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements

- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links

- Tecton
- Podcast Episode
- Data Engineering Podcast Episode
- Uber Michelangelo
- Reinforcement Learning
- Online Learning
- Random Forest
- ChatGPT
- XGBoost
- Linear Regression
- Train-Serve Skew
- Flink
- Data Engineering Podcast Episode

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
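Train-serve skew, one of the linked topics, is a common failure mode when moving to real-time: the feature logic used offline for training quietly diverges from the logic used online at serving time. The sketch below illustrates the mitigation a feature platform automates, defining each feature transformation once and reusing it in both paths. All names here are hypothetical, intended only to show the pattern, not Tecton's API:

```python
from datetime import datetime, timedelta, timezone

def transaction_velocity(events: list[dict], now: datetime) -> dict:
    """Single definition of the feature: count of transactions in the
    trailing 30 minutes. Used verbatim for both training and serving."""
    window_start = now - timedelta(minutes=30)
    return {"txn_count_30m": sum(1 for e in events if e["ts"] >= window_start)}

def build_training_row(events: list[dict], label_ts: datetime, label: int) -> dict:
    # Offline: replay historical events as of each label timestamp to build
    # point-in-time-correct training rows.
    return {**transaction_velocity(events, now=label_ts), "label": label}

def serve_features(live_events: list[dict]) -> dict:
    # Online: call the exact same function against the live event stream,
    # so the model sees identically computed features at inference time.
    return transaction_velocity(live_events, now=datetime.now(timezone.utc))
```

Because both paths call the same function, any change to the feature definition propagates to training and serving together instead of drifting apart.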
Feb 2, 2023 • 1h 6min

How Shopify Built A Machine Learning Platform That Encourages Experimentation

Summary

Shopify uses machine learning to power multiple features in their platform. In order to reduce the amount of effort required to develop and deploy models they have invested in building an opinionated platform for their engineers. They have gone through multiple iterations of the platform, and their most recent version is called Merlin. In this episode Isaac Vidas shares the use cases that they are optimizing for, how it integrates into the rest of their data platform, and how they have designed it to let machine learning engineers experiment freely and safely. (A short illustrative code sketch follows the links at the end of this entry.)

Announcements

- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Your host is Tobias Macey and today I'm interviewing Isaac Vidas about his work on the ML platform used by Shopify.

Interview

- Introduction
- How did you get involved in machine learning?
- Can you describe what Shopify is and some of the ways that you are using ML at Shopify?
- What are the challenges that you have encountered as an organization in applying ML to your business needs?
- Can you describe how you have designed your current technical platform for supporting ML workloads?
- Who are the target personas for this platform?
- What does the workflow look like for a given data scientist/ML engineer/etc.?
- What are the capabilities that you are trying to optimize for in your current platform?
- What are some of the previous iterations of ML infrastructure and process that you have built?
- What are the most useful lessons that you gathered from those previous experiences that informed your current approach?
- How have the capabilities of the Merlin platform influenced the ways that ML is viewed and applied across Shopify?
- What are the most interesting, innovative, or unexpected ways that you have seen Merlin used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Merlin?
- When is Merlin the wrong choice?
- What do you have planned for the future of Merlin?

Contact Info

- @kazuaros on Twitter
- LinkedIn
- kazuar on GitHub

Parting Question

- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements

- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links

- Shopify
- Shopify Merlin
- Vertex AI
- scikit-learn
- XGBoost
- Ray
- Podcast.__init__ Episode
- PySpark
- GPT-3
- ChatGPT
- Google AI
- PyTorch
- Podcast.__init__ Episode
- Dask
- Modin
- Podcast.__init__ Episode
- Flink
- Data Engineering Podcast Episode
- Feast Feature Store
- Kubernetes

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
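Merlin is built on Ray (linked above). As a rough illustration of the pattern such a platform builds on, not Merlin's actual interface, here is a minimal Ray sketch that fans a hyperparameter sweep out across whatever cluster is available:

```python
import ray
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

ray.init()  # connects to an existing cluster, or starts a local one

@ray.remote
def evaluate(learning_rate: float) -> tuple[float, float]:
    """Train and score one candidate configuration on a worker."""
    X, y = load_breast_cancer(return_X_y=True)
    model = GradientBoostingClassifier(learning_rate=learning_rate)
    return learning_rate, cross_val_score(model, X, y, cv=3).mean()

# Each .remote() call returns a future immediately; Ray schedules the work.
futures = [evaluate.remote(lr) for lr in (0.01, 0.05, 0.1, 0.3)]
best_lr, best_score = max(ray.get(futures), key=lambda pair: pair[1])
print(f"best learning_rate={best_lr} accuracy={best_score:.3f}")
```

The appeal for a platform team is that the same decorated function runs unchanged on a laptop or on a Kubernetes-backed Ray cluster, which is the kind of safe experimentation the episode describes.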
Jan 24, 2023 • 59min

Applying Machine Learning To The Problem Of Bad Data At Anomalo

Summary

All data systems are subject to the "garbage in, garbage out" problem. For machine learning applications, bad data can lead to unreliable models and unpredictable results. Anomalo is a product designed to alert on bad data by applying machine learning models to various storage and processing systems. In this episode Jeremy Stanley discusses the various challenges that are involved in building useful and reliable machine learning models with unreliable data, and the interesting problems that they are solving in the process. (A short illustrative code sketch follows the links at the end of this entry.)

Announcements

- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Your host is Tobias Macey and today I'm interviewing Jeremy Stanley about his work at Anomalo, applying ML to the problem of data quality monitoring.

Interview

- Introduction
- How did you get involved in machine learning?
- Can you describe what Anomalo is and the story behind it?
- What are some of the ML approaches that you are using to address challenges with data quality/observability?
- What are some of the difficulties posed by your application of ML technologies on data sets that you don't control?
- How does the scale and quality of data that you are working with influence/constrain the algorithmic approaches that you are using to build and train your models?
- How have you implemented the infrastructure and workflows that you are using to support your ML applications?
- What are some of the ways that you are addressing data quality challenges in your own platform?
- What are the opportunities that you have for dogfooding your product?
- What are the most interesting, innovative, or unexpected ways that you have seen Anomalo used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Anomalo?
- When is Anomalo the wrong choice?
- What do you have planned for the future of Anomalo?

Contact Info

- @jeremystan on Twitter
- LinkedIn

Parting Question

- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements

- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links

- Anomalo
- Data Engineering Podcast Episode
- Partial Differential Equations
- Neural Network
- Neural Networks For Pattern Recognition by Christopher M. Bishop (affiliate link)
- Gradient Boosted Decision Trees
- Shapley Values
- Sentry
- dbt
- Altair

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
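As a conceptual sketch of the approach discussed here (not Anomalo's implementation), one way to flag "bad data" with ML is to fit a model, such as the gradient boosted trees named in the links, to a table's historical daily metrics and alert when today's value falls far outside the model's expectation. The synthetic data and thresholds below are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Synthetic history of a daily row count with a weekly seasonal pattern.
days = np.arange(120)
row_counts = 10_000 + 500 * np.sin(days * 2 * np.pi / 7) + rng.normal(0, 200, 120)

# Features: the previous 7 days of the metric; target: today's value.
X = np.stack([row_counts[i : i + 7] for i in range(len(row_counts) - 7)])
y = row_counts[7:]

model = GradientBoostingRegressor().fit(X[:-1], y[:-1])
predicted = model.predict(X[-1:])[0]
residuals = y[:-1] - model.predict(X[:-1])
threshold = 4 * residuals.std()  # alert band; the multiplier is a tunable choice

if abs(y[-1] - predicted) > threshold:
    print(f"ALERT: row count {y[-1]:.0f} deviates from expected {predicted:.0f}")
else:
    print(f"OK: row count {y[-1]:.0f} within expected range of {predicted:.0f}")
```

The key property is that the model learns the table's normal seasonality, so the alert fires on genuine anomalies rather than on every Monday dip.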
Dec 2, 2022 • 46min

Build More Reliable Machine Learning Systems With The Dagster Orchestration Engine

Summary

Building a machine learning model one time can be done in an ad-hoc manner, but if you ever want to update it and serve it in production you need a way of repeating a complex sequence of operations. Dagster is an orchestration engine that understands the data that it is manipulating so that you can move beyond coarse task-based representations of your dependencies. In this episode Sandy Ryza explains how his background in machine learning has informed his work on the Dagster project, and the foundational principles that it is built on to allow for collaboration across data engineering and machine learning concerns. (A short illustrative code sketch follows the links at the end of this entry.)

Interview

- Introduction
- How did you get involved in machine learning?
- Can you start by sharing a definition of "orchestration" in the context of machine learning projects?
- What is your assessment of the state of the orchestration ecosystem as it pertains to ML?
- Modeling cycles and managing experiment iterations in the execution graph
- How to balance flexibility with repeatability
- What are the most interesting, innovative, or unexpected ways that you have seen orchestration implemented/applied for machine learning?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on orchestration of ML workflows?
- When is Dagster the wrong choice?
- What do you have planned for the future of ML support in Dagster?

Contact Info

- LinkedIn
- @s_ryz on Twitter
- sryza on GitHub

Parting Question

- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements

- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links

- Dagster
- Data Engineering Podcast Episode
- Cloudera
- Hadoop
- Apache Spark
- Peter Norvig
- Josh Wills
- REPL == Read Eval Print Loop
- RStudio
- Memoization
- MLFlow
- Kedro
- Data Engineering Podcast Episode
- Metaflow
- Podcast.__init__ Episode
- Kubeflow
- dbt
- Data Engineering Podcast Episode
- Airbyte
- Data Engineering Podcast Episode

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
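The "understands the data it is manipulating" point is concrete in Dagster's asset-based style, where pipeline nodes are declared as data assets and dependencies are inferred from function parameters. A minimal sketch (the asset names and their contents are hypothetical):

```python
from dagster import asset, materialize

@asset
def training_data():
    # In practice this would load and clean feature rows from storage.
    return [(float(x), 2.0 * x + 1.0) for x in range(100)]

@asset
def trained_model(training_data):
    # Dagster wires this asset to `training_data` by matching the parameter name.
    xs = [x for x, _ in training_data]
    ys = [y for _, y in training_data]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in training_data) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    return slope  # ~2.0 for this synthetic data

if __name__ == "__main__":
    result = materialize([training_data, trained_model])
    print(result.success)
```

Because the graph is expressed in terms of the data products rather than opaque tasks, the orchestrator can track lineage and selectively re-materialize only the assets that are stale.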
Sep 28, 2022 • 52min

Solve The Cold Start Problem For Machine Learning By Letting Humans Teach The Computer With Aitomatic

Summary

Machine learning is a data-hungry approach to problem solving. Unfortunately, many problems that would benefit from the automation provided by artificial intelligence don’t come with the troves of data needed to build a model. Christopher Nguyen and his team at Aitomatic are working to address the "cold start" problem for ML by letting humans generate models by sharing their expertise through natural language. In this episode he explains how that works, the various ways that we can start to layer machine learning capabilities on top of each other, as well as the risks involved in doing so without incorporating lessons learned in the growth of the software industry. (A short illustrative code sketch follows the links at the end of this entry.)

Announcements

- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Predibase is a low-code ML platform without low-code limits. Built on top of our open source foundations of Ludwig and Horovod, our platform allows you to train state-of-the-art ML and deep learning models on your datasets at scale. Our platform works on text, images, tabular, audio and multi-modal data using our novel compositional model architecture. We allow users to operationalize models on top of the modern data stack, through REST and PQL – an extension of SQL that puts predictive power in the hands of data practitioners. Go to themachinelearningpodcast.com/predibase today to learn more and try it out!
- Your host is Tobias Macey and today I’m interviewing Christopher Nguyen about how to address the cold start problem for ML/AI projects.

Interview

- Introduction
- How did you get involved in machine learning?
- Can you describe what the "cold start" or "small data" problem is and its impact on an organization’s ability to invest in machine learning?
- What are some examples of use cases where ML is a viable solution but there is a corresponding lack of usable data?
- How does the model design influence the data requirements to build it? (e.g. statistical model vs. deep learning, etc.)
- What are the available options for addressing a lack of data for ML?
- What are the characteristics of a given data set that make it suitable for ML use cases?
- Can you describe what you are building at Aitomatic and how it helps to address the cold start problem?
- How have the design and goals of the product changed since you first started working on it?
- What are some of the education challenges that you face when working with organizations to help them understand how to think about ML/AI investment and practical limitations?
- What are the most interesting, innovative, or unexpected ways that you have seen Aitomatic/H1st used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Aitomatic/H1st?
- When is a human/knowledge driven approach to ML development the wrong choice?
- What do you have planned for the future of Aitomatic?

Contact Info

- LinkedIn
- @pentagoniac on Twitter
- Google Scholar

Parting Question

- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements

- Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links

- Aitomatic
- Human First AI
- Knowledge First World Symposium
- Atari 800
- Cold start problem
- Scale AI
- Snorkel AI
- Podcast Episode
- Anomaly Detection
- Expert Systems
- ICML == International Conference on Machine Learning
- NIST == National Institute of Standards and Technology
- Multi-modal Model
- SVM == Support Vector Machine
- Tensorflow
- Pytorch
- Podcast.__init__ Episode
- OSS Capital
- DALL-E
The intro and outro music is from Hitman’s Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
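As a toy illustration of the knowledge-first idea discussed here (not Aitomatic's H1st API), human expertise can be encoded as a labeling rule that bootstraps a model before any labeled outcomes exist, in the spirit of the weak-supervision tools like Snorkel named in the links. The sensor scenario and thresholds are invented for the example:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Unlabeled sensor readings: temperature (C) and vibration (mm/s).
readings = np.column_stack([rng.normal(70, 15, 500), rng.normal(3, 1.5, 500)])

def expert_rule(temp: float, vibration: float) -> int:
    """A domain expert's heuristic: flag overheating or heavy vibration."""
    return int(temp > 90 or vibration > 5.5)

# Bootstrap: let the human rule label the unlabeled pool...
labels = np.array([expert_rule(t, v) for t, v in readings])

# ...then train a model that generalizes (and smooths) the rule, and can
# later be refined as real labeled outcomes accumulate.
model = LogisticRegression().fit(readings, labels)
print(model.predict_proba([[95.0, 4.0]])[0, 1])  # P(anomaly) for a hot machine
```

The learned model starts out roughly as good as the expert, which is the point: it gives the system something useful to deploy on day one, before any data-hungry approach is feasible.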
Sep 21, 2022 • 52min

Convert Your Unstructured Data To Embedding Vectors For More Efficient Machine Learning With Towhee

Summary

Data is one of the core ingredients for machine learning, but the format in which it is understandable to humans is not a useful representation for models. Embedding vectors are a way to structure data that is native to how models interpret and manipulate information. In this episode Frank Liu shares how the Towhee library simplifies the work of translating your unstructured data assets (e.g. images, audio, video, etc.) into embeddings that you can use efficiently for machine learning, and how it fits into your workflow for model development. (A short illustrative code sketch follows the links at the end of this entry.)

Announcements

- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Building good ML models is hard, but testing them properly is even harder. At Deepchecks, they built an open-source testing framework that follows best practices, ensuring that your models behave as expected. Get started quickly using their built-in library of checks for testing and validating your model’s behavior and performance, and extend it to meet your specific needs as your model evolves. Accelerate your machine learning projects by building trust in your models and automating the testing that you used to do manually. Go to themachinelearningpodcast.com/deepchecks today to get started!
- Your host is Tobias Macey and today I’m interviewing Frank Liu about how to use vector embeddings in your ML projects and how Towhee can reduce the effort involved.

Interview

- Introduction
- How did you get involved in machine learning?
- Can you describe what Towhee is and the story behind it?
- What is the problem that Towhee is aimed at solving?
- What are the elements of generating vector embeddings that pose the greatest challenge or require the most effort?
- Once you have an embedding, what are some of the ways that it might be used in a machine learning project?
- Are there any design considerations that need to be addressed in the form that an embedding takes and how it impacts the resultant model that relies on it? (whether for training or inference)
- Can you describe how the Towhee framework is implemented?
- What are some of the interesting engineering challenges that needed to be addressed?
- How have the design/goals/scope of the project shifted since it began?
- What is the workflow for someone using Towhee in the context of an ML project?
- What are some of the types of optimizations that you have incorporated into Towhee?
- What are some of the scaling considerations that users need to be aware of as they increase the volume or complexity of data that they are processing?
- What are some of the ways that using Towhee impacts the way a data scientist or ML engineer approaches the design and development of their model code?
- What are the interfaces available for integrating with and extending Towhee?
- What are the most interesting, innovative, or unexpected ways that you have seen Towhee used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Towhee?
- When is Towhee the wrong choice?
- What do you have planned for the future of Towhee?

Contact Info

- LinkedIn
- fzliu on GitHub
- Website
- @frankzliu on Twitter

Parting Question

- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements

- Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links

- Towhee
- Zilliz
- Milvus
- Data Engineering Podcast Episode
- Computer Vision
- Tensor
- Autoencoder
- Latent Space
- Diffusion Model
- HSL == Hue, Saturation, Lightness
- Weights and Biases
The intro and outro music is from Hitman’s Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
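To make the idea concrete, here is one common way to turn an image into an embedding vector: run it through a pretrained torchvision model with the classifier head removed. A pipeline library like Towhee wraps this kind of boilerplate, but the code below is a generic sketch rather than Towhee's own API, and the input filename is hypothetical:

```python
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.fc = torch.nn.Identity()  # drop the 1000-class head -> 2048-d output
model.eval()

preprocess = weights.transforms()  # the resize/crop/normalize the model expects

image = Image.open("example.jpg").convert("RGB")  # hypothetical input file
with torch.no_grad():
    embedding = model(preprocess(image).unsqueeze(0)).squeeze(0)

print(embedding.shape)  # torch.Size([2048]) -- ready for a vector DB like Milvus
```

Two semantically similar images land near each other in this 2048-dimensional space, which is what makes the vectors useful for similarity search and downstream models alike.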
Sep 14, 2022 • 1h 3min

Shedding Light On Silent Model Failures With NannyML

Summary

Because machine learning models are constantly interacting with inputs from the real world they are subject to a wide variety of failures. The most commonly discussed error condition is concept drift, but there are numerous other ways that things can go wrong. In this episode Wojtek Kuberski explains how NannyML is designed to compare the predicted performance of your model against its actual behavior to identify silent failures, and to provide context that lets you determine whether and how urgently to address them. (A short illustrative code sketch follows the links at the end of this entry.)

Announcements

- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Data powers machine learning, but poor data quality is the largest impediment to effective ML today. Galileo is a collaborative data bench for data scientists building Natural Language Processing (NLP) models to programmatically inspect, fix and track their data across the ML workflow (pre-training, post-training and post-production) – no more Excel sheets or ad-hoc Python scripts. Get meaningful gains in your model performance fast, dramatically reduce data labeling and procurement costs, while seeing 10x faster ML iterations. Galileo is offering listeners a free 30 day trial and a 30% discount on the product thereafter. This offer is available until Aug 31, so go to themachinelearningpodcast.com/galileo and request a demo today!
- Your host is Tobias Macey and today I’m interviewing Wojtek Kuberski about NannyML and the work involved in post-deployment data science.

Interview

- Introduction
- How did you get involved in machine learning?
- Can you describe what NannyML is and the story behind it?
- What is "post-deployment data science"? How does it differ from the metrics/monitoring approach to managing the model lifecycle?
- Who is typically responsible for this work? How does NannyML augment their skills?
- What are some of your experiences with model failure that motivated you to spend your time and focus on this problem?
- What are the main contributing factors to alert fatigue for ML systems?
- What are some of the ways that a model can fail silently? How does NannyML detect those conditions?
- What are the remediation actions that might be necessary once an issue is detected in a model?
- Can you describe how NannyML is implemented?
- What are some of the technical and UX design problems that you have had to address?
- What are some of the ideas/assumptions that you have had to re-evaluate in the process of building NannyML?
- What additional capabilities are necessary for supporting less structured data?
- Can you describe what is involved in setting up NannyML and how it fits into an ML engineer’s workflow?
- Once a model is deployed, what additional outputs/data can/should be collected to improve the utility of NannyML and feed into analysis of the real-world operation?
- What are the most interesting, innovative, or unexpected ways that you have seen NannyML used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on NannyML?
- When is NannyML the wrong choice?
- What do you have planned for the future of NannyML?

Contact Info

- LinkedIn

Parting Question

- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements

- Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links

- NannyML
- F1 Score
- ROC Curve
- Concept Drift
- A/B Testing
- Jupyter Notebook
- Vector Embedding
- Airflow
- EDA == Exploratory Data Analysis
- Inspired book (affiliate link)
- ZenML
The intro and outro music is from Hitman’s Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
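A conceptual sketch of the core trick behind comparing predicted against actual performance before ground-truth labels arrive (NannyML's real implementation is more involved than this): if a binary classifier is reasonably calibrated, its own confidence scores imply an expected accuracy, and a gap between that estimate and the realized accuracy once labels land is a signal of silent failure. The simulated scores below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
proba = rng.uniform(0.05, 0.95, size=5_000)  # model scores observed in production
y_pred = (proba >= 0.5).astype(int)

# If the scores are calibrated, each prediction is correct with probability
# max(p, 1 - p); averaging those gives the expected accuracy with no labels.
estimated_accuracy = np.maximum(proba, 1 - proba).mean()

# Later, ground truth arrives; here we simulate perfectly calibrated outcomes.
y_true = rng.binomial(1, proba)
realized_accuracy = (y_pred == y_true).mean()

print(f"estimated={estimated_accuracy:.3f} realized={realized_accuracy:.3f}")
# A persistent gap between the two numbers would warrant investigation.
```

Because the estimate needs only the model's scores, it can be computed continuously in production, long before labels (if they ever arrive) make a direct accuracy calculation possible.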
Sep 10, 2022 • 54min

How To Design And Build Machine Learning Systems For Reasonable Scale

Summary

Using machine learning in production requires a sophisticated set of cooperating technologies. A majority of the resources that are available for understanding how to design and operate these platforms focus on either simple examples that don’t scale, or over-engineered technologies designed for the massive scale of big tech companies. In this episode Jacopo Tagliabue shares his vision for "ML at reasonable scale" and how you can adopt these patterns for building your own platforms. (A short illustrative code sketch follows the links at the end of this entry.)

Announcements

- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Do you wish you could use artificial intelligence to drive your business the way Big Tech does, but don’t have a money printer? Graft is a cloud-native platform that aims to make the AI of the 1% accessible to the 99%. Wield the most advanced techniques for unlocking the value of data, including text, images, video, audio, and graphs. No machine learning skills required, no team to hire, and no infrastructure to build or maintain. For more information on Graft or to schedule a demo, visit themachinelearningpodcast.com/graft today and tell them Tobias sent you.
- Your host is Tobias Macey and today I’m interviewing Jacopo Tagliabue about building "reasonable scale" ML systems.

Interview

- Introduction
- How did you get involved in machine learning?
- How would you describe the current state of the ecosystem for ML practitioners? (e.g. tool selection, availability of information/tutorials, etc.)
- What are some of the notable changes that you have seen over the past 2 – 5 years?
- How have the evolutions in the data engineering space been reflected in/influenced the way that ML is being done?
- What are the challenges/points of friction that ML practitioners have to contend with when trying to get a model into production that isn’t just a toy?
- You wrote a set of tutorials and accompanying code about performing ML at "reasonable scale". What are you aiming to represent with that phrasing?
- There is a paradox of choice for any newcomer to ML. What are some of the key capabilities that practitioners should use in their decision rubric when designing a "reasonable scale" system?
- What are some of the common bottlenecks that crop up when moving from an initial test implementation to a scalable deployment that is serving customer traffic?
- How much of an impact does the type of ML problem being addressed have on the deployment and scalability elements of the system design? (e.g. NLP vs. computer vision vs. recommender system, etc.)
- What are some of the misleading pieces of advice that you have seen from "big tech" tutorials about how to do ML that are unnecessary when running at smaller scales?
- You also spend some time discussing the benefits of a "NoOps" approach to ML deployment. At what point do operations/infrastructure engineers need to get involved?
- What are the operational aspects of ML applications that infrastructure engineers working in product teams might be unprepared for?
- What are the most interesting, innovative, or unexpected system designs that you have seen for moderate scale MLOps?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on ML system design and implementation?
- What are the aspects of ML systems design that you are paying attention to in the current ecosystem?
- What advice do you have for additional references or research that ML practitioners would benefit from when designing their own production systems?

Contact Info

- jacopotagliabue on GitHub
- Website
- LinkedIn

Parting Question

- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements

- Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links

- The Post-Modern Stack: ML At Reasonable Scale
- Coveo
- NLP == Natural Language Processing
- RecList
- Part of speech tagging
- Markov Model
- YDNABB (You Don’t Need A Bigger Boat)
- dbt
- Data Engineering Podcast Episode
- Seldon
- Metaflow
- Podcast.__init__ Episode
- Snowflake
- Information Retrieval
- Modern Data Stack
- SQLite
- Spark SQL
- AWS Athena
- Keras
- PyTorch
- Luigi
- Airflow
- Flask
- AWS Fargate
- AWS Sagemaker
- Recommendations At Reasonable Scale
- Pinecone
- Data Engineering Podcast Episode
- Redis
- KNN == K-Nearest Neighbors
- Pinterest Engineering Blog
- Materialize
- OpenAI
The intro and outro music is from Hitman’s Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
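Metaflow (linked above) is one of the pillars of the "reasonable scale" stack described in the episode. A minimal sketch of its flow style follows; the step contents are hypothetical stand-ins, and the point is that the same file runs locally with `python flow.py run` or on managed cloud compute without code changes:

```python
from metaflow import FlowSpec, step

class ReasonableScaleFlow(FlowSpec):
    """A tiny train/evaluate pipeline in Metaflow's flow-of-steps style."""

    @step
    def start(self):
        # Stand-in for pulling features from a warehouse (the episode's
        # stack would use dbt + Snowflake here).
        self.rows = [{"clicks": c, "bought": c > 5} for c in range(20)]
        self.next(self.train)

    @step
    def train(self):
        # A deliberately tiny "model": pick the click threshold that best
        # separates buyers. Artifacts on `self` persist between steps.
        def accuracy(t):
            return sum((r["clicks"] > t) == r["bought"] for r in self.rows) / len(self.rows)
        self.threshold = max(range(1, 15), key=accuracy)
        self.next(self.end)

    @step
    def end(self):
        print(f"learned threshold: {self.threshold}")

if __name__ == "__main__":
    ReasonableScaleFlow()
```

The NoOps appeal is that versioning, artifact storage, and retries come from the framework, so a small team gets production hygiene without building platform infrastructure.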
Sep 9, 2022 • 59min

Building A Business Powered By Machine Learning At Assembly AI

Summary

The increasing sophistication of machine learning has enabled dramatic transformations of businesses and introduced new product categories. At Assembly AI they are offering advanced speech recognition and natural language models as an API service. In this episode founder Dylan Fox discusses the unique challenges of building a business with machine learning as the core product. (A short illustrative code sketch follows the links at the end of this entry.)

Announcements

- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Predibase is a low-code ML platform without low-code limits. Built on top of our open source foundations of Ludwig and Horovod, our platform allows you to train state-of-the-art ML and deep learning models on your datasets at scale. Our platform works on text, images, tabular, audio and multi-modal data using our novel compositional model architecture. We allow users to operationalize models on top of the modern data stack, through REST and PQL – an extension of SQL that puts predictive power in the hands of data practitioners. Go to themachinelearningpodcast.com/predibase today to learn more and try it out!
- Your host is Tobias Macey and today I’m interviewing Dylan Fox about building and growing a business with ML as its core offering.

Interview

- Introduction
- How did you get involved in machine learning?
- Can you describe what Assembly is and the story behind it?
- For anyone who isn’t familiar with your platform, can you describe the role that ML/AI plays in your product?
- What was your process for going from idea to prototype for an AI powered business?
- Can you offer parallels between your own experience and that of your peers who are building businesses oriented more toward pure software applications?
- How are you structuring your teams?
- On the path to your current scale and capabilities how have you managed scoping of your model capabilities and operational scale to avoid getting bogged down or burnt out?
- How do you think about scoping of model functionality to balance composability and system complexity?
- What is your process for identifying and understanding which problems are suited to ML and when to rely on pure software?
- You are constantly iterating on model performance and introducing new capabilities. How do you manage prototyping and experimentation cycles?
- What are the metrics that you track to identify whether and when to move from an experimental to an operational state with a model?
- What is your process for understanding what’s possible and what can feasibly operate at scale?
- Can you describe your overall operational patterns and delivery process for ML?
- What are some of the most useful investments in tooling that you have made to manage development experience for your teams?
- Once you have a model in operation, how do you manage performance tuning? (from both a model and an operational scalability perspective)
- What are the most interesting, innovative, or unexpected aspects of ML development and maintenance that you have encountered while building and growing the Assembly platform?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Assembly?
- When is ML the wrong choice?
- What do you have planned for the future of Assembly?

Contact Info

- @YouveGotFox on Twitter
- LinkedIn

Parting Question

- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements

- Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links

- Assembly AI
- Podcast.__init__ Episode
- Learn Python the Hard Way
- NLTK
- NLP == Natural Language Processing
- NLU == Natural Language Understanding
- Speech Recognition
- Tensorflow
- r/machinelearning
- SciPy
- PyTorch
- Jax
- HuggingFace
- RNN == Recurrent Neural Network
- CNN == Convolutional Neural Network
- LSTM == Long Short Term Memory
- Hidden Markov Models
- Baidu DeepSpeech
- CTC (Connectionist Temporal Classification) Loss Model
- Twilio
- Grid Search
- K80 GPU
- A100 GPU
- TPU == Tensor Processing Unit
- Foundation Models
- BLOOM Language Model
- DALL-E 2
The intro and outro music is from Hitman’s Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
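The links above mention the CTC loss that DeepSpeech-style speech recognition models train against: the acoustic model emits per-timestep character probabilities, and CTC marginalizes over every possible alignment of a shorter target transcript to those frames. A minimal PyTorch sketch on dummy tensors (the shapes and alphabet size are illustrative assumptions):

```python
import torch

T, N, C = 50, 4, 29  # time steps, batch size, alphabet size (incl. blank at index 0)
S = 12               # length of each target transcript

# Acoustic model output: log-probabilities over characters at each frame.
log_probs = torch.randn(T, N, C).log_softmax(dim=2)

# Target transcripts as class indices; index 0 is reserved for the CTC blank.
targets = torch.randint(1, C, (N, S), dtype=torch.long)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

ctc = torch.nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
print(loss.item())  # scalar loss; no frame-level alignment labels were needed
```

The practical consequence, and part of what made API-grade speech recognition feasible, is that training data only needs audio paired with transcripts, not hand-aligned timestamps for every character.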
Aug 26, 2022 • 1h 15min

Update Your Model's View Of The World In Real Time With Streaming Machine Learning Using River

Summary

The majority of machine learning projects that you read about or work on are built around batch processes. The model is trained, then validated, then deployed, with each step being a discrete and isolated task. Unfortunately, the real world is rarely static, leading to concept drift and model failures. River is a framework for building streaming machine learning projects that can constantly adapt to new information. In this episode Max Halford explains how the project works, why you might (or might not) want to consider streaming ML, and how to get started building with River. (A short illustrative code sketch follows the links at the end of this entry.)

Announcements

- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Building good ML models is hard, but testing them properly is even harder. At Deepchecks, they built an open-source testing framework that follows best practices, ensuring that your models behave as expected. Get started quickly using their built-in library of checks for testing and validating your model’s behavior and performance, and extend it to meet your specific needs as your model evolves. Accelerate your machine learning projects by building trust in your models and automating the testing that you used to do manually. Go to themachinelearningpodcast.com/deepchecks today to get started!
- Your host is Tobias Macey and today I’m interviewing Max Halford about River, a Python toolkit for streaming and online machine learning.

Interview

- Introduction
- How did you get involved in machine learning?
- Can you describe what River is and the story behind it?
- What is "online" machine learning? What are the practical differences with batch ML?
- Why is batch learning so predominant?
- What are the cases where someone would want/need to use online or streaming ML?
- The prevailing pattern for batch ML model lifecycles is to train, deploy, monitor, repeat. What does the ongoing maintenance for a streaming ML model look like?
- Concept drift is typically due to a discrepancy between the data used to train a model and the actual data being observed. How does the use of online learning affect the incidence of drift?
- Can you describe how the River framework is implemented? How have the design and goals of the project changed since you started working on it?
- How do the internal representations of the model differ from batch learning to allow for incremental updates to the model state?
- In the documentation you note the use of Python dictionaries for state management and the flexibility offered by that choice. What are the benefits and potential pitfalls of that decision?
- Can you describe the process of using River to design, implement, and validate a streaming ML model?
- What are the operational requirements for deploying and serving the model once it has been developed?
- What are some of the challenges that users of River might run into if they are coming from a batch learning background?
- What are the most interesting, innovative, or unexpected ways that you have seen River used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on River?
- When is River the wrong choice?
- What do you have planned for the future of River?

Contact Info

- Email
- @halford_max on Twitter
- MaxHalford on GitHub

Parting Question

- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements

- Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links

- River
- scikit-multiflow
- Federated Machine Learning
- Hogwild! Google Paper
- Chip Huyen concept drift blog post
- Dan Crenshaw Berkeley Clipper MLOps
- Robustness Principle
- NY Taxi Dataset
- RiverTorch
- River Public Roadmap
- Beaver tool for deploying online models
- Prodigy ML human in the loop labeling
The intro and outro music is from Hitman’s Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
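River's quickstart pattern makes the batch-versus-online contrast concrete: the model predicts on each incoming example before learning from it (prequential evaluation), so the running metric reflects live performance rather than a held-out test set. A sketch based on River's documented API; exact numbers vary by library version:

```python
from river import compose, datasets, linear_model, metrics, preprocessing

# A pipeline whose internal statistics (e.g. the scaler's running means)
# update incrementally, one example at a time.
model = compose.Pipeline(
    preprocessing.StandardScaler(),
    linear_model.LogisticRegression(),
)
metric = metrics.Accuracy()

for x, y in datasets.Phishing():   # a stream of (feature dict, label) pairs
    y_pred = model.predict_one(x)  # test on the example first...
    metric.update(y, y_pred)
    model.learn_one(x, y)          # ...then train on it, one row at a time

print(metric)  # running accuracy over the whole stream
```

Because the model never stops learning, the retrain-and-redeploy cycle of batch ML largely disappears; the trade-off, discussed in the episode, is that serving infrastructure now has to persist and protect mutable model state.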
