AI Engineering Podcast

Tobias Macey
Sep 11, 2023 • 50min

Applying Federated Machine Learning To Sensitive Healthcare Data At Rhino Health

Summary
A core challenge of machine learning systems is getting access to quality data. This often means centralizing information in a single system, but that is impractical in highly regulated industries, such as healthcare. To address this hurdle Rhino Health is building a platform for federated learning on health data, so that everyone can maintain data privacy while benefiting from AI capabilities. In this episode Ittai Dayan explains the barriers to ML in healthcare and how they have designed the Rhino platform to overcome them.

Announcements
- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Your host is Tobias Macey and today I'm interviewing Ittai Dayan about using federated learning at Rhino Health to bring AI capabilities to the tightly regulated healthcare industry.

Interview
- Introduction
- How did you get involved in machine learning?
- Can you describe what Rhino Health is and the story behind it?
- What is federated learning and what are the trade-offs that it introduces?
- What are the benefits to healthcare and pharmaceutical organizations from using federated learning?
- What are some of the challenges that you face in validating that patient data is properly de-identified in the federated models?
- Can you describe what the Rhino Health platform offers and how it is implemented?
- How have the design and goals of the system changed since you started working on it?
- What are the technological capabilities that are needed for an organization to be able to start using Rhino Health to gain insights into their patient and clinical data?
- How have you approached the design of your product to reduce the effort to onboard new customers and solutions?
- What are some examples of the types of automation that you are able to provide to your customers? (e.g. medical diagnosis, radiology review, health outcome predictions, etc.)
- What are the ethical and regulatory challenges that you have had to address in the development of your platform?
- What are the most interesting, innovative, or unexpected ways that you have seen Rhino Health used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Rhino Health?
- When is Rhino Health the wrong choice?
- What do you have planned for the future of Rhino Health?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links
- Rhino Health
- Federated Learning
- Nvidia Clara
- Nvidia DGX
- Melloddy
- Flair NLP

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
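As background for the discussion, the core aggregation step of federated averaging (FedAvg, one common federated learning algorithm; the weight vectors and dataset sizes below are invented for illustration, not from the episode) can be sketched in Python:

```python
def federated_average(site_weights, site_sizes):
    """Aggregate model weight vectors trained at separate sites,
    weighting each site by its local dataset size, so raw patient
    records never leave the institution."""
    total = sum(site_sizes)
    merged = [0.0] * len(site_weights[0])
    for weights, size in zip(site_weights, site_sizes):
        for i, value in enumerate(weights):
            merged[i] += value * (size / total)
    return merged

# Three hypothetical hospitals share only their weight vectors:
updates = [[0.2, 0.4], [0.4, 0.8], [0.6, 1.2]]
sizes = [100, 100, 200]
print([round(v, 2) for v in federated_average(updates, sizes)])  # [0.45, 0.9]
```

Only the averaged parameters cross organizational boundaries, which is what lets each participant keep its sensitive data in place.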
Jun 17, 2023 • 43min

Using Machine Learning To Keep An Eye On The Planet

Summary
Satellite imagery has given us a new perspective on our world, but it is limited by the field of view for the cameras. Synthetic Aperture Radar (SAR) allows for collecting images through clouds and in the dark, giving us a more consistent means of collecting data. In order to identify interesting details in such a vast amount of data it is necessary to use the power of machine learning. ICEYE has a fleet of satellites continuously collecting information about our planet. In this episode Tapio Friberg shares how they are applying ML to that data set to provide useful insights about fires, floods, and other terrestrial phenomena.

Announcements
- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Your host is Tobias Macey and today I'm interviewing Tapio Friberg about building machine learning applications on top of SAR (Synthetic Aperture Radar) data to generate insights about our planet.

Interview
- Introduction
- How did you get involved in machine learning?
- Can you describe what ICEYE is and the story behind it?
- What are some of the applications of ML at ICEYE?
- What are some of the ways that SAR data poses a unique challenge to ML applications?
- What are some of the elements of the ML workflow that you are able to use "off the shelf" and where are the areas that you have had to build custom solutions?
- Can you share the structure of your engineering team and the role that the ML function plays in the larger organization?
- What does the end-to-end workflow for your ML model development and deployment look like?
- What are the operational requirements for your models? (e.g. batch execution, real-time, interactive inference, etc.)
- In the model definitions, what are the elements of the source domain that create the largest challenges? (e.g. noise from backscatter, variance in resolution, etc.)
- Once you have an output from an ML model how do you manage mapping between data domains to reflect insights from SAR sources onto a human-understandable representation?
- Given that SAR data and earth imaging is still a very niche domain, how does that influence your ability to hire for open positions and the ways that you think about your contributions to the overall ML ecosystem?
- How can your work on using SAR as a representation of physical attributes help to improve capabilities in e.g. LIDAR, computer vision, etc.?
- What are the most interesting, innovative, or unexpected ways that you have seen ICEYE and SAR data used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on ML for SAR data?
- What do you have planned for the future of ML applications at ICEYE?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links
- ICEYE
- SAR == Synthetic Aperture Radar
- Transfer Learning

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
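To illustrate the backscatter-noise challenge mentioned in the questions above: a classic pre-processing step for SAR imagery is multi-looking, which averages neighboring pixels to suppress speckle noise at the cost of spatial resolution. A minimal sketch (the pixel values are invented; ICEYE's actual processing pipeline is not described in the episode):

```python
def multilook(image, window=2):
    """Reduce SAR speckle by averaging non-overlapping window x window
    pixel blocks, producing a smaller but less noisy image."""
    rows = len(image) // window
    cols = len(image[0]) // window
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            block = [
                image[r * window + i][c * window + j]
                for i in range(window)
                for j in range(window)
            ]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

noisy = [[10, 14, 2, 6],
         [12, 8, 4, 8],
         [9, 11, 30, 26],
         [13, 7, 28, 32]]
print(multilook(noisy))  # [[11.0, 5.0], [10.0, 29.0]]
```

The trade-off between noise and resolution is one reason off-the-shelf computer-vision models often need adaptation before they work well on radar data.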
May 29, 2023 • 47min

The Role Of Model Development In Machine Learning Systems

Josh Tobin discusses the shift in focus from individual model development to complete machine learning systems. He covers how modeling's role in the ML ecosystem has evolved, the capabilities of Gantry for enhancing model performance and maintenance, its core features and flexible support for ML workflows, innovative approaches and challenges in building and deploying models, and when to choose Gantry for model development and maintenance.
Mar 9, 2023 • 35min

Real-Time Machine Learning Has Entered The Realm Of The Possible

Summary
Machine learning models have predominantly been built and updated in a batch modality. While this is operationally simpler, it doesn't always provide the best experience or capabilities for end users of the model. Tecton has been investing in the infrastructure and workflows that enable building and updating ML models with real-time data to allow you to react to real-world events as they happen. In this episode CTO Kevin Stumpf explores the benefits of real-time machine learning and the systems that are necessary to support the development and maintenance of those models.

Announcements
- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Your host is Tobias Macey and today I'm interviewing Kevin Stumpf about the challenges and promise of real-time ML applications.

Interview
- Introduction
- How did you get involved in machine learning?
- Can you describe what real-time ML is and some examples of where it might be applied?
- What are the operational and organizational requirements for being able to adopt real-time approaches for ML projects?
- What are some of the ways that real-time requirements influence the scale/scope/architecture of an ML model?
- What are some of the failure modes for real-time vs. analytical or operational ML?
- Given the low latency between source/input data being generated or received and a prediction being generated, how does that influence susceptibility to e.g. data drift? Data quality and accuracy also become more critical. What are some of the validation strategies that teams need to consider as they move to real-time?
- What are the most interesting, innovative, or unexpected ways that you have seen real-time ML applied?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on real-time ML systems?
- When is real-time the wrong choice for ML?
- What do you have planned for the future of real-time support for ML in Tecton?

Contact Info
- LinkedIn
- @kevinmstumpf on Twitter

Parting Question
- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links
- Tecton
- Podcast Episode
- Data Engineering Podcast Episode
- Uber Michelangelo
- Reinforcement Learning
- Online Learning
- Random Forest
- ChatGPT
- XGBoost
- Linear Regression
- Train-Serve Skew
- Flink
- Data Engineering Podcast Episode

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
Feb 2, 2023 • 1h 6min

How Shopify Built A Machine Learning Platform That Encourages Experimentation

Summary
Shopify uses machine learning to power multiple features in their platform. In order to reduce the amount of effort required to develop and deploy models they have invested in building an opinionated platform for their engineers. They have gone through multiple iterations of the platform and their most recent version is called Merlin. In this episode Isaac Vidas shares the use cases that they are optimizing for, how it integrates into the rest of their data platform, and how they have designed it to let machine learning engineers experiment freely and safely.

Announcements
- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Your host is Tobias Macey and today I'm interviewing Isaac Vidas about his work on the ML platform used by Shopify.

Interview
- Introduction
- How did you get involved in machine learning?
- Can you describe what Shopify is and some of the ways that you are using ML at Shopify?
- What are the challenges that you have encountered as an organization in applying ML to your business needs?
- Can you describe how you have designed your current technical platform for supporting ML workloads?
- Who are the target personas for this platform?
- What does the workflow look like for a given data scientist/ML engineer/etc.?
- What are the capabilities that you are trying to optimize for in your current platform?
- What are some of the previous iterations of ML infrastructure and process that you have built?
- What are the most useful lessons that you gathered from those previous experiences that informed your current approach?
- How have the capabilities of the Merlin platform influenced the ways that ML is viewed and applied across Shopify?
- What are the most interesting, innovative, or unexpected ways that you have seen Merlin used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Merlin?
- When is Merlin the wrong choice?
- What do you have planned for the future of Merlin?

Contact Info
- @kazuaros on Twitter
- LinkedIn
- kazuar on GitHub

Parting Question
- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links
- Shopify
- Shopify Merlin
- Vertex AI
- scikit-learn
- XGBoost
- Ray
- Podcast.__init__ Episode
- PySpark
- GPT-3
- ChatGPT
- Google AI
- PyTorch
- Podcast.__init__ Episode
- Dask
- Modin
- Podcast.__init__ Episode
- Flink
- Data Engineering Podcast Episode
- Feast Feature Store
- Kubernetes

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
Jan 24, 2023 • 59min

Applying Machine Learning To The Problem Of Bad Data At Anomalo

Summary
All data systems are subject to the "garbage in, garbage out" problem. For machine learning applications bad data can lead to unreliable models and unpredictable results. Anomalo is a product designed to alert on bad data by applying machine learning models to various storage and processing systems. In this episode Jeremy Stanley discusses the various challenges that are involved in building useful and reliable machine learning models with unreliable data and the interesting problems that they are solving in the process.

Announcements
- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Your host is Tobias Macey and today I'm interviewing Jeremy Stanley about his work at Anomalo, applying ML to the problem of data quality monitoring.

Interview
- Introduction
- How did you get involved in machine learning?
- Can you describe what Anomalo is and the story behind it?
- What are some of the ML approaches that you are using to address challenges with data quality/observability?
- What are some of the difficulties posed by your application of ML technologies on data sets that you don't control?
- How does the scale and quality of data that you are working with influence/constrain the algorithmic approaches that you are using to build and train your models?
- How have you implemented the infrastructure and workflows that you are using to support your ML applications?
- What are some of the ways that you are addressing data quality challenges in your own platform?
- What are the opportunities that you have for dogfooding your product?
- What are the most interesting, innovative, or unexpected ways that you have seen Anomalo used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Anomalo?
- When is Anomalo the wrong choice?
- What do you have planned for the future of Anomalo?

Contact Info
- @jeremystan on Twitter
- LinkedIn

Parting Question
- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links
- Anomalo
- Data Engineering Podcast Episode
- Partial Differential Equations
- Neural Network
- Neural Networks For Pattern Recognition by Christopher M. Bishop (affiliate link)
- Gradient Boosted Decision Trees
- Shapley Values
- Sentry
- dbt
- Altair

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
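To make the idea of automated data quality checks concrete, here is a deliberately simplified check that flags a daily table load whose row count deviates sharply from recent history (a plain z-score rule with invented numbers; Anomalo's learned checks are far more sophisticated and are not shown here):

```python
import statistics

def flag_row_count(history, today, threshold=3.0):
    """Flag a table load whose row count sits more than `threshold`
    standard deviations away from the recent daily history."""
    mean = statistics.mean(history)
    spread = statistics.stdev(history)
    z = (today - mean) / spread
    return abs(z) > threshold

history = [1000, 1020, 980, 1010, 990, 1005, 995]
print(flag_row_count(history, 400))   # True: a sudden drop in volume
print(flag_row_count(history, 1008))  # False: within normal variation
```

The appeal of learning the thresholds from the data itself, rather than hand-coding rules like this, is that checks adapt to each table's own seasonality and scale.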
Dec 2, 2022 • 46min

Build More Reliable Machine Learning Systems With The Dagster Orchestration Engine

Summary
Building a machine learning model one time can be done in an ad-hoc manner, but if you ever want to update it and serve it in production you need a way of repeating a complex sequence of operations. Dagster is an orchestration engine that understands the data that it is manipulating so that you can move beyond coarse task-based representations of your dependencies. In this episode Sandy Ryza explains how his background in machine learning has informed his work on the Dagster project and the foundational principles that it is built on to allow for collaboration across data engineering and machine learning concerns.

Interview
- Introduction
- How did you get involved in machine learning?
- Can you start by sharing a definition of "orchestration" in the context of machine learning projects?
- What is your assessment of the state of the orchestration ecosystem as it pertains to ML?
- Modeling cycles and managing experiment iterations in the execution graph
- How to balance flexibility with repeatability
- What are the most interesting, innovative, or unexpected ways that you have seen orchestration implemented/applied for machine learning?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on orchestration of ML workflows?
- When is Dagster the wrong choice?
- What do you have planned for the future of ML support in Dagster?

Contact Info
- LinkedIn
- @s_ryz on Twitter
- sryza on GitHub

Parting Question
- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links
- Dagster
- Data Engineering Podcast Episode
- Cloudera
- Hadoop
- Apache Spark
- Peter Norvig
- Josh Wills
- REPL == Read Eval Print Loop
- RStudio
- Memoization
- MLFlow
- Kedro
- Data Engineering Podcast Episode
- Metaflow
- Podcast.__init__ Episode
- Kubeflow
- dbt
- Data Engineering Podcast Episode
- Airbyte
- Data Engineering Podcast Episode

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
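To illustrate what moving "beyond coarse task-based representations" can mean, here is a toy resolver that materializes data assets through their dependency graph, memoizing each result so no asset is computed twice. This is a conceptual sketch in plain Python with invented asset names, not the Dagster API (real Dagster assets are declared with the @asset decorator):

```python
def materialize(asset, definitions, cache):
    """Compute an asset by first materializing its upstream
    dependencies, caching results so each asset runs at most once."""
    if asset not in cache:
        deps, compute = definitions[asset]
        inputs = [materialize(d, definitions, cache) for d in deps]
        cache[asset] = compute(*inputs)
    return cache[asset]

# A tiny asset graph: raw data feeds features, and both feed the
# final training input (names are invented for illustration):
definitions = {
    "raw_data": ([], lambda: [1, 2, 3, 4]),
    "features": (["raw_data"], lambda rows: [r * 2 for r in rows]),
    "training_input": (["features", "raw_data"],
                       lambda f, r: list(zip(f, r))),
}
print(materialize("training_input", definitions, {}))
# [(2, 1), (4, 2), (6, 3), (8, 4)]
```

Declaring the graph in terms of the data produced, rather than the tasks run, is what lets an orchestrator reason about staleness and partial recomputation.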
Sep 28, 2022 • 52min

Solve The Cold Start Problem For Machine Learning By Letting Humans Teach The Computer With Aitomatic

Summary
Machine learning is a data-hungry approach to problem solving. Unfortunately, there are a number of problems that would benefit from the automation provided by artificial intelligence capabilities that don't come with troves of data to build from. Christopher Nguyen and his team at Aitomatic are working to address the "cold start" problem for ML by letting humans generate models by sharing their expertise through natural language. In this episode he explains how that works, the various ways that we can start to layer machine learning capabilities on top of each other, as well as the risks involved in doing so without incorporating lessons learned in the growth of the software industry.

Announcements
- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Predibase is a low-code ML platform without low-code limits. Built on top of our open source foundations of Ludwig and Horovod, our platform allows you to train state-of-the-art ML and deep learning models on your datasets at scale. Our platform works on text, images, tabular, audio and multi-modal data using our novel compositional model architecture. We allow users to operationalize models on top of the modern data stack, through REST and PQL, an extension of SQL that puts predictive power in the hands of data practitioners. Go to themachinelearningpodcast.com/predibase today to learn more and try it out!
- Your host is Tobias Macey and today I'm interviewing Christopher Nguyen about how to address the cold start problem for ML/AI projects.

Interview
- Introduction
- How did you get involved in machine learning?
- Can you describe what the "cold start" or "small data" problem is and its impact on an organization's ability to invest in machine learning?
- What are some examples of use cases where ML is a viable solution but there is a corresponding lack of usable data?
- How does the model design influence the data requirements to build it? (e.g. statistical model vs. deep learning, etc.)
- What are the available options for addressing a lack of data for ML?
- What are the characteristics of a given data set that make it suitable for ML use cases?
- Can you describe what you are building at Aitomatic and how it helps to address the cold start problem?
- How have the design and goals of the product changed since you first started working on it?
- What are some of the education challenges that you face when working with organizations to help them understand how to think about ML/AI investment and practical limitations?
- What are the most interesting, innovative, or unexpected ways that you have seen Aitomatic/H1st used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Aitomatic/H1st?
- When is a human/knowledge driven approach to ML development the wrong choice?
- What do you have planned for the future of Aitomatic?

Contact Info
- LinkedIn
- @pentagoniac on Twitter
- Google Scholar

Parting Question
- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links
- Aitomatic
- Human First AI
- Knowledge First World Symposium
- Atari 800
- Cold start problem
- Scale AI
- Snorkel AI
- Podcast Episode
- Anomaly Detection
- Expert Systems
- ICML == International Conference on Machine Learning
- NIST == National Institute of Standards and Technology
- Multi-modal Model
- SVM == Support Vector Machine
- Tensorflow
- Pytorch
- Podcast.__init__ Episode
- OSS Capital
- DALL-E

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
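To make the "letting humans teach the computer" idea concrete, a domain expert's rule of thumb can be encoded as a labeling function that produces weak labels before any training data exists, one common way to bootstrap past the cold start. The sensor fields and thresholds below are hypothetical, not from the episode:

```python
def expert_rule(reading):
    """A domain expert's heuristic captured as a labeling function:
    high vibration combined with high temperature indicates a fault.
    Field names and thresholds are illustrative only."""
    if reading["vibration"] > 0.8 and reading["temperature"] > 90:
        return "fault"
    return "normal"

# Weak labels generated from unlabeled sensor logs can seed a
# first model before any human-labeled data exists:
logs = [
    {"vibration": 0.9, "temperature": 95},
    {"vibration": 0.2, "temperature": 70},
]
print([expert_rule(r) for r in logs])  # ['fault', 'normal']
```

As real outcomes accumulate, a learned model can gradually take over from (or blend with) the expert rules that got the system started.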
Sep 21, 2022 • 52min

Convert Your Unstructured Data To Embedding Vectors For More Efficient Machine Learning With Towhee

Summary
Data is one of the core ingredients for machine learning, but the format in which it is understandable to humans is not a useful representation for models. Embedding vectors are a way to structure data in a way that is native to how models interpret and manipulate information. In this episode Frank Liu shares how the Towhee library simplifies the work of translating your unstructured data assets (e.g. images, audio, video, etc.) into embeddings that you can use efficiently for machine learning, and how it fits into your workflow for model development.

Announcements
- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Building good ML models is hard, but testing them properly is even harder. At Deepchecks, they built an open-source testing framework that follows best practices, ensuring that your models behave as expected. Get started quickly using their built-in library of checks for testing and validating your model's behavior and performance, and extend it to meet your specific needs as your model evolves. Accelerate your machine learning projects by building trust in your models and automating the testing that you used to do manually. Go to themachinelearningpodcast.com/deepchecks today to get started!
- Your host is Tobias Macey and today I'm interviewing Frank Liu about how to use vector embeddings in your ML projects and how Towhee can reduce the effort involved.

Interview
- Introduction
- How did you get involved in machine learning?
- Can you describe what Towhee is and the story behind it?
- What is the problem that Towhee is aimed at solving?
- What are the elements of generating vector embeddings that pose the greatest challenge or require the most effort?
- Once you have an embedding, what are some of the ways that it might be used in a machine learning project?
- Are there any design considerations that need to be addressed in the form that an embedding takes and how it impacts the resultant model that relies on it? (whether for training or inference)
- Can you describe how the Towhee framework is implemented?
- What are some of the interesting engineering challenges that needed to be addressed?
- How have the design/goals/scope of the project shifted since it began?
- What is the workflow for someone using Towhee in the context of an ML project?
- What are some of the types of optimizations that you have incorporated into Towhee?
- What are some of the scaling considerations that users need to be aware of as they increase the volume or complexity of data that they are processing?
- What are some of the ways that using Towhee impacts the way a data scientist or ML engineer approaches the design and development of their model code?
- What are the interfaces available for integrating with and extending Towhee?
- What are the most interesting, innovative, or unexpected ways that you have seen Towhee used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Towhee?
- When is Towhee the wrong choice?
- What do you have planned for the future of Towhee?

Contact Info
- LinkedIn
- fzliu on GitHub
- Website
- @frankzliu on Twitter

Parting Question
- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links
- Towhee
- Zilliz
- Milvus
- Data Engineering Podcast Episode
- Computer Vision
- Tensor
- Autoencoder
- Latent Space
- Diffusion Model
- HSL == Hue, Saturation, Lightness
- Weights and Biases

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
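Once data is converted to embedding vectors, downstream tasks like similarity search reduce to vector arithmetic. A minimal sketch of nearest-neighbor lookup by cosine similarity (the vectors and item names are invented; a real pipeline would obtain embeddings from a pretrained model, for example through a Towhee operator, rather than by hand):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors: values
    near 1.0 mean the underlying items are semantically similar."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

# Invented toy embeddings standing in for model outputs:
query = [0.9, 0.1, 0.0]
catalog = {
    "cat_photo": [0.8, 0.2, 0.1],
    "invoice_scan": [0.0, 0.1, 0.9],
}
best = max(catalog, key=lambda name: cosine_similarity(query, catalog[name]))
print(best)  # cat_photo
```

At production scale this brute-force scan is replaced by an approximate-nearest-neighbor index, which is the role a vector database such as Milvus plays.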
Sep 14, 2022 • 1h 3min

Shedding Light On Silent Model Failures With NannyML

Summary
Because machine learning models are constantly interacting with inputs from the real world they are subject to a wide variety of failures. The most commonly discussed error condition is concept drift, but there are numerous other ways that things can go wrong. In this episode Wojtek Kuberski explains how NannyML is designed to compare the predicted performance of your model against its actual behavior to identify silent failures and provide context to allow you to determine whether and how urgently to address them.

Announcements
- Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
- Data powers machine learning, but poor data quality is the largest impediment to effective ML today. Galileo is a collaborative data bench for data scientists building Natural Language Processing (NLP) models to programmatically inspect, fix and track their data across the ML workflow (pre-training, post-training and post-production), no more excel sheets or ad-hoc python scripts. Get meaningful gains in your model performance fast, dramatically reduce data labeling and procurement costs, while seeing 10x faster ML iterations. Galileo is offering listeners a free 30 day trial and a 30% discount on the product thereafter. This offer is available until Aug 31, so go to themachinelearningpodcast.com/galileo and request a demo today!
- Your host is Tobias Macey and today I'm interviewing Wojtek Kuberski about NannyML and the work involved in post-deployment data science.

Interview
- Introduction
- How did you get involved in machine learning?
- Can you describe what NannyML is and the story behind it?
- What is "post-deployment data science"? How does it differ from the metrics/monitoring approach to managing the model lifecycle?
- Who is typically responsible for this work? How does NannyML augment their skills?
- What are some of your experiences with model failure that motivated you to spend your time and focus on this problem?
- What are the main contributing factors to alert fatigue for ML systems?
- What are some of the ways that a model can fail silently? How does NannyML detect those conditions?
- What are the remediation actions that might be necessary once an issue is detected in a model?
- Can you describe how NannyML is implemented?
- What are some of the technical and UX design problems that you have had to address?
- What are some of the ideas/assumptions that you have had to re-evaluate in the process of building NannyML?
- What additional capabilities are necessary for supporting less structured data?
- Can you describe what is involved in setting up NannyML and how it fits into an ML engineer's workflow?
- Once a model is deployed, what additional outputs/data can/should be collected to improve the utility of NannyML and feed into analysis of the real-world operation?
- What are the most interesting, innovative, or unexpected ways that you have seen NannyML used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on NannyML?
- When is NannyML the wrong choice?
- What do you have planned for the future of NannyML?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links
- NannyML
- F1 Score
- ROC Curve
- Concept Drift
- A/B Testing
- Jupyter Notebook
- Vector Embedding
- Airflow
- EDA == Exploratory Data Analysis
- Inspired book (affiliate link)
- ZenML

The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra/CC BY-SA 3.0
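One way a tool can estimate performance before ground-truth labels arrive is to lean on the model's own calibrated confidence, the intuition behind NannyML's confidence-based performance estimation. A deliberately simplified sketch for a binary classifier (illustrative only, with invented scores; NannyML's actual estimators handle calibration and other metrics):

```python
def estimate_accuracy(scores, threshold=0.5):
    """Estimate a binary classifier's accuracy without labels: if
    predicted probabilities are well calibrated, the model's
    confidence in each chosen class is the chance it was right."""
    confidence = [p if p >= threshold else 1 - p for p in scores]
    return sum(confidence) / len(confidence)

# Confident scores imply healthy estimated accuracy; scores
# collapsing toward 0.5 signal a silent degradation long before
# delayed ground-truth labels confirm it:
print(round(estimate_accuracy([0.95, 0.05, 0.9, 0.1]), 3))  # 0.925
print(round(estimate_accuracy([0.55, 0.45, 0.6, 0.5]), 3))  # 0.55
```

Comparing this estimated performance against the realized performance, once labels do arrive, is what surfaces the "silent" failures the episode discusses.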
