
AI Engineering Podcast

Latest episodes

Aug 16, 2022 • 1h 8min

Using AI To Transform Your Business Without The Headache Using Graft

Summary
Machine learning is a transformative tool for the organizations that can take advantage of it. While the frameworks and platforms for building machine learning applications are becoming more powerful and broadly available, there is still a significant investment of time, money, and talent required to take full advantage of them. In order to reduce that barrier further, Adam Oliner and Brian Calvert, along with their other co-founders, started Graft. In this episode Adam and Brian explain how they have built a platform designed to empower everyone in the business to take part in designing and building ML projects, while managing the end-to-end workflow required to go from data to production.

Announcements
Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
Predibase is a low-code ML platform without low-code limits. Built on top of our open source foundations of Ludwig and Horovod, our platform allows you to train state-of-the-art ML and deep learning models on your datasets at scale. Our platform works on text, images, tabular, audio and multi-modal data using our novel compositional model architecture. We allow users to operationalize models on top of the modern data stack, through REST and PQL – an extension of SQL that puts predictive power in the hands of data practitioners. Go to themachinelearningpodcast.com/predibase today to learn more and try it out!
Building good ML models is hard, but testing them properly is even harder. At Deepchecks, they built an open-source testing framework that follows best practices, ensuring that your models behave as expected. Get started quickly using their built-in library of checks for testing and validating your model’s behavior and performance, and extend it to meet your specific needs as your model evolves. Accelerate your machine learning projects by building trust in your models and automating the testing that you used to do manually. Go to themachinelearningpodcast.com/deepchecks today to get started!
Your host is Tobias Macey and today I’m interviewing Brian Calvert and Adam Oliner about Graft, a cloud-native platform designed to simplify the work of applying AI to business problems.

Interview
Introduction
How did you get involved in machine learning?
Can you describe what Graft is and the story behind it?
What is the core thesis of the problem you are targeting?
How does the Graft product address that problem?
Who are the personas that you are focused on working with, both now in your early stages and in the future as you evolve the product?
What are the capabilities that can be unlocked in different organizations by reducing the friction and up-front investment required to adopt ML/AI?
What are the user-facing interfaces that you are focused on providing to make that adoption curve as shallow as possible?
What are some of the unavoidable bits of complexity that need to be surfaced to the end user?
Can you describe the infrastructure and platform design that you are relying on for the Graft product?
What are some of the emerging "best practices" around ML/AI that you have been able to build on top of?
As new techniques and practices are discovered/introduced, how are you thinking about the adoption process and how/when to integrate them into the Graft product?
What are some of the new engineering challenges that you have had to tackle as a result of your specific product?
Machine learning can be a very data and compute intensive endeavor. How are you thinking about scalability in a multi-tenant system?
Different model and data types can be widely divergent in terms of the cost (monetary, time, compute, etc.) required. How are you thinking about amortizing vs. passing through those costs to the end user?
Can you describe the adoption/integration process for someone using Graft?
Once they are onboarded and they have connected to their various data sources, what is the workflow for someone to apply ML capabilities to their problems?
One of the challenges about the current state of ML capabilities and adoption is understanding what is possible and what is impractical. How have you designed Graft to help identify and expose opportunities for applying ML within the organization?
What are some of the challenges of customer education and overall messaging that you are working through?
What are the most interesting, innovative, or unexpected ways that you have seen Graft used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Graft?
When is Graft the wrong choice?
What do you have planned for the future of Graft?

Contact Info
Brian: LinkedIn
Adam: LinkedIn

Parting Question
From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements
Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links
Graft, High Energy Particle Physics, LHC, Cruise, Slack, Splunk, Marvin Minsky, Patrick Henry Winston, AI Winter, Sebastian Thrun, DARPA Grand Challenge, Higgs Boson, Supersymmetry, Kinematics, Transfer Learning, Foundation Models, ML Embeddings, BERT, Airflow, Dagster, Prefect, Dask, Kubeflow, MySQL, PostgreSQL, Snowflake, Redshift, S3, Kubernetes, Multi-modal models, Multi-task models, Magic: The Gathering

The intro and outro music is from "Hitman’s Lovesong feat. Paola Graziano" by The Freak Fandango Orchestra / CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
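The episode's links reference foundation models, transfer learning, and ML embeddings. As a rough illustration of what "reusing a pretrained foundation model to embed your data" looks like in practice, here is a minimal sketch assuming the Hugging Face transformers library; the model choice and mean-pooling strategy are illustrative and are not a description of how Graft itself computes embeddings.

```python
# Minimal sketch: turn raw text into fixed-length embedding vectors using a
# pretrained BERT model (an assumption for illustration, not Graft's stack).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentences = ["A support ticket about a billing error", "A feature request"]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**batch)

# Mean-pool the token embeddings (masking out padding) to get one vector per
# sentence, usable for search, clustering, or downstream classifiers.
mask = batch["attention_mask"].unsqueeze(-1)
embeddings = (outputs.last_hidden_state * mask).sum(1) / mask.sum(1)
print(embeddings.shape)  # (2, 768) for bert-base-uncased
```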
Aug 6, 2022 • 51min

Accelerate Development And Delivery Of Your Machine Learning Projects With A Comprehensive Feature Platform

Summary
In order for a machine learning model to build connections and context across the data that is fed into it, the raw data needs to be engineered into semantic features. This is a process that can be tedious and full of toil, requiring constant upkeep and often leading to rework across projects and teams. In order to reduce the amount of wasted effort and speed up experimentation and training iterations, a new generation of services is being developed. Tecton first built a feature store to serve as a central repository of engineered features and keep them up to date for training and inference. Since then they have expanded the set of tools and services to be a full-fledged feature platform. In this episode Kevin Stumpf explains the different capabilities and activities related to features that are necessary to maintain velocity in your machine learning projects.

Announcements
Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
Building good ML models is hard, but testing them properly is even harder. At Deepchecks, they built an open-source testing framework that follows best practices, ensuring that your models behave as expected. Get started quickly using their built-in library of checks for testing and validating your model’s behavior and performance, and extend it to meet your specific needs as your model evolves. Accelerate your machine learning projects by building trust in your models and automating the testing that you used to do manually. Go to themachinelearningpodcast.com/deepchecks today to get started!
Do you wish you could use artificial intelligence to drive your business the way Big Tech does, but don’t have a money printer? Graft is a cloud-native platform that aims to make the AI of the 1% accessible to the 99%. Wield the most advanced techniques for unlocking the value of data, including text, images, video, audio, and graphs. No machine learning skills required, no team to hire, and no infrastructure to build or maintain. For more information on Graft or to schedule a demo, visit themachinelearningpodcast.com/graft today and tell them Tobias sent you.
Data powers machine learning, but poor data quality is the largest impediment to effective ML today. Galileo is a collaborative data bench for data scientists building Natural Language Processing (NLP) models to programmatically inspect, fix and track their data across the ML workflow (pre-training, post-training and post-production) – no more excel sheets or ad-hoc python scripts. Get meaningful gains in your model performance fast, dramatically reduce data labeling and procurement costs, while seeing 10x faster ML iterations. Galileo is offering listeners a free 30 day trial and a 30% discount on the product thereafter. This offer is available until Aug 31, so go to themachinelearningpodcast.com/galileo and request a demo today!
Your host is Tobias Macey and today I’m interviewing Kevin Stumpf about the role of feature platforms in your ML engineering workflow.

Interview
Introduction
How did you get involved in machine learning?
Can you describe what you mean by the term "feature platform"?
What are the components and supporting capabilities that are needed for such a platform?
How does the availability of engineered features impact the ability of an organization to put ML into production?
What are the points of friction that teams encounter when trying to build and maintain ML projects in the absence of a fully integrated feature platform?
Who are the target personas for the Tecton platform?
What stages of the ML lifecycle does it address?
Can you describe how you have designed the Tecton feature platform?
How have the goals and capabilities of the product evolved since you started working on it?
What is the workflow for an ML engineer or data scientist to build and maintain features and use them in the model development workflow?
What are the responsibilities of the MLOps stack that you have intentionally decided not to address?
What are the interfaces and extension points that you offer for integrating with the other utilities needed to manage a full ML system?
You wrote a post about the need to establish a DevOps approach to ML data. In keeping with that theme, can you describe how to think about the approach to testing and validation techniques for features and their outputs?
What are the most interesting, innovative, or unexpected ways that you have seen Tecton/Feast used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Tecton?
When is Tecton the wrong choice?
What do you have planned for the future of the Tecton feature platform?

Contact Info
LinkedIn
@kevinmstumpf on Twitter
kevinstumpf on GitHub

Parting Question
From your perspective, what is the biggest barrier to adoption of machine learning today?

Links
Tecton (Data Engineering Podcast Episode), Uber Michelangelo, Feature Store, Snowflake (Data Engineering Podcast Episode), DynamoDB, Train/Serve Skew, Lambda Architecture, Redis

The intro and outro music is from "Hitman’s Lovesong feat. Paola Graziano" by The Freak Fandango Orchestra / CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0/)
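To make the feature-store idea discussed here concrete, the following is a brief sketch using the open source Feast project (which the interview mentions alongside Tecton); Tecton's own hosted SDK differs, and the repo path, feature view, and entity key below are hypothetical.

```python
# Minimal sketch of online feature retrieval from a feature store, assuming
# a Feast feature repository already defines a "driver_hourly_stats" view.
from feast import FeatureStore

store = FeatureStore(repo_path=".")

# At inference time, fetch the freshest feature values for a specific entity
# so that serving uses the same feature definitions as training.
features = store.get_online_features(
    features=[
        "driver_hourly_stats:conv_rate",
        "driver_hourly_stats:avg_daily_trips",
    ],
    entity_rows=[{"driver_id": 1001}],
).to_dict()
print(features)
```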
Jul 29, 2022 • 54min

Build Better Models Through Data Centric Machine Learning Development With Snorkel AI

The podcast discusses the challenges of data-centric machine learning development and how Snorkel AI's platform reduces the time and cost of building training datasets. They explore the concept of dark data, the complexity of working with different data types, and the limitations of Snorkel AI. The podcast also covers the transition from research to building a business, the biggest barrier to machine learning adoption, and the importance of properly handling data in enabling machine learning applications.
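As a minimal, illustrative sketch of the programmatic labeling idea behind Snorkel (not a description of Snorkel AI's commercial platform), domain heuristics are written as labeling functions and their noisy votes are combined into training labels; the example data and heuristics below are made up, and the open source snorkel package is assumed.

```python
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, NOT_SPAM, SPAM = -1, 0, 1

@labeling_function()
def lf_contains_link(x):
    # Heuristic: messages containing a URL are probably spam.
    return SPAM if "http" in x.text.lower() else ABSTAIN

@labeling_function()
def lf_short_message(x):
    # Heuristic: very short messages are probably not spam.
    return NOT_SPAM if len(x.text.split()) < 5 else ABSTAIN

df = pd.DataFrame({"text": [
    "check out http://spam.example",
    "thanks for the quick reply!",
    "win money now at http://x.example",
]})

# Apply every labeling function to every row, producing a label matrix.
L = PandasLFApplier(lfs=[lf_contains_link, lf_short_message]).apply(df)

# The label model learns how much to trust each heuristic and produces
# training labels without any hand annotation.
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train=L, n_epochs=100, seed=42)
print(label_model.predict(L))
```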
Jul 21, 2022 • 1h

Declarative Machine Learning For High Performance Deep Learning Models With Predibase

Summary
Deep learning is a revolutionary category of machine learning that accelerates our ability to build powerful inference models. Along with that power comes a great deal of complexity in determining what neural architectures are best suited to a given task, engineering features, scaling computation, etc. Predibase is building on the successes of the Ludwig framework for declarative deep learning and Horovod for horizontally distributing model training. In this episode CTO and co-founder of Predibase, Travis Addair, explains how they are reducing the burden of model development even further with their managed service for declarative and low-code ML and how they are integrating with the growing ecosystem of solutions for the full ML lifecycle.

Announcements
Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
Building good ML models is hard, but testing them properly is even harder. At Deepchecks, they built an open-source testing framework that follows best practices, ensuring that your models behave as expected. Get started quickly using their built-in library of checks for testing and validating your model’s behavior and performance, and extend it to meet your specific needs as your model evolves. Accelerate your machine learning projects by building trust in your models and automating the testing that you used to do manually. Go to themachinelearningpodcast.com/deepchecks today to get started!
Data powers machine learning, but poor data quality is the largest impediment to effective ML today. Galileo is a collaborative data bench for data scientists building Natural Language Processing (NLP) models to programmatically inspect, fix and track their data across the ML workflow (pre-training, post-training and post-production) – no more excel sheets or ad-hoc python scripts. Get meaningful gains in your model performance fast, dramatically reduce data labeling and procurement costs, while seeing 10x faster ML iterations. Galileo is offering listeners a free 30 day trial and a 30% discount on the product thereafter. This offer is available until Aug 31, so go to themachinelearningpodcast.com/galileo and request a demo today!
Do you wish you could use artificial intelligence to drive your business the way Big Tech does, but don’t have a money printer? Graft is a cloud-native platform that aims to make the AI of the 1% accessible to the 99%. Wield the most advanced techniques for unlocking the value of data, including text, images, video, audio, and graphs. No machine learning skills required, no team to hire, and no infrastructure to build or maintain. For more information on Graft or to schedule a demo, visit themachinelearningpodcast.com/graft today and tell them Tobias sent you.
Your host is Tobias Macey and today I’m interviewing Travis Addair about Predibase, a low-code platform for building ML models in a declarative format.

Interview
Introduction
How did you get involved in machine learning?
Can you describe what Predibase is and the story behind it?
Who is your target audience and how does that focus influence your user experience and feature development priorities?
How would you describe the semantic differences between your chosen terminology of "declarative ML" and the "autoML" nomenclature that many projects and products have adopted?
Another platform that launched recently with a promise of "declarative ML" is Continual. How would you characterize your relative strengths?
Can you describe how the Predibase platform is implemented?
How have the design and goals of the product changed as you worked through the initial implementation and started working with early customers?
The operational aspects of the ML lifecycle are still fairly nascent. How have you thought about the boundaries for your product to avoid getting drawn into scope creep while providing a happy path to delivery?
Ludwig is a core element of your platform. What are the other capabilities that you are layering around and on top of it to build a differentiated product?
In addition to the existing interfaces for Ludwig you created a new language in the form of PQL. What was the motivation for that decision?
How did you approach the semantic and syntactic design of the dialect?
What is your vision for PQL in the space of "declarative ML" that you are working to define?
Can you describe the available workflows for an individual or team that is using Predibase for prototyping and validating an ML model?
Once a model has been deemed satisfactory, what is the path to production?
How are you approaching governance and sustainability of Ludwig and Horovod while balancing your reliance on them in Predibase?
What are some of the notable investments/improvements that you have made in Ludwig during your work of building Predibase?
What are the most interesting, innovative, or unexpected ways that you have seen Predibase used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Predibase?
When is Predibase the wrong choice?
What do you have planned for the future of Predibase?

Contact Info
LinkedIn
tgaddair on GitHub
@travisaddair on Twitter

Parting Question
From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements
Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links
Predibase, Horovod, Ludwig (Podcast.__init__ Episode), Support Vector Machine, Hadoop, Tensorflow, Uber Michelangelo, AutoML, Spark ML Lib, Deep Learning, PyTorch, Continual (Data Engineering Podcast Episode), Overton, Kubernetes, Ray, Nvidia Triton, Whylogs (Data Engineering Podcast Episode), Weights and Biases, MLFlow, Comet, Confusion Matrices, dbt (Data Engineering Podcast Episode), Torchscript, Self-supervised Learning

The intro and outro music is from "Hitman’s Lovesong feat. Paola Graziano" by The Freak Fandango Orchestra / CC BY-SA 3.0
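To show what "declarative ML" means in practice, here is a small sketch using the open source Ludwig library that Predibase builds on: you describe the inputs and outputs and the framework assembles the model. The config, CSV columns, and file name are hypothetical, and Predibase's hosted PQL interface is not shown.

```python
# Minimal declarative training sketch with Ludwig (open source).
from ludwig.api import LudwigModel

config = {
    "input_features": [
        {"name": "review_text", "type": "text"},
        {"name": "product_category", "type": "category"},
    ],
    "output_features": [
        {"name": "sentiment", "type": "category"},
    ],
    "trainer": {"epochs": 5},
}

model = LudwigModel(config)
# Ludwig handles preprocessing, architecture selection, and training from
# the declarative config; the CSV path here is a stand-in for your data.
train_stats, _, output_dir = model.train(dataset="reviews.csv")
```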
Jul 14, 2022 • 47min

Stop Feeding Garbage Data To Your ML Models, Clean It Up With Galileo

Summary
Machine learning is a force multiplier that can generate an outsized impact on your organization. Unfortunately, if you are feeding your ML model garbage data, then you will get orders of magnitude more garbage out of it. The team behind Galileo experienced that pain for themselves and have set out to make data management and cleaning for machine learning a first class concern in your workflow. In this episode Vikram Chatterji shares the story of how Galileo got started and how you can use their platform to fix your ML data so that you can get back to the fun parts.

Announcements
Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
Predibase is a low-code ML platform without low-code limits. Built on top of our open source foundations of Ludwig and Horovod, our platform allows you to train state-of-the-art ML and deep learning models on your datasets at scale. Our platform works on text, images, tabular, audio and multi-modal data using our novel compositional model architecture. We allow users to operationalize models on top of the modern data stack, through REST and PQL – an extension of SQL that puts predictive power in the hands of data practitioners. Go to themachinelearningpodcast.com/predibase today to learn more and try it out!
Do you wish you could use artificial intelligence to drive your business the way Big Tech does, but don’t have a money printer? Graft is a cloud-native platform that aims to make the AI of the 1% accessible to the 99%. Wield the most advanced techniques for unlocking the value of data, including text, images, video, audio, and graphs. No machine learning skills required, no team to hire, and no infrastructure to build or maintain. For more information on Graft or to schedule a demo, visit themachinelearningpodcast.com/graft today and tell them Tobias sent you.
Building good ML models is hard, but testing them properly is even harder. At Deepchecks, they built an open-source testing framework that follows best practices, ensuring that your models behave as expected. Get started quickly using their built-in library of checks for testing and validating your model’s behavior and performance, and extend it to meet your specific needs as your model evolves. Accelerate your machine learning projects by building trust in your models and automating the testing that you used to do manually. Go to themachinelearningpodcast.com/deepchecks today to get started!
Your host is Tobias Macey and today I’m interviewing Vikram Chatterji about Galileo, a platform for uncovering and addressing data problems to improve your model quality.

Interview
Introduction
How did you get involved in machine learning?
Can you describe what Galileo is and the story behind it?
Who are the target users of the platform and what are the tools/workflows that you are replacing?
How does that focus inform and influence the design and prioritization of features in the platform?
What are some of the real-world impacts that you have experienced as a result of the kinds of data problems that you are addressing with Galileo?
Can you describe how the Galileo product is implemented?
What are some of the assumptions that you had formed from your own experiences that have been challenged as you worked with early design partners?
The toolchains and model architectures of any given team are unlikely to be a perfect match across departments or organizations. What are the core principles/concepts that you have hooked into in order to provide the broadest compatibility?
What are the model types/frameworks/etc. that you have had to forego support for in the early versions of your product?
Can you describe the workflow for someone building a machine learning model and how Galileo fits across the various stages of that cycle?
What are some of the biggest difficulties posed by the non-linear nature of the experimentation cycle in model development?
What are some of the ways that you work to quantify the impact of your tool on the productivity and profit contributions of an ML team/organization?
What are the most interesting, innovative, or unexpected ways that you have seen Galileo used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Galileo?
When is Galileo the wrong choice?
What do you have planned for the future of Galileo?

Contact Info
LinkedIn

Parting Question
From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements
Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links
Galileo, F1 Score, Tensorflow, Keras, SpaCy (Podcast.__init__ Episode), Pytorch (Podcast.__init__ Episode), MXNet, Jax

The intro and outro music is from "Hitman’s Lovesong feat. Paola Graziano" by The Freak Fandango Orchestra / CC BY-SA 3.0
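As a generic illustration of the kind of manual data inspection Galileo aims to replace (this is not Galileo's actual method), one common heuristic is to surface training examples where the model assigns low probability to their own label, since those are candidates for relabeling or removal. The sketch below assumes scikit-learn and uses made-up data.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["great product", "terrible support", "love it",
         "awful and broken", "great product, works well"]
labels = np.array([1, 0, 1, 0, 0])  # the last label is intentionally wrong

X = TfidfVectorizer().fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Probability the model assigns to each example's own label; unusually low
# values often point at mislabeled or ambiguous data.
proba = clf.predict_proba(X)[np.arange(len(labels)), labels]
for text, label, p in sorted(zip(texts, labels, proba), key=lambda t: t[2]):
    print(f"{p:.2f}  label={label}  {text}")
```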
Jul 6, 2022 • 49min

Build Better Machine Learning Models With Confidence By Adding Validation With Deepchecks

Learn how Deepchecks is addressing the challenges of testing and validating machine learning models with their open source library. Explore the importance of simple and deep checks in monitoring and finding unexpected issues. Discover the significance of documentation in open source projects and the need for appropriate tools and structures in data and model workflows. Hear about the challenges faced by teams in checking machine learning models and the future plans of Deepchecks.
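To give a feel for the built-in library of checks discussed in this episode, here is a brief sketch using the open source deepchecks package; it assumes the tabular sub-package, and the toy dataset and model are stand-ins rather than anything from the episode.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from deepchecks.tabular import Dataset
from deepchecks.tabular.suites import full_suite

df = pd.DataFrame({
    "age":     [22, 35, 58, 44, 29, 61, 39, 51],
    "income":  [30, 60, 90, 70, 40, 95, 65, 80],
    "churned": [0, 0, 1, 1, 0, 1, 0, 1],
})
train_df, test_df = train_test_split(df, test_size=0.25, random_state=0)

model = RandomForestClassifier(random_state=0).fit(
    train_df[["age", "income"]], train_df["churned"]
)

train_ds = Dataset(train_df, label="churned", cat_features=[])
test_ds = Dataset(test_df, label="churned", cat_features=[])

# Run the built-in suite of checks (data integrity, train/test drift, model
# performance) and write an HTML report to review any failures.
result = full_suite().run(train_dataset=train_ds, test_dataset=test_ds, model=model)
result.save_as_html("deepchecks_report.html")
```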
Jun 29, 2022 • 46min

Build A Full Stack ML Powered App In An Afternoon With Baseten

Summary
Building an ML model is getting easier than ever, but it is still a challenge to get that model in front of the people that you built it for. Baseten is a platform that helps you quickly generate a full stack application powered by your model. You can easily create a web interface and APIs powered by the model you created, or a pre-trained model from their library. In this episode Tuhin Srivastava, co-founder of Baseten, explains how the platform empowers data scientists and ML engineers to get their work in production without having to negotiate for help from their application development colleagues.

Announcements
Hello and welcome to the Machine Learning Podcast, the podcast about machine learning and how to bring it from idea to delivery.
Data powers machine learning, but poor data quality is the largest impediment to effective ML today. Galileo is a collaborative data bench for data scientists building Natural Language Processing (NLP) models to programmatically inspect, fix and track their data across the ML workflow (pre-training, post-training and post-production) – no more excel sheets or ad-hoc python scripts. Get meaningful gains in your model performance fast, dramatically reduce data labeling and procurement costs, while seeing 10x faster ML iterations. Galileo is offering listeners a free 30 day trial and a 30% discount on the product thereafter. This offer is available until Aug 31, so go to themachinelearningpodcast.com/galileo and request a demo today!
Do you wish you could use artificial intelligence to drive your business the way Big Tech does, but don’t have a money printer? Graft is a cloud-native platform that aims to make the AI of the 1% accessible to the 99%. Wield the most advanced techniques for unlocking the value of data, including text, images, video, audio, and graphs. No machine learning skills required, no team to hire, and no infrastructure to build or maintain. For more information on Graft or to schedule a demo, visit themachinelearningpodcast.com/graft today and tell them Tobias sent you.
Predibase is a low-code ML platform without low-code limits. Built on top of our open source foundations of Ludwig and Horovod, our platform allows you to train state-of-the-art ML and deep learning models on your datasets at scale. Our platform works on text, images, tabular, audio and multi-modal data using our novel compositional model architecture. We allow users to operationalize models on top of the modern data stack, through REST and PQL – an extension of SQL that puts predictive power in the hands of data practitioners. Go to themachinelearningpodcast.com/predibase today to learn more and try it out!
Your host is Tobias Macey and today I’m interviewing Tuhin Srivastava about Baseten, an ML Application Builder for data science and machine learning teams.

Interview
Introduction
How did you get involved in machine learning?
Can you describe what Baseten is and the story behind it?
Who are the target users for Baseten and what problems are you solving for them?
What are some of the typical technical requirements for an application that is powered by a machine learning model?
In the absence of Baseten, what are some of the common utilities/patterns that teams might rely on?
What kinds of challenges do teams run into when serving a model in the context of an application?
There are a number of projects that aim to reduce the overhead of turning a model into a usable product (e.g. Streamlit, Hex, etc.). What is your assessment of the current ecosystem for lowering the barrier to product development for ML and data science teams?
Can you describe how the Baseten platform is designed?
How have the design and goals of the project changed or evolved since you started working on it?
How do you handle sandboxing of arbitrary user-managed code to ensure security and stability of the platform?
How did you approach the system design to allow for mapping application development paradigms into a structure that was accessible to ML professionals?
Can you describe the workflow for building an ML powered application?
What types of models do you support? (e.g. NLP, computer vision, timeseries, deep neural nets vs. linear regression, etc.)
How do the monitoring requirements shift for these different model types?
What other challenges are presented by these different model types?
What are the limitations in size/complexity/operational requirements that you have to impose to ensure a stable platform?
What is the process for deploying model updates?
For organizations that are relying on Baseten as a prototyping platform, what are the options for taking a successful application and handing it off to a product team for further customization?
What are the most interesting, innovative, or unexpected ways that you have seen Baseten used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Baseten?
When is Baseten the wrong choice?
What do you have planned for the future of Baseten?

Contact Info
@tuhinone on Twitter
LinkedIn

Parting Question
From your perspective, what is the biggest barrier to adoption of machine learning today?

Closing Announcements
Thank you for listening! Don’t forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@themachinelearningpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

Links
Baseten, Gumroad, scikit-learn, Tensorflow, Keras, Streamlit (Podcast.__init__ Episode), Retool, Hex (Podcast.__init__ Episode), Kubernetes, React Monaco, Huggingface, Airtable, Dall-E 2, GPT-3, Weights and Biases

The intro and outro music is from "Hitman’s Lovesong feat. Paola Graziano" by The Freak Fandango Orchestra / CC BY-SA 3.0
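For context on the "common utilities/patterns" the interview contrasts with Baseten, here is a minimal sketch of the roll-your-own alternative: wrapping a trained model in a small web API by hand. It assumes FastAPI and a pickled scikit-learn model; the file name and request shape are hypothetical, and this is not Baseten's own deployment API.

```python
import pickle
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# A previously trained scikit-learn model serialized to disk (hypothetical path).
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

class PredictionRequest(BaseModel):
    features: List[float]

@app.post("/predict")
def predict(req: PredictionRequest):
    # Single-row prediction; real services layer validation, batching, auth,
    # logging, and monitoring on top of this bare endpoint.
    prediction = model.predict([req.features])[0]
    return {"prediction": float(prediction)}

# Run with: uvicorn app:app --reload
```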
Jun 3, 2022 • 1min

Introducing The Show

Hello, and welcome to the Machine Learning Podcast. I’m your host, Tobias Macey. You might know me from the Data Engineering Podcast or the Python Podcast.__init__. If you work with machine learning and AI, or you’re curious about it and want to learn more, then this show is for you. We’ll go beyond the esoteric research and flashy headlines and find out how machine learning is making an impact on the world and creating value for business. Along the way we’ll be joined by the researchers, engineers, and entrepreneurs who are shaping the industry. So go to themachinelearningpodcast.com today to subscribe and stay informed on how ML and AI are being used, how they work, and how to go from idea to production.

Support The Machine Learning Podcast
