
Data Engineering Podcast
Declarative Machine Learning Without The Operational Overhead Using Continual
Podcast summary created with Snipd AI
Quick takeaways
- Continual simplifies connecting to the data warehouse so that data can be used for analysis and model building.
- Users can define and manage custom feature sets based on their specific needs.
- Continual provides visibility into model performance and behavior, allowing users to monitor metrics and detect issues.
Deep dives
Connecting the Data Warehouse
The first step in onboarding to Continual is connecting the data warehouse where the user's data is stored. This is a simple process that gives Continual access to the data for analysis and model building.
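As a rough illustration of what this step involves, here is a minimal sketch of connecting to a Snowflake warehouse with the snowflake-connector-python library. The account, credentials, and object names are placeholders, and Continual handles the equivalent details through its own onboarding flow rather than hand-written code like this.

```python
# Minimal sketch: connecting to a cloud data warehouse (Snowflake shown here).
# Account, credentials, and object names are placeholders, not real values.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder account identifier
    user="analytics_user",     # placeholder service user
    password="********",
    warehouse="ANALYTICS_WH",
    database="PROD",
    schema="CUSTOMERS",
)

# With a connection in place, the platform can profile and query raw tables.
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM customers")
print(cur.fetchone())
cur.close()
conn.close()
```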
Defining and Managing Features
After connecting the data warehouse, users define and manage the features they want to use for predictive modeling. Feature sets are created in Continual and can be customized to the needs of each use case.
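To make the idea of a feature set concrete, the sketch below pulls customer-level aggregates out of the warehouse over a generic database connection. The table and column names are invented for illustration; in Continual such queries are registered declaratively rather than run from ad hoc scripts.

```python
# Illustrative feature set: per-customer aggregates over the last 90 days.
# Table and column names are hypothetical.
import pandas as pd

FEATURE_SET_SQL = """
SELECT
    customer_id,
    COUNT(order_id)  AS order_count_90d,
    SUM(order_total) AS revenue_90d,
    AVG(order_total) AS avg_order_value_90d,
    MAX(order_date)  AS last_order_date
FROM orders
WHERE order_date >= DATEADD('day', -90, CURRENT_DATE)
GROUP BY customer_id
"""

def load_feature_set(conn) -> pd.DataFrame:
    """Run the feature query over a DBAPI connection and key it by customer."""
    cur = conn.cursor()
    cur.execute(FEATURE_SET_SQL)
    columns = [col[0] for col in cur.description]
    frame = pd.DataFrame(cur.fetchall(), columns=columns)
    cur.close()
    return frame.set_index(columns[0])
```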
Building Predictive Models
With the data and features in place, users can start building their predictive models. They define the target variable, such as churn or sales, and set the model training parameters. Continual then trains the model, profiles the data, and generates predictions.
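The following sketch shows the kind of training work Continual automates behind this step, using scikit-learn on a hypothetical feature table that includes a boolean churn label. It is not Continual's implementation, only a stand-in for what the platform does on the user's behalf.

```python
# Sketch of the model-building step: a churn classifier trained on a feature
# table. The "churned" label column and the data itself are hypothetical.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

def train_churn_model(features_df):
    # The target variable is declared up front (here, a boolean churn flag);
    # every other column in the table is treated as an input feature.
    y = features_df["churned"]
    X = features_df.drop(columns=["churned"])

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42, stratify=y
    )

    model = GradientBoostingClassifier()  # training parameters left at defaults
    model.fit(X_train, y_train)
    return model, X_test, y_test
```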
Monitoring and Diagnostics
Continual provides visibility into the performance and behavior of the models. Users can monitor key metrics, such as precision, recall, and feature importance. They can also explore diagnostic tools to understand the factors driving the predictions and detect any issues or anomalies.
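For reference, the metrics named above can be computed with scikit-learn as in the short sketch below, continuing the hypothetical churn model from the previous sketch. Continual surfaces these through its own monitoring UI; this only illustrates what the numbers mean.

```python
# Sketch of the monitored metrics: precision, recall, and feature importance.
from sklearn.metrics import precision_score, recall_score

def evaluate(model, X_test, y_test):
    preds = model.predict(X_test)
    return {
        "precision": precision_score(y_test, preds),
        "recall": recall_score(y_test, preds),
        # Feature importances indicate which signals drive the predictions.
        "feature_importance": dict(
            zip(X_test.columns, model.feature_importances_)
        ),
    }
```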
Declarative AI as a Tool for Predictive Modeling
Continual, a platform focused on declarative AI, uses familiar tools like SQL to manipulate and organize data for predictive modeling. The platform emphasizes leveraging existing languages and frameworks rather than reinventing them, and it provides abstractions and structure around dbt models that make registering and maintaining them easier. The goal is to let data professionals focus on defining business problems and understanding the underlying data and signals, while the platform handles the technical aspects of feature engineering and model training.
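To show what "declarative" means in this context, here is a hypothetical specification, expressed as a plain Python dict rather than Continual's actual configuration format: the user states the entity, target, and features, and the platform takes responsibility for feature engineering, training, and keeping predictions up to date.

```python
# Hypothetical declarative spec (not Continual's actual format): declare what
# should be predicted, not how to train it.
churn_model_spec = {
    "entity": "customer",                # the thing predictions attach to
    "target": "churned_within_30_days",  # the business outcome to predict
    "features": [                        # registered feature sets / dbt models
        "customer_orders_90d",
        "customer_support_tickets",
    ],
    "schedule": "daily",                 # retrain and re-predict on a cadence
}
```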
Challenges and Opportunities in Operationalizing ML
While the concept of declarative AI is gaining traction, the challenge lies in bridging the gap between ML experts and business professionals who may not have in-depth ML knowledge. Framing business problems as ML tasks, such as churn prediction, requires translation and an understanding of the underlying data. Continual believes data scientists should focus on defining the problem and identifying the relevant signals, while the platform handles model development and performance monitoring. Looking ahead, Continual plans to enhance the development and production workflow for end users, with a strong emphasis on maintaining a declarative approach and a high level of control over the entire ML lifecycle.
Summary
Building, scaling, and maintaining the operational components of a machine learning workflow are all hard problems. Add the work of creating the model itself, and it’s not surprising that a majority of companies that could greatly benefit from machine learning have yet to either put it into production or see the value. Tristan Zajonc recognized the complexity that acts as a barrier to adoption and created the Continual platform in response. In this episode he shares his perspective on the benefits of declarative machine learning workflows as a means of accelerating adoption in businesses that don’t have the time, money, or ambition to build everything from scratch. He also discusses the technical underpinnings of what he is building and how using the data warehouse as a shared resource drastically shortens the time required to see value. This is a fascinating episode and Tristan’s work at Continual is likely to be the catalyst for a new stage in the machine learning community.
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
- Schema changes, missing data, and volume anomalies caused by your data sources can happen without any advance notice if you lack visibility into your data-in-motion. That leaves DataOps reactive to data quality issues and can make your consumers lose confidence in your data. By connecting to your pipeline orchestrator like Apache Airflow and centralizing your end-to-end metadata, Databand.ai lets you identify data quality issues and their root causes from a single dashboard. With Databand.ai, you’ll know whether the data moving from your sources to your warehouse will be available, accurate, and usable when it arrives. Go to dataengineeringpodcast.com/databand to sign up for a free 30-day trial of Databand.ai and take control of your data quality today.
- Atlan is a collaborative workspace for data-driven teams, like GitHub for engineering or Figma for design teams. By acting as a virtual hub for data assets ranging from tables and dashboards to SQL snippets & code, Atlan enables teams to create a single source of truth for all their data assets and collaborate across the modern data stack through deep integrations with tools like Snowflake, Slack, Looker, and more. Go to dataengineeringpodcast.com/atlan today and sign up for a free trial. If you’re a Data Engineering Podcast listener, you get credits worth $3000 on an annual subscription.
- Your host is Tobias Macey and today I’m interviewing Tristan Zajonc about Continual, a platform for automating the creation and application of operational AI on top of your data warehouse
Interview
- Introduction
- How did you get involved in the area of data management?
- Can you describe what Continual is and the story behind it?
- What is your definition for "operational AI" and how does it differ from other applications of ML/AI?
- What are some example use cases for AI in an operational capacity?
- What are the barriers to adoption for organizations that want to take advantage of predictive analytics?
- Who are the target users of Continual?
- Can you describe how the Continual platform is implemented?
- How has the design and infrastructure changed or evolved since you first began working on it?
- What is the workflow for someone building a model and putting it into production?
- Once a model has been deployed, what are the mechanisms that you expose for interacting with it?
- How does this differ from in-database ML capabilities such as what is offered by Vertica and BigQuery?
- How much understanding of ML/AI principles is necessary for someone to create a model with Continual?
- What is your estimation of the impact that Continual can have on the overall productivity of a data team/data scientist?
- What are the most interesting, innovative, or unexpected ways that you have seen Continual used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Continual?
- When is Continual the wrong choice?
- What do you have planned for the future of Continual?
Contact Info
- @tristanzajonc on Twitter
Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
- Thank you for listening! Don’t forget to check out our other show, Podcast.__init__, to learn about the Python language, its community, and the innovative ways it is being used.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
- Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
- Continual
- World Bank
- SAS
- SPSS
- Stata
- Feature Store
- DataRobot
- Transfer Learning
- dbt
- Ludwig
- Overton (Apple)
- Hightouch
- Census
- Galaxy Schema
- In-Database ML Podcast Episode
- scikit-learn
- Snorkel
- Materialize
- Flink SQL
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA