

Data Engineering Podcast
Tobias Macey
This show goes behind the scenes for the tools, techniques, and difficulties associated with the discipline of data engineering. Databases, workflows, automation, and data manipulation are just some of the topics that you will find here.
Episodes

May 4, 2021 • 57min
The Grand Vision And Present Reality of DataOps
Summary
The data industry is changing rapidly, and one of the most active areas of growth is automation of data workflows. Taking cues from the DevOps movement of the past decade, data professionals are orienting around the concept of DataOps. More than just a collection of tools, a proper DataOps approach depends on a number of organizational and conceptual changes. In this episode Kevin Stumpf, CTO of Tecton, Maxime Beauchemin, CEO of Preset, and Lior Gavish, CTO of Monte Carlo, discuss the grand vision and present realities of DataOps. They explain how to think about your data systems in a holistic and maintainable fashion, the security challenges that threaten to derail your efforts, and the power of using metadata as the foundation of everything that you do. If you are wondering how to get control of your data platforms and bring all of your stakeholders onto the same page, then this conversation is for you.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
Your host is Tobias Macey and today I’m interviewing Max Beauchemin, Lior Gavish, and Kevin Stumpf about the real world challenges of embracing DataOps practices and systems, and how to keep things secure as you scale
Interview
Introduction
How did you get involved in the area of data management?
Before we get started, can you each give your definition of what "DataOps" means to you?
How does this differ from "business as usual" in the data industry?
What are some of the things that DataOps isn’t (despite what marketers might say)?
What are the biggest difficulties that you have faced in going from concept to production with a workflow or system intended to power self-serve access to other members of the organization?
What are the weak points in the current state of the industry, whether technological or social, that contribute to your greatest sense of unease from a security perspective?
As founders of companies that aim to facilitate adoption of various aspects of DataOps, how are you applying the products that you are building to your own internal systems?
How does security factor into the design of robust DataOps systems? What are some of the biggest challenges related to security when it comes to putting these systems into production? (a secrets-handling sketch follows this list)
What are the biggest differences between DevOps and DataOps, particularly when it concerns designing distributed systems?
What areas of the DataOps landscape do you think are ripe for innovation?
Nowadays, it seems like new DataOps companies are cropping up every day to try and solve some of these problems. Why do you think DataOps is becoming such an important component of the modern data stack?
There’s been a lot of conversation recently around the "rise of the data engineer" versus other roles in the data ecosystem (e.g. data scientist or data analyst). Why do you think that is?
What are some of the most valuable lessons that you have learned from working with your customers about how to apply DataOps principles?
What are some of the most interesting, unexpected, or challenging lessons that you have learned while building your respective platforms and businesses?
What are the industry trends that you are each keeping an eye on to inform your future product direction?
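As a concrete illustration of the security thread in this conversation, here is a minimal sketch of fetching warehouse credentials from HashiCorp Vault (linked below) at pipeline runtime rather than hardcoding them. It assumes the hvac Python client and a KV v2 secrets engine; the URL, secret path, and key names are invented for illustration, and this is a sketch of the pattern rather than anything prescribed by the guests.

    import hvac
    import sqlalchemy

    # Connect to Vault; in practice the address and token come from the
    # environment or an auth method such as AppRole or Kubernetes auth.
    client = hvac.Client(url="https://vault.example.com:8200")

    # Read warehouse credentials from a KV v2 secrets engine.
    # The path and key names here are illustrative.
    secret = client.secrets.kv.v2.read_secret_version(path="data-platform/warehouse")
    creds = secret["data"]["data"]

    # Build the connection at runtime so credentials never land in code,
    # config files, or version control.
    engine = sqlalchemy.create_engine(
        "postgresql://{user}:{password}@warehouse.internal:5432/analytics".format(
            user=creds["username"], password=creds["password"]
        )
    )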
Contact Info
Kevin
LinkedIn
kevinstumpf on GitHub
@kevinstumpf on Twitter
Maxime
LinkedIn
@mistercrunch on Twitter
mistercrunch on GitHub
Lior
LinkedIn
@lgavish on Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
Tecton
Monte Carlo
Superset
Preset
Barracuda Networks
Feature Store
DataOps
DevOps
Data Catalog
Amundsen
OpenLineage
The Downfall of the Data Engineer
Hashicorp Vault
Reverse ETL
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Apr 27, 2021 • 47min
Self Service Data Exploration And Dashboarding With Superset
Summary
The reason for collecting, cleaning, and organizing data is to make it usable by the organization. One of the most common and widely used methods of access is through a business intelligence dashboard. Superset is an open source option that has been gaining popularity due to its flexibility and extensible feature set. In this episode Maxime Beauchemin discusses how data engineers can use Superset to provide self service access to data and deliver analytics. He digs into how it integrates with your data stack, how you can extend it to fit your use case, and why open source systems are a good choice for your business intelligence. If you haven’t already tried out Superset then this conversation is well worth your time. Give it a listen and then take it for a test drive today.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
Your host is Tobias Macey and today I’m interviewing Max Beauchemin about Superset, an open source platform for data exploration, dashboards, and business intelligence
Interview
Introduction
How did you get involved in the area of data management?
Can you start by describing what Superset is?
Superset is becoming part of the reference architecture for a modern data stack. What are the factors that have contributed to its popularity over other tools such as Redash, Metabase, Looker, etc.?
Where do dashboarding and exploration tools like Superset fit in the responsibilities and workflow of a data engineer?
What are some of the challenges that Superset faces in being performant when working with large data sources?
Which data sources have you found to be the most challenging to work with?
What are some anti-patterns that users of Superset might run into when building out a dashboard?
What are some of the ways that users can surface data quality indicators (e.g. freshness, lineage, check results, etc.) in a Superset dashboard?
Another trend in analytics and dashboard tools is providing actionable insights. How can Superset support those use cases where a business user or analyst wants to perform an action based on the data that they are being shown?
How can Superset factor into a data governance strategy for the business?
What are some of the most interesting, innovative, or unexpected ways that you have seen Superset used?
Dogfooding
What are the most interesting, unexpected, or challenging lessons that you have learned from working on Superset and founding Preset?
When is Superset the wrong choice?
What do you have planned for the future of Superset and Preset?
Contact Info
LinkedIn
@mistercrunch on Twitter
mistercrunch on GitHub
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
Superset
Podcast.__init__ Episode
Preset
ASP (Active Server Pages)
VBScript
Data Warehouse Institute
Ralph Kimball
Bill Inmon
Ubisoft
Hadoop
Tableau
Looker
Podcast Episode
The Future of Business Intelligence Is Open Source
Supercharging Apache Superset
Redash
Podcast.__init__ Episode
Metabase
Podcast Episode
The Rise Of The Data Engineer
Airbnb Data University
Python DBAPI
SQLAlchemy
Druid
SQL Common Table Expressions
SQL Window Functions
Data Warehouse Semantic Layer
Amundsen
Podcast Episode
Open Lineage
Datakin
Marquez
Podcast Episode
Apache Arrow
Podcast.__init__ Episode with Wes McKinney
Apache Parquet
DataHub
Podcast Episode
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Apr 20, 2021 • 48min
Moving Machine Learning Into The Data Pipeline at Cherre
Summary
Most of the time when you think about a data pipeline or ETL job what comes to mind is a purely mechanistic progression of functions that move data from point A to point B. Sometimes, however, one of those transformations is actually a full-fledged machine learning project in its own right. In this episode Tal Galfsky explains how he and the team at Cherre tackled the problem of messy address data by building a natural language processing and entity resolution system that is served as an API to the rest of their pipelines. He discusses the myriad ways that addresses are incomplete, poorly formed, and just plain wrong, why it was a big enough pain point to invest in building an industrial-strength solution for it, and how it actually works under the hood. After listening to this you’ll look at your data pipelines in a new light and start to wonder how you can bring more advanced strategies into the cleaning and transformation process.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
Your host is Tobias Macey and today I’m interviewing Tal Galfsky about how Cherre is bringing order to the messy problem of physical addresses and entity resolution in their data pipelines.
Interview
Introduction
How did you get involved in the area of data management?
Started as a physicist and evolved into data science
Can you start by giving a brief recap of what Cherre is and the types of data that you deal with?
Cherre is a company that connects data
We’re not primarily a data vendor, in that we don’t sell data
We help companies connect and make sense of their data
The real estate market is historically closed, gut-led, and behind on tech
What are the biggest challenges that you deal with in your role when working with real estate data?
Lack of a standard domain model in real estate.
Ontology: what is a property? Each data source thinks about properties in a very different way, yielding similar but completely different data.
Quality: even if the datasets are describing the same thing, they differ in accuracy and freshness.
Hierarchy: when is one source better than another?
What are the teams and systems that rely on address information?
Any company that needs to clean or organize (make sense of) its data needs to identify people, companies, and properties.
Our clients use address resolution in multiple ways, via the UI or via an API. The service is both external and internal, so what I build has to be good enough for the demanding needs of our data science team, robust enough for our engineers, and simple enough for non-expert clients to use.
Can you give an example of the problems involved in entity resolution?
Known entity example.
Empire State Building.
To resolve addresses in a way that makes sense for the client you need to capture the real-world entities: lots, buildings, units.
Identify the type of the object (lot, building, unit)
Tag the object with all the relevant addresses
Relations to other objects (lot, building, unit)
What are some examples of the kinds of edge cases or messiness that you encounter in addresses?
The first class is string problems.
The second class is component problems.
The third class is geocoding problems.
I understand that you have developed a service for normalizing addresses and performing entity resolution to provide canonical references for downstream analyses. Can you give an overview of what is involved?
What is the need for the service? The main requirement is connecting an address to a lot, building, or unit, with latitude and longitude coordinates.
How were you satisfying this requirement previously?
Before we built our model and dedicated service, we had a basic pipeline-only prototype that handled just NYC addresses.
What were the motivations for designing and implementing this as a service?
Need to expand nationwide and to deal with client queries in real time.
What are some of the other data sources that you rely on to be able to perform this normalization and resolution?
Lot data, building data, unit data, and footprint and address point datasets.
What challenges do you face in managing these other sources of information?
Accuracy, hierarchy, standardization, a unified solution, and persistent IDs and primary keys.
Digging into the specifics of your solution, can you talk through the full lifecycle of a request to resolve an address and the various manipulations that are performed on it?
String cleaning, parsing and tokenization, standardization, matching (a minimal sketch of this flow follows this list)
What are some of the other pieces of information in your system that you would like to see addressed in a similar fashion?
Our named entity solution, with connections to the knowledge graph and owner unmasking.
What are some of the most interesting, unexpected, or challenging lessons that you learned while building this address resolution system?
Scaling the NYC geocoding example: the NYC model was exploding a subset of the ways an address can be messed up. Flexibility. Dependencies. Client exposure.
Now that you have this system running in production, if you were to start over today what would you do differently?
A lot, but at this point the module boundaries and client interface are defined in such a way that we can change or completely replace any given part without breaking anything client-facing.
What are some of the other projects that you are excited to work on going forward?
Named entity resolution and Knowledge Graph
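To make the clean/parse/standardize/match lifecycle described above concrete, here is a minimal sketch of that flow in Python. The abbreviation table, parsing rules, and lookup-based matching are invented for illustration; Cherre’s actual service is a full NLP and entity resolution system, not a dictionary lookup.

    import re

    # Illustrative standardization table; a real system would use a much
    # larger dictionary (USPS suffixes, unit designators, ordinals, etc.).
    ABBREVIATIONS = {"ave": "avenue", "blvd": "boulevard", "5th": "fifth"}

    def clean(raw: str) -> str:
        # String cleaning: lowercase, strip punctuation, collapse whitespace.
        text = re.sub(r"[^\w\s]", " ", raw.lower())
        return re.sub(r"\s+", " ", text).strip()

    def parse(cleaned: str) -> dict:
        # Parse and tokenize: split into house number and street tokens.
        tokens = cleaned.split()
        return {"house_number": tokens[0], "street": tokens[1:]}

    def standardize(parsed: dict) -> dict:
        # Standardize: expand abbreviations so "350 5th Ave." and
        # "350 Fifth Avenue" converge on one representation.
        street = [ABBREVIATIONS.get(t, t) for t in parsed["street"]]
        return {**parsed, "street": " ".join(street)}

    def match(standardized: dict, known_entities: dict):
        # Match: resolve against known lots/buildings/units keyed by
        # their standardized address strings.
        key = standardized["house_number"] + " " + standardized["street"]
        return known_entities.get(key)

    entities = {"350 fifth avenue": "building:empire-state-building"}
    record = standardize(parse(clean("350 5th Ave.")))
    print(match(record, entities))  # -> building:empire-state-building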
Contact Info
LinkedIn
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
BigQuery is a huge asset, in particular UDFs, but UDFs don’t support API calls or Python scripts (see the sketch below)
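Some context for that answer: BigQuery UDFs are written in SQL or JavaScript and execute inside the warehouse, which is why the function body cannot call an external API or run a Python script. Below is a minimal sketch of issuing a query with an inline temporary SQL UDF from Python; it assumes the google-cloud-bigquery client, default credentials, and an invented project/dataset/table.

    from google.cloud import bigquery

    client = bigquery.Client()  # assumes default credentials are configured

    # A temporary SQL UDF defined inline with the query. Everything in the
    # function body runs inside BigQuery, so it cannot reach out to an
    # external API or execute a Python script, which is the gap noted above.
    query = r"""
    CREATE TEMP FUNCTION normalize_street(s STRING) AS (
      REGEXP_REPLACE(LOWER(TRIM(s)), r'\bave\b', 'avenue')
    );
    SELECT normalize_street(street_address) AS street
    FROM `my-project.my_dataset.addresses`  -- illustrative table
    LIMIT 10;
    """

    for row in client.query(query).result():
        print(row.street)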
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
Cherre
Podcast Episode
Photonics
Knowledge Graph
Entity Resolution
BigQuery
NLP == Natural Language Processing
dbt
Podcast Episode
Airflow
Podcast.__init__ Episode
Datadog
Podcast Episode
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Apr 13, 2021 • 1h 9min
Exploring The Expanding Landscape Of Data Professions with Josh Benamram of Databand
Summary
"Business as usual" is changing, with more companies investing in data as a first class concern. As a result, the data team is growing and introducing more specialized roles. In this episode Josh Benamram, CEO and co-founder of Databand, describes the motivations for these emerging roles, how these positions affect the team dynamics, and the types of visibility that they need into the data platform to do their jobs effectively. He also talks about how his experience working with these teams informs his work at Databand. If you are wondering how to apply your talents and interests to working with data then this episode is a must listen.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
Your host is Tobias Macey and today I’m interviewing Josh Benamram about the continued evolution of roles and responsibilities in data teams and their varied requirements for visibility into the data stack
Interview
Introduction
How did you get involved in the area of data management?
Can you start by discussing the set of roles that you see in a majority of data teams?
What new roles do you see emerging, and what are the motivating factors?
Which of the more established positions are fracturing or merging to create these new responsibilities?
What are the contexts in which you are seeing these role definitions used? (e.g. small teams, large orgs, etc.)
How do the increased granularity/specialization of responsibilities across data teams change the ways that data and platform architects need to think about technology investment?
What are the organizational impacts of these new types of data work?
How do these shifts in role definition change the ways that the individuals in the position interact with the data platform?
What are the types of questions that practitioners in different roles are asking of the data that they are working with? (e.g. what is the lineage of this asset vs. what is the distribution of values in this column, etc.)
How can metrics and observability data about pipelines and data systems help to support these various roles?
What are the different ways of measuring data quality for the needs of these roles?
How is the work you are doing at Databand informed by these changing needs?
One of the big challenges caused by data systems is the varying modes of access and interaction across the different stakeholders and activities. How can data platform teams and vendors help to surface useful metrics and information across these various interfaces without forcing users into a new or unfamiliar workflow?
What are some of the long-term impacts that you foresee in the data ecosystem and ways of interacting with data as a result of the current trend toward more specialized tasks?
As a vendor working to provide useful context to these practitioners what are some of the most interesting, unexpected, or challenging lessons that you have learned?
What do you have planned for the future of Databand?
Contact Info
Email
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
Databand
Website
Platform
Open Core
More data engineering stories & best practices
Atlassian
Chartio
Data Mesh Article
Podcast Episode
Grafana
Metabase
Superset
Podcast.__init__ Episode
Snowflake
Podcast Episode
Spark
Airflow
Podcast.__init__ Episode
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Apr 6, 2021 • 58min
Put Your Whole Data Team On The Same Page With Atlan
Summary
One of the biggest obstacles to success in delivering data products is cross-team collaboration. Part of the problem is the difference in the information that each role requires to do their job and where they expect to find it. This introduces a barrier to communication that is difficult to overcome, particularly in teams that have not reached a significant level of maturity in their data journey. In this episode Prukalpa Sankar shares her experiences across multiple attempts at building a system that brings everyone onto the same page, ultimately bringing her to found Atlan. She explains how the design of the platform is informed by the needs of managing data projects for large and small teams across her previous roles, how it integrates with your existing systems, and how it can work to bring everyone onto the same page.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
Your host is Tobias Macey and today I’m interviewing Prukalpa Sankar about Atlan, a modern data workspace that makes collaboration among data stakeholders easier, increasing efficiency and agility in data projects
Interview
Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of what you are building at Atlan and some of the story behind it?
Who are the target users of Atlan?
What portions of the data workflow is Atlan responsible for?
What components of the data stack might Atlan replace?
How would you characterize Atlan’s position in the current data ecosystem?
What makes Atlan stand out from other systems for data cataloguing, metadata management, or data governance?
What types of data assets (e.g. structured vs unstructured, textual vs binary, etc.) is Atlan designed to understand?
Can you talk through how Atlan is implemented?
How have the goals and design of the platform changed or evolved since you first began working on it?
What are some of the early assumptions that you have had to revisit or reconsider?
What is involved in getting Atlan deployed and integrated into an existing data platform?
Beyond the technical aspects, what are the business processes that teams need to implement to be successful when incorporating Atlan into their systems?
Once Atlan is set up, what is a typical workflow for an individual and their team to collaborate on a set of data assets, or building out a new processing pipeline?
What are some useful steps for introducing all of the stakeholders to the system and workflow?
What are the available extension points for managing data in systems that aren’t supported by Atlan out of the box?
What are some of the most interesting, innovative, or unexpected ways that you have seen Atlan used?
What are the most interesting, unexpected, or challenging lessons that you have learned while building Atlan?
When is Atlan the wrong choice?
What do you have planned for the future of the product?
Contact Info
LinkedIn
@prukalpa on Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
Atlan
India’s National Data Platform
World Economic Forum
UN
Gates Foundation
GitHub
Figma
Snowflake
Redshift
Databricks
DBT
Sisense
Looker
Apache Atlas
Immuta
DataHub
Datakin
Apache Ranger
Great Expectations
Trino
Airflow
Dagster
Privacera
Databand
Cloudformation
Grafana
Deequ
We Failed to Set Up a Data Catalog 3x. Here’s Why.
Analyzing the Analyzers book
OpenAPI
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Mar 30, 2021 • 58min
Data Quality Management For The Whole Team With Soda Data
Summary
Data quality is on the top of everyone’s mind recently, but getting it right is as challenging as ever. One of the contributing factors is the number of people who are involved in the process and the potential impact on the business if something goes wrong. In this episode Maarten Masschelein and Tom Baeyens share the work they are doing at Soda to bring everyone on board to make your data clean and reliable. They explain how they started down the path of building a solution for managing data quality, their philosophy of how to empower data engineers with well engineered open source tools that integrate with the rest of the platform, and how to bring all of the stakeholders onto the same page to make your data great. There are many aspects of data quality management and it’s always a treat to learn from people who are dedicating their time and energy to solving it for everyone.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
Your host is Tobias Macey and today I’m interviewing Maarten Masschelein and Tom Baeyens about the work they are doing at Soda to power data quality management
Interview
Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of what you are building at Soda?
What problem are you trying to solve?
And how are you solving that problem?
What motivated you to start a business focused on data monitoring and data quality?
The data monitoring and broader data quality space is a segment of the industry that is seeing a huge increase in attention recently. Can you share your perspective on the current state of the ecosystem and how your approach compares to other tools and products?
Who have you created Soda for (e.g. platform engineers, data engineers, data product owners, etc.), and what is a typical workflow for each of them?
How do you go about integrating Soda into your data infrastructure?
How has the Soda platform been architected?
Why is this architecture important?
How have the goals and design of the system changed or evolved as you worked with early customers and iterated toward your current state?
What are some of the challenges associated with the ongoing monitoring and testing of data?
What are some of the tools or techniques for data testing used in conjunction with Soda?
What are some of the most interesting, innovative, or unexpected ways that you have seen Soda being used?
What are the most interesting, unexpected, or challenging lessons that you have learned while building the technology and business for Soda?
When is Soda the wrong choice?
What do you have planned for the future?
Contact Info
Maarten
LinkedIn
@masscheleinm on Twitter
Tom
LinkedIn
@tombaeyens on Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
Soda Data
Soda SQL
RedHat
Collibra
Spark
Getting Things Done by David Allen (affiliate link)
Slack
OpsGenie
DBT
Airflow
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Mar 23, 2021 • 50min
Real World Change Data Capture At Datacoral
Summary
The world of business is becoming increasingly dependent on information that is accurate up to the minute. For analytical systems, the only way to provide this reliably is by implementing change data capture (CDC). Unfortunately, this is a non-trivial undertaking, particularly for teams that don’t have extensive experience working with streaming data and complex distributed systems. In this episode Raghu Murthy, founder and CEO of Datacoral, does a deep dive on how he and his team manage change data capture pipelines in production.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
Your host is Tobias Macey and today I’m interviewing Raghu Murthy about his recent work of making change data capture more accessible and maintainable
Interview
Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of what CDC is and when it is useful?
What are the alternatives to CDC?
What are the cases where a more batch-oriented approach would be preferable?
What are the factors that you need to consider when deciding whether to implement a CDC system for a given data integration?
What are the barriers to entry?
What are some of the common mistakes or misconceptions about CDC that you have encountered in your own work and while working with customers?
How does CDC fit into a broader data platform, particularly where there are likely to be other data integration pipelines in operation? (e.g. Fivetran/Airbyte/Meltano/custom scripts)
What are the moving pieces in a CDC workflow that need to be considered as you are designing the system?
What are some examples of the configuration changes necessary in source systems to provide the needed log data?
How would you characterize the current landscape of tools available off the shelf for building a CDC pipeline?
What are your predictions about the potential for a unified abstraction layer for log-based CDC across databases?
What are some of the potential performance/uptime impacts on source databases, both during the initial historical sync and once you hit a steady state?
How can you mitigate the impacts of the CDC pipeline on the source databases?
What are some of the implementation details that application developers and DBAs need to be aware of for data modeling in the source systems to allow for proper replication via CDC?
Are there any performance challenges that need to be addressed in the consumers or destination systems (e.g. parallelism)?
Can you describe the technical implementation and architecture that you use for implementing CDC?
How has the design evolved as you have grown the scale and sophistication of your system?
In the destination system, what data modeling decisions need to be made to ensure that the replicated information is usable for analytics?
What additional attributes need to be added to track things like row modifications, deletions, schema changes, etc.? (a sketch of such an event follows this list)
How do you approach treatment of data copies in the DWH? (e.g. ELT: keep all source tables and use dbt to convert relevant tables into star/snowflake/data vault/wide tables)
What are your thoughts on the viability of a data lake as the destination system? (e.g. S3/Parquet or Trino/Drill/etc.)
CDC is a topic that is generally reserved for conversations about databases, but what are some of the other systems for which we could consider implementing CDC (e.g. APIs and third-party data sources)?
How can we integrate CDC into metadata/lineage tooling?
How do you handle observability of CDC flows?
What is involved in debugging a replication flow?
How can we build data quality checks into CDC workflows?
What are some of the most interesting, innovative, or unexpected ways that you have seen CDC used?
What are the most interesting, unexpected, or challenging lessons that you have learned from digging deep into CDC implementation?
When is CDC the wrong choice?
What are some of the industry or technology trends around CDC that you are most excited by?
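To ground the questions above about row modifications, deletions, and destination-side modeling, here is a sketch of what a single log-based change event often looks like, loosely modeled on the Debezium-style envelope (field names vary by tool and are illustrative here), along with the kind of CDC metadata columns a destination table might carry.

    # A log-based change event, loosely following the Debezium envelope.
    # Field names and values are illustrative, not a spec.
    change_event = {
        "op": "u",                    # c = insert, u = update, d = delete
        "ts_ms": 1616500000000,       # when the change was captured
        "source": {"db": "app", "table": "orders", "lsn": 123456789},
        "before": {"id": 42, "status": "pending"},  # prior row image
        "after": {"id": 42, "status": "shipped"},   # new row image
    }

    def to_warehouse_row(event: dict) -> dict:
        # Flatten an event into a destination row with CDC metadata columns.
        # Soft-delete flags and ordering columns like these are what let
        # downstream models (e.g. in dbt) reconstruct current state or history.
        row = dict(event["after"] or {})
        row["_cdc_deleted"] = event["op"] == "d"
        row["_cdc_updated_at"] = event["ts_ms"]
        row["_cdc_log_position"] = event["source"]["lsn"]
        return row

    print(to_warehouse_row(change_event))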
Contact Info
LinkedIn
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
DataCoral
Podcast Episode
DataCoral Blog
3 Steps To Build A Modern Data Stack
Change Data Capture: Overview
Hive
Hadoop
DBT
Podcast Episode
FiveTran
Podcast Episode
Change Data Capture
Metadata First Blog Post
Debezium
Podcast Episode
UUID == Universally Unique Identifier
Airflow
Oracle Goldengate
Parquet
Trino
AWS Lambda
Data Mesh
Podcast Episode
Enterprise Message Bus
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Mar 16, 2021 • 46min
Managing The DoorDash Data Platform
Summary
The team at DoorDash has a complex set of optimization challenges to deal with using data that they collect from a multi-sided marketplace. In order to handle the volume and variety of information that they use to run and improve the business the data team has to build a platform that analysts and data scientists can use in a self-service manner. In this episode the head of data platform for DoorDash, Sudhir Tonse, discusses the technologies that they are using, the approach that they take to adding new systems, and how they think about priorities for what to support for the whole company vs what to leave as a specialized concern for a single team. This is a valuable look at how to manage a large and growing data platform that supports a variety of teams with varied and evolving needs.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
Your host is Tobias Macey and today I’m interviewing Sudhir Tonse about how the team at DoorDash designed their data platform
Interview
Introduction
How did you get involved in the area of data management?
Can you start by giving a quick overview of what you do at DoorDash?
What are some of the ways that data is used to power the business?
How has the pandemic affected the scale and volatility of the data that you are working with?
Can you describe the type(s) of data that you are working with?
What are the primary sources of data that you collect?
What secondary or third party sources of information do you rely on?
Can you give an overview of the collection process for that data?
In selecting the technologies for the various components in your data stack, what are the primary factors that you consider when evaluating the build vs. buy decision?
In your recent post about how you are scaling the capabilities and capacity of your data platform you mentioned the concept of maintaining a "paved path" of supported technologies to simplify integration across teams. What are the technologies that you use and rely on for the "paved path"?
How are you managing quality and consistency of your data across its lifecycle?
What are some of the specific data quality solutions that you have integrated into the platform and "paved path"?
What are some of the technologies that were used early on at DoorDash that failed to keep up as the business scaled?
How do you manage the migration path for adopting new technologies or techniques?
In the same post you mentioned the tendency to allow building point solutions before deciding whether to generalize a given use case into a platform capability. Can you give some examples of cases where a point solution remains a one-off versus when it needs to be expanded into a widely used component?
How do you identify and track cost factors in the data platform?
What do you do with that information?
What is your approach for identifying and measuring useful OKRs (Objectives and Key Results)?
How do you quantify potentially subjective metrics such as reliability and quality?
How have you designed the organizational structure for your data teams?
What are the responsibilities and organizational interfaces for data engineers within the company?
How have the organizational structures/patterns shifted or changed at different levels of scale/maturity for the business?
What are some of the most interesting, useful, unexpected, or challenging lessons that you have learned during your time as a data professional at DoorDash?
What are some of the upcoming projects or changes that you anticipate in the near to medium future?
Contact Info
LinkedIn
@stonse on Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
How DoorDash is Scaling its Data Platform to Delight Customers and Meet our Growing Demand
DoorDash
Uber
Netscape
Netflix
Change Data Capture
Debezium
Podcast Episode
SnowflakeDB
Podcast Episode
Airflow
Podcast.__init__ Episode
Kafka
Flink
Podcast Episode
Pinot
GDPR
CCPA
Data Governance
AWS
LightGBM
XGBoost
Big Data Landscape
Kinesis
Kafka Connect
Cassandra
PostgreSQL
Podcast Episode
Amundsen
Podcast Episode
SQS
Feature Toggles
BigEye
Podcast Episode
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Mar 9, 2021 • 52min
Leave Your Data Where It Is And Automate Feature Extraction With Molecula
Summary
A majority of the time spent in data engineering is copying data between systems to make the information available for different purposes. This introduces challenges such as keeping information synchronized, managing schema evolution, and building transformations to match the expectations of the destination systems. H.O. Maycotte was faced with these same challenges, but at a massive scale, leading him to question whether there is a better way. After tasking some of his top engineers to consider the problem in a new light they created the Pilosa engine. In this episode H.O. explains how, using Pilosa as the core, he built the Molecula platform to eliminate the need to copy data between systems in order to make it accessible for analytical and machine learning purposes. He also discusses the challenges that he faces in helping potential users and customers understand the shift in thinking that this creates, and how the system is architected to make it possible. This is a fascinating conversation about what the future looks like when you revisit your assumptions about how systems are designed.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
Your host is Tobias Macey and today I’m interviewing H.O. Maycotte about Molecula, a cloud-based feature store built on the open source Pilosa project
Interview
Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of what you are building at Molecula and the story behind it?
What are the additional capabilities that Molecula offers on top of the open source Pilosa project?
What are the problems/use cases that Molecula solves for?
What are some of the technologies or architectural patterns that Molecula might replace in a company’s data platform?
One of the use cases that is mentioned on the Molecula site is as a feature store for ML and AI. This is a category that has been seeing a lot of growth recently. Can you provide some context on how Molecula fits into that market and how it compares to options such as Tecton, Iguazio, Feast, etc.?
What are the benefits of using a bitmap index for identifying and computing features? (A small illustration of the idea follows this question list.)
Can you describe how the Molecula platform is architected?
How has the design and goal of Molecula changed or evolved since you first began working on it?
For someone who is using Molecula, can you describe the process of integrating it with their existing data sources?
Can you describe the internal data model of Pilosa/Molecula?
How should users think about data modeling and architecture as they are loading information into the platform?
Once a user has data in Pilosa, what are the available mechanisms for performing analyses or feature engineering?
What are some of the most underutilized or misunderstood capabilities of Molecula?
What are some of the most interesting, unexpected, or innovative ways that you have seen the Molecula platform used?
What are the most interesting, unexpected, or challenging lessons that you have learned from building and scaling Molecula?
When is Molecula the wrong choice?
What do you have planned for the future of the platform and business?
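As a companion to the bitmap index question above, here is a minimal sketch of the underlying idea; it is not Pilosa’s or Molecula’s actual API, and all class and field names are illustrative. Each (field, value) pair is represented as a bitmap over record IDs, so combining features becomes cheap bitwise arithmetic. Python integers stand in as toy bitsets here, whereas real engines use compressed bitmap formats for the same effect at scale.

```python
# A toy bitmap index: each (field, value) pair maps to a bitmap over
# record IDs, so boolean feature logic becomes bitwise arithmetic.
# This is an illustration of the concept only, not Pilosa's design.

class BitmapIndex:
    def __init__(self):
        self.bitmaps = {}       # (field, value) -> int used as a bitset
        self.num_records = 0

    def add(self, record_id, field, value):
        key = (field, value)
        self.bitmaps[key] = self.bitmaps.get(key, 0) | (1 << record_id)
        self.num_records = max(self.num_records, record_id + 1)

    def row(self, field, value):
        return self.bitmaps.get((field, value), 0)

    def count(self, bitset):
        return bin(bitset).count("1")


idx = BitmapIndex()
# Three toy records with "city" and "device" attributes.
idx.add(0, "city", "austin")
idx.add(0, "device", "ios")
idx.add(1, "city", "austin")
idx.add(1, "device", "android")
idx.add(2, "city", "boston")
idx.add(2, "device", "ios")

# Feature: "iOS user in Austin" -- a single bitwise AND, with no
# row-by-row scan of the underlying records.
feature = idx.row("city", "austin") & idx.row("device", "ios")
print(idx.count(feature))                                        # 1
print([i for i in range(idx.num_records) if feature >> i & 1])   # [0]
```

The payoff for feature engineering is that a predicate over millions of records reduces to a handful of bitwise operations on compressed bitmaps, with no scan of the raw rows, which is the property that makes bitmap indexes attractive as a feature store substrate.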
Contact Info
LinkedIn
@maycotte on Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__, to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show, then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show, please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
Molecula
Pilosa
Podcast Episode
The Social Dilemma
Feature Store
Cassandra
Elasticsearch
Podcast Episode
Druid
MongoDB
SwimOS
Podcast Episode
Kafka
Kafka Schema Registry
Podcast Episode
Homomorphic Encryption
Lucene
Solr
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Mar 2, 2021 • 1h 6min
Bridging The Gap Between Machine Learning And Operations At Iguazio
Summary
The process of building and deploying machine learning projects requires a staggering number of systems and stakeholders to work in concert. In this episode Yaron Haviv, co-founder of Iguazio, discusses the complexities inherent to the process, as well as how he has worked to democratize the technologies necessary to make machine learning operations maintainable.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
Your host is Tobias Macey and today I’m interviewing Yaron Haviv about Iguazio, a platform for end-to-end automation of machine learning applications using MLOps principles.
Interview
Introduction
How did you get involved in the area of data science & analytics?
Can you start by giving an overview of what Iguazio is and the story of how it got started?
How would you characterize your target or typical customer?
What are the biggest challenges that you see around building production grade workflows for machine learning?
How does Iguazio help to address those complexities?
For customers who have already invested in the technical and organizational capacity for data science and data engineering, how does Iguazio integrate with their environments?
What are the responsibilities of a data engineer throughout the different stages of the lifecycle for a machine learning application?
Can you describe how the Iguazio platform is architected?
How has the design of the platform evolved since you first began working on it?
How have the industry best practices around bringing machine learning to production changed?
How do you approach testing/validation of machine learning applications and releasing them to production environments? (e.g. CI/CD)
Once a model is in production, what are the types and sources of information that you collect to monitor their performance?
What are the factors that contribute to model drift? (A simple way to measure input drift is sketched after this question list.)
What are the remaining gaps in the tooling or processes available for managing the lifecycle of machine learning projects?
What are the most interesting, innovative, or unexpected ways that you have seen the Iguazio platform used?
What are the most interesting, unexpected, or challenging lessons that you have learned while building and scaling the Iguazio platform and business?
When is Iguazio the wrong choice?
What do you have planned for the future of the platform?
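To ground the model drift question above: one common, generic way to quantify drift in a model’s inputs is the Population Stability Index (PSI), which compares a feature’s distribution at training time against a recent production window. The sketch below is illustrative only and is not Iguazio’s monitoring implementation; the bucket count and the usual alerting thresholds are conventions, not part of the platform.

```python
import numpy as np

def population_stability_index(expected, actual, buckets=10):
    """Compare two samples of one feature; a higher PSI means more drift.

    A common rule of thumb: PSI < 0.1 is stable, 0.1-0.25 is moderate
    drift, and > 0.25 is significant drift (conventions, not laws).
    """
    expected = np.asarray(expected, dtype=float)
    actual = np.asarray(actual, dtype=float)

    # Bucket edges come from the baseline's quantiles, widened at the
    # ends so production outliers still land in the outer buckets.
    edges = np.quantile(expected, np.linspace(0.0, 1.0, buckets + 1))
    edges[0] = min(edges[0], actual.min()) - 1e-9
    edges[-1] = max(edges[-1], actual.max()) + 1e-9

    expected_pct = np.histogram(expected, bins=edges)[0] / expected.size
    actual_pct = np.histogram(actual, bins=edges)[0] / actual.size

    # Clip empty buckets to avoid log(0) and division by zero.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)

    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

# Toy example: the production distribution has shifted and widened.
rng = np.random.default_rng(seed=42)
baseline = rng.normal(loc=0.0, scale=1.0, size=10_000)    # training time
production = rng.normal(loc=0.5, scale=1.2, size=10_000)  # live traffic
print(f"PSI: {population_stability_index(baseline, production):.3f}")
```

In practice a check like this would run on a schedule per feature, alerting when the score crosses a chosen threshold; input drift of this kind is only one contributor to model drift, alongside concept drift and upstream data quality issues.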
Contact Info
LinkedIn
@yaronhaviv on Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
Iguazio
MLOps
Oracle Exadata
SAP HANA
Mellanox
NVIDIA
Multi-Model Database
Nuclio
MLRun
Jupyter Notebook
Pandas
Scala
Feature Imputing
Feature Store
Parquet
Spark
Apache Flink
Podcast Episode
Apache Beam
NLP (Natural Language Processing)
Deep Learning
BERT
Airflow
Podcast.__init__ Episode
Dagster
Data Engineering Podcast Episode
Podcast.__init__ Episode
Kubeflow
Argo
AWS Step Functions
Presto/Trino
Podcast Episode
Dask
Podcast Episode
Hadoop
Sagemaker
Tecton
Podcast Episode
Seldon
DataRobot
RapidMiner
H2O.ai
Grafana
Storey
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast
