Datafold combines data-diff and data lineage analysis to improve visibility into, and impact analysis of, code changes in data platforms.
Testing and validation practices for data practitioners are still evolving, with dbt tests and tools like data-diff and data lineage analysis helping teams achieve quality and correctness of data.
Deep dives
Datafold's mission to automate testing for data and analytics engineers
Datafold is focused on automating testing for data and analytics engineers by providing tools that verify and validate the code data developers write, so that data teams can ship high-quality data products faster. It does this by combining two technologies: data-diff, an open-source tool for comparing tables and SQL query results, and data lineage analysis. data-diff lets data developers preview the changes they make to dbt models, so they are fully aware of the impact on the data produced. Data lineage analyzes metadata and query logs, and integrates with BI tools, to map the dependencies within the data platform. Used together, these technologies help data teams understand the impact of code changes on the entire data platform and add visibility to the code deployment process.
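The core idea behind table diffing can be illustrated with a toy sketch. This is not the actual data-diff API: the real tool pushes checksum queries down into the warehouses and bisects mismatched segments rather than hashing every row client-side, but the row-hash comparison below conveys the principle.

```python
import hashlib


def row_hash(row):
    """Stable hash of a row's values, similar in spirit to the
    checksums data-diff computes inside the warehouse."""
    return hashlib.md5("|".join(map(str, row)).encode()).hexdigest()


def diff_tables(table_a, table_b, key_index=0):
    """Return primary keys whose rows differ between two tables.

    Toy client-side version: hash every row and compare by key.
    Keys present in only one table also count as differences.
    """
    hashes_a = {row[key_index]: row_hash(row) for row in table_a}
    hashes_b = {row[key_index]: row_hash(row) for row in table_b}
    all_keys = hashes_a.keys() | hashes_b.keys()
    return sorted(k for k in all_keys if hashes_a.get(k) != hashes_b.get(k))


prod = [(1, "alice", 100), (2, "bob", 200), (3, "carol", 300)]
staging = [(1, "alice", 100), (2, "bob", 250), (4, "dave", 400)]
print(diff_tables(prod, staging))  # -> [2, 3, 4]
```

Key 2 has a changed value, key 3 exists only in production, and key 4 exists only in staging, so all three surface as differences; an unchanged row (key 1) does not.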
Challenges and roadblocks in data testing and validation
Testing in the data space has its challenges. A major one is the absence of a ground truth against which to judge whether data is accurate and complete, so the focus shifts to maintaining consistency over time: ensuring that data accurately represents the business reality and stays consistent as it changes. The complexity of data environments and the sheer number of tables, columns, and changes make manual testing and validation impractical. Assertion tests and SQL queries are commonly used, but on their own they cannot provide comprehensive coverage or handle the scale of modern data sets. Testing maturity in the data space also lags behind software engineering; data practitioners are still working out the best ways to test and validate data products and pipelines.
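The assertion-test style mentioned above can be sketched concretely. In the dbt convention, a data test is a SQL query that selects violating rows and passes when it returns none. The table and column names below are made up for illustration, and sqlite3 stands in for a warehouse.

```python
import sqlite3

# Build a tiny in-memory table with one deliberate duplicate key
# and one NULL foreign key, so both tests below have something to catch.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO orders VALUES (1, 10, 99.5), (2, 11, 15.0), (2, NULL, 20.0);
""")

# Each test is a query that returns the *violating* rows; empty result = pass.
tests = {
    "order_id_unique": """
        SELECT order_id FROM orders
        GROUP BY order_id HAVING COUNT(*) > 1
    """,
    "customer_id_not_null": """
        SELECT order_id FROM orders WHERE customer_id IS NULL
    """,
}

failures = {name: conn.execute(sql).fetchall() for name, sql in tests.items()}
for name, rows in failures.items():
    status = "PASS" if not rows else f"FAIL ({len(rows)} bad rows)"
    print(f"{name}: {status}")
```

Both tests fail here (order_id 2 is duplicated, and one row has a NULL customer_id), which is exactly the kind of violation assertion tests catch well; what they cannot do is tell you whether a code change silently altered values that still satisfy every assertion.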
Importance of testing and validation skills for data practitioners
Testing and validation skills for data practitioners are still evolving. With dbt established as a standard tool for building data pipelines, dbt tests have become an important practice. Tests such as assertions, referential integrity checks, and metric comparisons help ensure the quality and correctness of data. However, the complexity of data environments, especially in larger dbt projects, makes full test coverage hard to achieve, so practitioners currently rely on insight and intuition when choosing what to test and validate. Tools like data-diff and data lineage analysis address this gap by providing visibility into, and impact analysis of, code changes, helping data practitioners understand their effects across the entire data platform.
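One of the checks named above, referential integrity, is worth spelling out: dbt's built-in `relationships` test compiles to a query that finds child rows whose foreign key has no match in the parent table. The sketch below shows that compiled-SQL shape against illustrative tables in sqlite3.

```python
import sqlite3

# Parent/child tables with one deliberately orphaned foreign key:
# order 3 references customer 12, which does not exist.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (customer_id INTEGER PRIMARY KEY);
    INSERT INTO customers VALUES (10), (11);
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER);
    INSERT INTO orders VALUES (1, 10), (2, 11), (3, 12);
""")

# Referential-integrity check in the style of dbt's `relationships` test:
# select child rows whose key finds no parent; empty result means it passes.
orphans = conn.execute("""
    SELECT o.order_id, o.customer_id
    FROM orders o
    LEFT JOIN customers c USING (customer_id)
    WHERE c.customer_id IS NULL
""").fetchall()
print(orphans)  # -> [(3, 12)]
```

In a real dbt project this query is generated from a few lines of YAML on the model's schema file rather than written by hand.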
Building staging environments and automating testing with DBT projects
Staging environments and automation play a crucial role in testing and validating dbt projects. A staging environment lets developers test code changes before deployment, and dbt makes creating one straightforward through variables and substitutions in its environment configuration. The main challenge is ensuring that the data used in staging is representative of production. Techniques such as building against production data, running slim CI builds that only rebuild modified models, or leveraging zero-copy cloning (for example in Snowflake) help keep staging builds fast and faithful. Continuous integration (CI) processes are essential for automatically running tests against staging environments; CI pipelines can be driven by dbt Cloud or by standalone runners like GitHub Actions or CircleCI. By combining staging environments, automated testing, and CI, data teams can ensure the quality and reliability of their code changes.
Data engineering is all about building workflows, pipelines, systems, and interfaces to provide stable and reliable data. Your data can be stable and wrong, but then it isn't reliable. Confidence in your data is achieved through constant validation and testing. Datafold has invested a lot of time into integrating with the workflow of dbt projects to add early verification that the changes you are making are correct. In this episode Gleb Mezhanskiy shares some valuable advice and insights into how you can build reliable and well-tested data assets with dbt and data-diff.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their extensive library of integrations enable you to automatically send data to hundreds of downstream tools. Sign up free at dataengineeringpodcast.com/rudderstack
Your host is Tobias Macey and today I'm interviewing Gleb Mezhanskiy about how to test your dbt projects with Datafold
Interview
Introduction
How did you get involved in the area of data management?
Can you describe what Datafold is and what's new since we last spoke? (July 2021 and July 2022 about data-diff)
What are the roadblocks to data testing/validation that you see teams run into most often?
How does the tooling used contribute to/help address those roadblocks?
What are some of the error conditions/failure modes that data-diff can help identify in a dbt project?
What are some examples of tests that need to be implemented by the engineer?
In your experience working with data teams, what typically constitutes the "staging area" for a dbt project? (e.g. separate warehouse, namespaced tables, snowflake data copies, lakefs, etc.)
Given a dbt project that is well tested and has data-diff as part of the validation suite, what are the challenges that teams face in managing the feedback cycle of running those tests?
In application development there is the idea of the "testing pyramid", consisting of unit tests, integration tests, system tests, etc. What are the parallels to that in data projects?
What are the limitations of the data ecosystem that make testing a bigger challenge than it might otherwise be?
Beyond test execution, what are the other aspects of data health that need to be included in the development and deployment workflow of dbt projects? (e.g. freshness, time to delivery, etc.)
What are the most interesting, innovative, or unexpected ways that you have seen Datafold and/or data-diff used for testing dbt projects?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on dbt testing internally or with your customers?
When is Datafold/data-diff the wrong choice for dbt projects?
What do you have planned for the future of Datafold?
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on Apple Podcasts and tell your friends and co-workers
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?