
Data Engineering Podcast
Adding Anomaly Detection And Observability To Your dbt Projects Is Elementary
Podcast summary created with Snipd AI
Quick takeaways
- Incorporating observability into dbt projects improves data validation, operational insight, and metadata utilization.
- Monitoring metrics and understanding failures in dbt projects is challenging, which makes effective data observability implementation essential.
Deep dives
Data Lake Management and Analytics with Starburst and Dagster
Starburst powers petabyte-scale SQL analytics on the data lake, offering adaptability and flexibility on an open architecture. Dagster provides a cloud-native orchestrator for data pipelines with integrated lineage and observability. Teams can get up and running in minutes with Dagster Cloud for serverless and hybrid deployments. Both platforms enhance data workflows and help teams manage data pipelines efficiently.
Observability in dbt-Oriented Workflows
Elementary CEO Maayan Salom discusses the importance of incorporating observability into dbt projects. She highlights the simplicity needed in observability tools for dbt workflows and addresses aspects like data validation, operational insights, and metadata utilization. Teams rely on tools like dbt tests and external platforms for monitoring and enhancement, ensuring effective management of transformations and their SQL context.
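As an illustration of how such tests slot into the workflow, Elementary's anomaly-detection tests ship as a dbt package and are declared in a model's properties file alongside built-in dbt tests. This is a minimal sketch: the test names follow Elementary's documented tests, but the model and column names here are hypothetical.

```yaml
# models/schema.yml -- hypothetical model; Elementary tests sit next to
# built-in dbt tests like not_null.
version: 2

models:
  - name: orders
    tests:
      - elementary.volume_anomalies        # alert on unusual row-count changes
      - elementary.freshness_anomalies     # alert on late-arriving data
    columns:
      - name: order_total
        tests:
          - not_null
          - elementary.column_anomalies:
              column_anomalies:            # which column metrics to monitor
                - null_count
                - average
```

Because the tests live in the same YAML as the rest of the project, they run with `dbt test` and version alongside the models they monitor.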
Challenges in Data Observability for dbt Projects
Teams face challenges in monitoring metrics, understanding failures, and adopting DIY approaches for insight into their dbt projects. Common methods include parsing log files, adding orchestrator steps, and integrating with external tools. The choice between dbt Cloud and a self-hosted CLI also affects teams' visibility and scaling decisions, underscoring the need for effective data observability implementation.
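A common DIY starting point is parsing the `run_results.json` artifact that dbt writes after each invocation. The sketch below, with a hypothetical trimmed-down payload, shows the general idea; field names follow dbt's run results schema, but the model names are invented for illustration.

```python
import json

# Hypothetical, trimmed-down excerpt of dbt's target/run_results.json.
# A real payload also carries metadata, timing breakdowns, and messages.
RAW = """
{
  "results": [
    {"unique_id": "model.analytics.orders",    "status": "success", "execution_time": 12.4},
    {"unique_id": "model.analytics.customers", "status": "error",   "execution_time": 3.1},
    {"unique_id": "test.analytics.not_null_orders_id", "status": "pass", "execution_time": 0.8}
  ]
}
"""

def summarize_run(run_results: dict) -> dict:
    """Collect failed node ids and total execution time from a run results payload."""
    results = run_results.get("results", [])
    failed = [r["unique_id"] for r in results
              if r.get("status") in ("error", "fail")]
    total_time = sum(r.get("execution_time", 0.0) for r in results)
    return {"failed": failed, "total_time": round(total_time, 1)}

summary = summarize_run(json.loads(RAW))
print(summary)
```

In practice a script like this would `json.load` the file from the `target/` directory after each run and push the summary to an alerting channel; the fragility of maintaining such glue code is exactly the shortcoming the episode discusses.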
Advanced Observability Tooling and Data Quality for dbt Workflows
Elementary's toolchain aids automated data warehouse cleanup, cost analysis, and migration validation to enhance data quality. Teams use the platform to monitor trends and create contextual insights that improve decision-making. Lessons learned emphasize user feedback, contextual understanding, and a continued focus on empowering data professionals for effective data observability in the dbt ecosystem.
Summary
Working with data is a complicated process, with numerous chances for something to go wrong. Identifying and accounting for those errors is a critical piece of building trust across the organization that your data is accurate and up to date. While there are numerous products available to provide that visibility, they all have different technologies and workflows that they focus on. To bring observability to dbt projects, the team at Elementary embedded themselves into the workflow. In this episode Maayan Salom explores the approach that she has taken to bring observability, enhanced testing capabilities, and anomaly detection into every step of the dbt developer experience.
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst powers petabyte-scale SQL analytics fast, at a fraction of the cost of traditional methods, so that you can meet all your data needs ranging from AI to data applications to complete analytics. Trusted by teams of all sizes, including Comcast and Doordash, Starburst is a data lake analytics platform that delivers the adaptability and flexibility a lakehouse ecosystem promises. And Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.
- Dagster offers a new approach to building and running data platforms and data pipelines. It is an open-source, cloud-native orchestrator for the whole development lifecycle, with integrated lineage and observability, a declarative programming model, and best-in-class testability. Your team can get up and running in minutes thanks to Dagster Cloud, an enterprise-class hosted solution that offers serverless and hybrid deployments, enhanced security, and on-demand ephemeral test deployments. Go to dataengineeringpodcast.com/dagster today to get started. Your first 30 days are free!
- This episode is brought to you by Datafold – a testing automation platform for data engineers that prevents data quality issues from entering every part of your data workflow, from migration to dbt deployment. Datafold has recently launched data replication testing, providing ongoing validation for source-to-target replication. Leverage Datafold's fast cross-database data diffing and Monitoring to test your replication pipelines automatically and continuously. Validate consistency between source and target at any scale, and receive alerts about any discrepancies. Learn more about Datafold by visiting dataengineeringpodcast.com/datafold.
- Your host is Tobias Macey and today I'm interviewing Maayan Salom about how to incorporate observability into a dbt-oriented workflow and how Elementary can help
Interview
- Introduction
- How did you get involved in the area of data management?
- Can you start by outlining what elements of observability are most relevant for dbt projects?
- What are some of the common ad-hoc/DIY methods that teams develop to acquire those insights?
- What are the challenges/shortcomings associated with those approaches?
- Over the past ~3 years, numerous data observability systems/products have been created. What are some of the ways that the specifics of dbt workflows are not covered by those generalized tools?
- What are the insights that can be more easily generated by embedding into the dbt toolchain and development cycle?
- Can you describe what Elementary is and how it is designed to enhance the development and maintenance work in dbt projects?
- How is Elementary designed/implemented?
- How have the scope and goals of the project changed since you started working on it?
- What are the engineering challenges/frustrations that you have dealt with in the creation and evolution of Elementary?
- Can you talk us through the setup and workflow for teams adopting Elementary in their dbt projects?
- How does the incorporation of Elementary change the development habits of the teams who are using it?
- What are the most interesting, innovative, or unexpected ways that you have seen Elementary used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Elementary?
- When is Elementary the wrong choice?
- What do you have planned for the future of Elementary?
Contact Info
Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
Links
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Sponsored By:
- Starburst:  This episode is brought to you by Starburst - a data lake analytics platform for data engineers who are battling to build and scale high quality data pipelines on the data lake. Powered by Trino, Starburst runs petabyte-scale SQL analytics fast at a fraction of the cost of traditional methods, helping you meet all your data needs ranging from AI/ML workloads to data applications to complete analytics. Trusted by the teams at Comcast and Doordash, Starburst delivers the adaptability and flexibility a lakehouse ecosystem promises, while providing a single point of access for your data and all your data governance allowing you to discover, transform, govern, and secure all in one place. Starburst does all of this on an open architecture with first-class support for Apache Iceberg, Delta Lake and Hudi, so you always maintain ownership of your data. Want to see Starburst in action? Try Starburst Galaxy today, the easiest and fastest way to get started using Trino, and get $500 of credits free. [dataengineeringpodcast.com/starburst](https://www.dataengineeringpodcast.com/starburst)
- Datafold:  This episode is brought to you by Datafold – a testing automation platform for data engineers that prevents data quality issues from entering every part of your data workflow, from migration to dbt deployment. Datafold has recently launched data replication testing, providing ongoing validation for source-to-target replication. Leverage Datafold's fast cross-database data diffing and Monitoring to test your replication pipelines automatically and continuously. Validate consistency between source and target at any scale, and receive alerts about any discrepancies. Learn more about Datafold by visiting https://get.datafold.com/replication-de-podcast.
- Dagster:  Data teams are tasked with helping organizations deliver on the promise of data, and with ML and AI maturing rapidly, expectations have never been this high. However data engineers are challenged by both technical complexity and organizational complexity, with heterogeneous technologies to adopt, multiple data disciplines converging, legacy systems to support, and costs to manage. Dagster is an open-source orchestration solution that helps data teams rein in this complexity and build data platforms that provide unparalleled observability and testability, all while fostering collaboration across the enterprise. With enterprise-grade hosting on Dagster Cloud, you gain even more capabilities, adding cost management, security, and CI support to further boost your teams' productivity. Go to [dagster.io](https://dagster.io/lp/dagster-cloud-trial?source=data-eng-podcast) today to get your first 30 days free!