Summary
In recent years the traditional approach to building data warehouses has shifted from transforming records before loading them (ETL) to transforming them after they land in the warehouse (ELT). As a result, the tooling for those transformations needs to be reimagined. The data build tool (dbt) is designed to bring battle-tested engineering practices to your analytics pipelines. By providing an opinionated set of best practices it simplifies collaboration and boosts confidence in the work your data team produces. In this episode Drew Banin, creator of dbt, explains how it got started, how it is designed, and how you can start using it today to create reliable and well-tested reports in your favorite data warehouse.
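For listeners new to dbt, the core idea is that every transformation is a SELECT statement saved as a .sql file, which dbt compiles and materializes inside the warehouse. Here is a minimal sketch, with hypothetical model names (fct_orders, stg_payments) rather than anything from the episode; ref() is the dbt function that wires models together into a dependency graph:

    -- models/fct_orders.sql (hypothetical model)
    {{ config(materialized='table') }}  -- ask dbt to build this model as a table

    select
        order_id,
        customer_id,
        sum(amount) as order_total
    from {{ ref('stg_payments') }}      -- ref() resolves to another model in the project
    group by 1, 2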
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
Understanding how your customers are using your product is critical for businesses of any size. To make it easier for startups to focus on delivering useful features Segment offers a flexible and reliable data infrastructure for your customer analytics and custom events. You only need to maintain one integration to instrument your code and get a future-proof way to send data to over 250 services with the flip of a switch. Not only does it free up your engineers’ time, it lets your business users decide what data they want where. Go to dataengineeringpodcast.com/segmentio today to sign up for their startup plan and get $25,000 in Segment credits and $1 million in free software from marketing and analytics companies like AWS, Google, and Intercom. On top of that you’ll get access to Analytics Academy for the educational resources you need to become an expert in data analytics for measuring product-market fit.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, and the Open Data Science Conference. Go to dataengineeringpodcast.com/conferences to learn more and take advantage of our partner discounts when you register.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Your host is Tobias Macey and today I’m interviewing Drew Banin about dbt, the data build tool, a toolkit for building analytics the way that developers build applications
Interview
Introduction
How did you get involved in the area of data management?
Can you start by explaining what dbt is and your motivation for creating it?
Where does it fit in the overall landscape of data tools and the lifecycle of data in an analytics pipeline?
Can you talk through the workflow for someone using dbt? (a sketch of the typical command-line loop follows this question list)
One of the useful features of dbt for the stability of analytics is the ability to write and execute tests. Can you explain how those are implemented? (a minimal test sketch also follows the list)
The packaging capabilities are beneficial for enabling collaboration. Can you talk through how the packaging system is implemented? (see the packages.yml sketch after the list)
Are these packages driven by Fishtown Analytics or the dbt community?
What are the limitations of modeling everything as a SELECT statement?
Making SQL code reusable is notoriously difficult. How does dbt’s Jinja templating address this issue, and what are its shortcomings? (a macro sketch follows the question list)
What are your thoughts on higher level approaches to SQL that compile down to the specific statements?
Can you explain how dbt is implemented and how the design has evolved since you first began working on it?
What are some of the features of dbt that are often overlooked but which you find particularly useful?
What are some of the most interesting/unexpected/innovative ways that you have seen dbt used?
What are the additional features that the commercial version of dbt provides?
What are some of the most useful or challenging lessons that you have learned in the process of building and maintaining dbt?
When is it the wrong choice?
What do you have planned for the future of dbt?
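To ground the workflow question above, the day-to-day loop with the dbt command line looks roughly like this (a sketch only; the project name is hypothetical and the exact flags vary by version):

    dbt init my_project     # scaffold a new dbt project
    dbt deps                # install packages declared in packages.yml
    dbt run                 # compile models to SQL and execute them in the warehouse
    dbt test                # run the project's schema and data tests
    dbt docs generate       # build the documentation site for the project

Connection details live in a profiles.yml file, so the same project can be pointed at development and production targets.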
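For the testing question, dbt ships with schema tests (unique, not_null, accepted_values, relationships) that are declared in YAML next to the models; each one compiles to a query that selects offending rows and fails the run if any come back. A minimal sketch with hypothetical model and column names:

    # models/schema.yml
    version: 2
    models:
      - name: fct_orders
        columns:
          - name: order_id
            tests:
              - unique    # fail if any order_id appears more than once
              - not_null  # fail if any order_id is missing

Custom data tests are plain SELECT statements saved under the tests/ directory that return whichever rows violate an assertion.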
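For the packaging questions, a dbt package is itself just a dbt project; dependencies are declared in a packages.yml file and installed with dbt deps. A sketch, using an illustrative version pin and a hypothetical git dependency:

    # packages.yml
    packages:
      - package: fishtown-analytics/dbt_utils  # published on hub.getdbt.com
        version: 0.2.4                         # illustrative pin
      - git: https://github.com/your-org/internal-dbt-package.git  # hypothetical
        revision: master

Once installed, the macros and models a package ships can be called from your own project like any local code.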
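And for the Jinja question, repeated SQL fragments can be factored into macros that are expanded at compile time. A minimal, hypothetical sketch:

    -- macros/cents_to_dollars.sql (hypothetical macro)
    {% macro cents_to_dollars(column_name) %}
        ({{ column_name }} / 100.0)
    {% endmacro %}

    -- usage in any model:
    select {{ cents_to_dollars('amount') }} as amount_dollars
    from {{ ref('stg_payments') }}

The expansion is purely textual, which hints at the shortcomings the question raises: a macro knows nothing about SQL semantics or the types of the expressions it splices together.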
Contact Info
Email
@drewbanin on Twitter
drewbanin on GitHub
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
dbt
Fishtown Analytics
8Tracks Internet Radio
Redshift
Magento
Stitch Data
Fivetran
Airflow
Business Intelligence
Jinja template language
BigQuery
Snowflake
Version Control
Git
Continuous Integration
Test Driven Development
Snowplow Analytics
Podcast Episode
dbt-utils
We Can Do Better Than SQL blog post from EdgeDB
EdgeDB
Looker LookML
Podcast Interview
Presto DB
Podcast Interview
Spark SQL
Hive
Azure SQL Data Warehouse
Data Warehouse
Data Lake
Data Council Conference
Slowly Changing Dimensions
dbt Archival
Mode Analytics
Periscope BI
dbt docs
dbt repository
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast