

Data Engineering Podcast
Tobias Macey
This show goes behind the scenes of the tools, techniques, and difficulties associated with the discipline of data engineering. Databases, workflows, automation, and data manipulation are just some of the topics that you will find here.
Episodes
Mentioned books

Jan 19, 2021 • 60min
Using Your Data Warehouse As The Source Of Truth For Customer Data With Hightouch
Summary
The data warehouse has become the central component of the modern data stack. Building on this pattern, the team at Hightouch have created a platform that synchronizes information about your customers out to third party systems for use by marketing and sales teams. In this episode Tejas Manohar explains the benefits of sourcing customer data from one location for all of your organization to use, the technical challenges of synchronizing the data to external systems with varying APIs, and the workflow for enabling self-service access to your customer data by your marketing teams. This is an interesting conversation about the importance of the data warehouse and how it can be used beyond just internal analytics.
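For readers unfamiliar with the pattern discussed in this episode (often called "reverse ETL"), the sketch below shows in rough terms what a warehouse-to-SaaS sync loop can look like: query the warehouse for the current audience, diff it against what was synced last time, and push only the changes so that downstream API rate limits are respected. The table, columns, and push_to_crm function are hypothetical placeholders for illustration, not Hightouch's actual implementation or API.

```python
import sqlite3  # stand-in for a real warehouse connection (Snowflake, BigQuery, etc.)

def fetch_audience(conn):
    """Pull the current customer audience from a (hypothetical) warehouse model."""
    rows = conn.execute(
        "SELECT email, plan, lifetime_value FROM dim_customers WHERE is_active = 1"
    ).fetchall()
    return {email: {"plan": plan, "lifetime_value": ltv} for email, plan, ltv in rows}

def diff_records(desired, existing):
    """Return only the records that need to be created or updated downstream."""
    return {email: attrs for email, attrs in desired.items() if existing.get(email) != attrs}

def push_to_crm(changes):
    """Placeholder for an upsert call against a CRM or marketing API."""
    for email, attrs in changes.items():
        print(f"UPSERT {email}: {attrs}")  # a real sync would batch requests and respect rate limits

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE dim_customers (email TEXT, plan TEXT, lifetime_value REAL, is_active INTEGER)")
    conn.execute("INSERT INTO dim_customers VALUES ('a@example.com', 'pro', 120.0, 1)")
    desired = fetch_audience(conn)
    previously_synced = {}  # state from the last run; warehouse-sync tools persist this between runs
    push_to_crm(diff_records(desired, previously_synced))
```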
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
This episode of Data Engineering Podcast is sponsored by Datadog, a unified monitoring and analytics platform built for developers, IT operations teams, and businesses in the cloud age. Datadog provides customizable dashboards, log management, and machine-learning-based alerts in one fully-integrated platform so you can seamlessly navigate, pinpoint, and resolve performance issues in context. Monitor all your databases, cloud services, containers, and serverless functions in one place with Datadog’s 400+ vendor-backed integrations. If an outage occurs, Datadog provides seamless navigation between your logs, infrastructure metrics, and application traces in just a few clicks to minimize downtime. Try it yourself today by starting a free 14-day trial and receive a Datadog t-shirt after installing the agent. Go to dataengineeringpodcast.com/datadog today to see how you can enhance visibility into your stack with Datadog.
Your host is Tobias Macey and today I’m interviewing Tejas Manohar about Hightouch, a data platform that helps you sync your customer data from your data warehouse to your CRM, marketing, and support tools
Interview
Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of what you are building at Hightouch and your motivation for creating it?
What are the main points of friction for teams who are trying to make use of customer data?
Where is Hightouch positioned in the ecosystem of customer data tools such as Segment, Mixpanel, Amplitude, etc.?
Who are the target users of Hightouch?
How has that influenced the design of the platform?
What are the baseline attributes necessary for Hightouch to populate downstream systems?
What are the data modeling considerations that users need to be aware of when sending data to other platforms?
Can you describe how Hightouch is architected?
How has the design of the platform evolved since you first began working on it?
What goals or assumptions did you have when you first began building Hightouch that have been modified or invalidated once you began working with customers?
Can you talk through the workflow of using Hightouch to propagate data to other platforms?
How do you keep data up to date between the warehouse and downstream systems?
What are the upstream systems that users need to have in place to make Hightouch a viable and effective tool?
What are the benefits of using the data warehouse as the source of truth for downstream services?
What are the trends in data warehousing that you are keeping a close eye on?
What are you most excited for?
Are there any that you find worrisome?
What are some of the most interesting, unexpected, or innovative ways that you have seen Hightouch used?
What are the most interesting, unexpected, or challenging lessons that you have learned while building Hightouch?
When is Hightouch the wrong choice?
What do you have planned for the future of the platform?
Contact Info
LinkedIn
@tejasmanohar on Twitter
tejasmanohar on GitHub
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
Hightouch
Segment
Podcast Episode
DBT
Podcast Episode
Looker
Podcast Episode
Change Data Capture
Podcast Episode
Database Trigger
Materialize
Podcast Episode
Flink
Podcast Episode
Zapier
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Jan 11, 2021 • 58min
Enabling Version Controlled Data Collaboration With TerminusDB
Summary
As data professionals we have a number of tools available for storing, processing, and analyzing data. We also have tools for collaborating on software and analysis, but collaborating on data is still an underserved capability. Gavin Mendel-Gleason encountered this problem first hand while working on the Seshat databank, leading him to create TerminusDB and TerminusHub. In this episode he explains how the TerminusDB system is architected to provide a versioned graph storage engine that allows for branching and merging of data sets, and how that opens up new possibilities for individuals and teams to work together on building new data repositories. This is a fascinating conversation about the technical challenges involved, the opportunities that such a system provides, and the complexities inherent to building a successful business on open source.
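To make the branching-and-merging idea concrete, here is a toy sketch of a three-way merge over two branches of a fact set that diverged from a common base: changes made on only one side are kept, and keys changed differently on both sides are reported as conflicts. This is purely illustrative and does not reflect TerminusDB's actual storage model or API.

```python
def three_way_merge(base, ours, theirs):
    """Merge two branches of a fact set against their common ancestor.

    Each argument is a dict of {key: value}. A key changed on only one side
    takes that side's value; a key changed differently on both sides is a conflict.
    """
    merged, conflicts = dict(base), []
    for key in set(base) | set(ours) | set(theirs):
        b, o, t = base.get(key), ours.get(key), theirs.get(key)
        if o == t:            # same on both branches (or unchanged)
            merged[key] = o
        elif o == b:          # only "theirs" changed it
            merged[key] = t
        elif t == b:          # only "ours" changed it
            merged[key] = o
        else:                 # both changed it differently
            conflicts.append(key)
    # drop keys that were deleted on the winning side
    merged = {k: v for k, v in merged.items() if v is not None}
    return merged, conflicts

base   = {"capital:France": "Paris", "population:Paris": 2_100_000}
ours   = {"capital:France": "Paris", "population:Paris": 2_150_000}
theirs = {"capital:France": "Paris", "population:Paris": 2_100_000, "capital:Ireland": "Dublin"}
print(three_way_merge(base, ours, theirs))
```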
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Do you want to get better at Python? Now is an excellent time to take an online course. Whether you’re just learning Python or you’re looking for deep dives on topics like APIs, memory management, async and await, and more, our friends at Talk Python Training have a top-notch course for you. If you’re just getting started, be sure to check out the Python for Absolute Beginners course. It’s like the first year of computer science that you never took compressed into 10 fun hours of Python coding and problem solving. Go to dataengineeringpodcast.com/talkpython today and get 10% off the course that will help you find your next level. That’s dataengineeringpodcast.com/talkpython, and don’t forget to thank them for supporting the show.
You invest so much in your data infrastructure – you simply can’t afford to settle for unreliable data. Fortunately, there’s hope: in the same way that New Relic, DataDog, and other Application Performance Management solutions ensure reliable software and keep application downtime at bay, Monte Carlo solves the costly problem of broken data pipelines. Monte Carlo’s end-to-end Data Observability Platform monitors and alerts for data issues across your data warehouses, data lakes, ETL, and business intelligence. The platform uses machine learning to infer and learn your data, proactively identify data issues, assess its impact through lineage, and notify those who need to know before it impacts the business. By empowering data teams with end-to-end data reliability, Monte Carlo helps organizations save time, increase revenue, and restore trust in their data. Visit dataengineeringpodcast.com/montecarlo today to request a demo and see how Monte Carlo delivers data observability across your data infrastructure. The first 25 will receive a free, limited edition Monte Carlo hat!
Your host is Tobias Macey and today I’m interviewing Gavin Mendel-Gleason about TerminusDB, an open source model driven graph database for knowledge graph representation
Interview
Introduction
How did you get involved in the area of data management?
Can you start by describing what TerminusDB is and what motivated you to build it?
What are the use cases that TerminusDB and TerminusHub are designed for?
There are a number of different reasons and methods for versioning data, such as the work being done with Datomic, LakeFS, DVC, etc. Where does TerminusDB fit in relation to those and other data versioning systems that are available today?
Can you describe how TerminusDB is implemented?
How has the design changed or evolved since you first began working on it?
What was the decision process and design considerations that led you to choose Prolog as the implementation language?
One of the challenges that have faced other knowledge engines built around RDF is that of scale and performance. How are you addressing those difficulties in TerminusDB?
What are the scaling factors and limitations for TerminusDB? (e.g. volumes of data, clustering, etc.)
How does the use of RDF triples and JSON-LD impact the audience for TerminusDB?
How much overhead is incurred by maintaining a long history of changes for a database?
How do you handle garbage collection/compaction of versions?
How does the availability of branching and merging strategies change the approach that data teams take when working on a project?
What are the edge cases in merging and conflict resolution, and what tools does TerminusDB/TerminusHub provide for working through those situations?
What are some useful strategies that teams should be aware of for working effectively with collaborative datasets in TerminusDB?
Another interesting element of the TerminusDB platform is the query language. What did you use as inspiration for designing it and how much of a learning curve is involved?
What are some of the most interesting, innovative, or unexpected ways that you have seen TerminusDB used?
What are the most interesting, unexpected, or challenging lessons that you have learned while building TerminusDB and TerminusHub?
When is TerminusDB the wrong choice?
What do you have planned for the future of the project?
Contact Info
@GavinMGleason on Twitter
LinkedIn
GavinMendelGleason on GitHub
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
TerminusDB
TerminusHub
Cheminformatics
Type Theory
Graph Database
Trinity College Dublin
Seshat Databank: analytics over civilizations in history
PostgreSQL
DGraph
Grakn
Neo4J
Datomic
LakeFS
DVC
Dolt
Persistent Succinct Data Structure
Currying
Prolog
WOQL TerminusDB query language
RDF
JSON-LD
Semantic Web
Property Graph
Hypergraph
Super Node
Bloom Filters
Data Curation
Podcast Episode
CRDT == Conflict-Free Replicated Data Types
Podcast Episode
SPARQL
Datalog
AST == Abstract Syntax Tree
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Jan 5, 2021 • 48min
Bringing Feature Stores and MLOps to the Enterprise at Tecton
Kevin Stumpf, Co-founder and CTO of Tecton, discusses the evolution of feature stores and their essential role in modern machine learning ops. He shares insights from his experience with Uber's Michelangelo platform and explains how Tecton simplifies feature creation for data scientists. Topics include the architecture of Tecton, the importance of observability in data management, and the challenges of integrating machine learning workflows. Stumpf also touches on the balance between open-source and enterprise solutions in the ever-evolving data landscape.
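As background for listeners new to the term, a feature store's core promise is that the same feature definition feeds both offline training sets and low-latency online lookups. The toy sketch below illustrates that dual read path, with an in-memory dict standing in for the online store; the feature, events, and names are invented for illustration and are not Tecton's API.

```python
from datetime import datetime, timedelta

# A "feature definition": how to compute the value from raw events.
def purchases_last_7d(events, user_id, as_of):
    window_start = as_of - timedelta(days=7)
    return sum(1 for e in events if e["user_id"] == user_id and window_start <= e["ts"] <= as_of)

events = [
    {"user_id": "u1", "ts": datetime(2020, 12, 30)},
    {"user_id": "u1", "ts": datetime(2021, 1, 2)},
]

# Offline path: compute point-in-time correct values for a training set.
training_rows = [
    {"user_id": "u1", "label": 1, "purchases_7d": purchases_last_7d(events, "u1", datetime(2021, 1, 3))}
]

# Online path: the same definition is materialized into a key-value store for serving.
online_store = {"u1": {"purchases_7d": purchases_last_7d(events, "u1", datetime(2021, 1, 3))}}

print(training_rows)
print(online_store.get("u1"))
```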

Dec 28, 2020 • 34min
Off The Shelf Data Governance With Satori
Summary
One of the core responsibilities of data engineers is to manage the security of the information that they process. The team at Satori has a background in cybersecurity and they are using the lessons that they learned in that field to address the challenge of access control and auditing for data governance. In this episode co-founder and CTO Yoav Cohen explains how the Satori platform provides a proxy layer for your data, the challenges of managing security across disparate storage systems, and their approach to building a dynamic data catalog based on the records that your organization is actually using. This is an interesting conversation about the intersection of data and security and the lessons that can be learned in each direction.
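To illustrate the proxy idea at a very high level, the sketch below wraps query execution so that every query is recorded for audit and the returned columns are checked against a naive sensitivity classifier, which is roughly how a usage-driven catalog can be seeded from what people actually query. The classifier rules and schema are made up for the example and are not Satori's implementation.

```python
import re
import sqlite3
import time

SENSITIVE_PATTERNS = {"email": r"email", "phone": r"phone|msisdn", "ssn": r"\bssn\b|social_security"}
audit_log, data_catalog = [], {}

def classify(column):
    """Return the sensitivity labels whose patterns match this column name."""
    return [label for label, pat in SENSITIVE_PATTERNS.items() if re.search(pat, column, re.I)]

def proxy_execute(conn, user, sql):
    """Run a query on behalf of a user, recording what was accessed and what it contains."""
    cursor = conn.execute(sql)
    columns = [d[0] for d in cursor.description]
    for col in columns:
        labels = classify(col)
        if labels:
            data_catalog[col] = labels  # the catalog is built from real usage, not a manual inventory
    audit_log.append({"user": user, "sql": sql, "columns": columns, "ts": time.time()})
    return cursor.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT, phone TEXT)")
proxy_execute(conn, "analyst@example.com", "SELECT id, email FROM users")
print(data_catalog)              # {'email': ['email']}
print(audit_log[0]["columns"])   # ['id', 'email']
```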
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Your host is Tobias Macey and today I’m interviewing Yoav Cohen about Satori, a data access service to monitor, classify and control access to sensitive data
Interview
Introduction
How did you get involved in the area of data management?
Can you start by describing what you have built at Satori?
What is the story behind the product and company?
How does Satori compare to other tools and products for managing access control and governance for data assets?
What are the biggest challenges that organizations face in establishing and enforcing policies for their data?
What are the main goals for the Satori product and what use cases does it enable?
Can you describe how the Satori platform is architected?
How has the design of the platform evolved since you first began working on it?
How have your experiences working in cyber security informed your approach to data governance?
How does the design of the Satori platform simplify technical aspects of data governance?
What aspects of governance do you delegate to other systems or platforms?
What elements of data infrastructure does Satori integrate with?
For someone who is adopting Satori, what is involved in getting it deployed and set up with their existing data platforms?
What do you see as being the most complex or underserved aspects of data governance?
How much of that complexity is inherent to the problem vs. being a result of how the industry has evolved?
What are some of the most interesting, innovative, or unexpected ways that you have seen the Satori platform used?
What are the most interesting, unexpected, or challenging lessons that you have learned while building Satori?
When is Satori the wrong choice?
What do you have planned for the future of the platform?
Contact Info
LinkedIn
@yoavcohen on Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
Satori
Data Governance
Data Masking
TLS == Transport Layer Security
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Dec 21, 2020 • 54min
Low Friction Data Governance With Immuta
Summary
Data governance is a term that encompasses a wide range of responsibilities, both technical and process oriented. One of the more complex aspects is that of access control to the data assets that an organization is responsible for managing. The team at Immuta has built a platform that aims to tackle that problem in a flexible and maintainable fashion so that data teams can easily integrate authorization, data masking, and privacy enhancing technologies into their data infrastructure. In this episode Steve Touw and Stephen Bailey share what they have built at Immuta, how it is implemented, and how it streamlines the workflow for everyone involved in working with sensitive data. If you are starting down the path of implementing a data governance strategy then this episode will provide a great overview of what is involved.
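As a concrete, if simplified, picture of what policy-driven masking can mean in practice, the snippet below applies a different masking strategy per column based on a small policy mapping: hashing preserves joinability, redaction hides the value entirely, and generalization keeps only a coarse prefix. The policy and columns are hypothetical examples, not Immuta's actual policy language.

```python
import hashlib

def mask_hash(value, salt="demo-salt"):
    """Deterministic hash: hides the raw value but keeps equality joins working."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

def mask_redact(_value):
    return "REDACTED"

def mask_zip_prefix(value):
    """Generalize a postal code to its first three characters."""
    return value[:3] + "XX"

POLICY = {"email": mask_hash, "ssn": mask_redact, "zip": mask_zip_prefix}

def apply_policy(row, policy=POLICY):
    """Apply the per-column masking policy; columns without a rule pass through unchanged."""
    return {col: policy.get(col, lambda v: v)(val) for col, val in row.items()}

print(apply_policy({"email": "a@example.com", "ssn": "123-45-6789", "zip": "02139", "plan": "pro"}))
```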
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Feature flagging is a simple concept that enables you to ship faster, test in production, and do easy rollbacks without redeploying code. Teams using feature flags release new software with less risk, and release more often. ConfigCat is a feature flag service that lets you easily add flags to your Python code, and 9 other platforms. By adopting ConfigCat you and your manager can track and toggle your feature flags from their visual dashboard without redeploying any code or configuration, including granular targeting rules. You can roll out new features to a subset of your users for beta testing or canary deployments. With their simple API, clear documentation, and pricing that is independent of your team size you can get your first feature flags added in minutes without breaking the bank. Go to dataengineeringpodcast.com/configcat today to get 35% off any paid plan with code DATAENGINEERING or try out their free forever plan.
You invest so much in your data infrastructure – you simply can’t afford to settle for unreliable data. Fortunately, there’s hope: in the same way that New Relic, DataDog, and other Application Performance Management solutions ensure reliable software and keep application downtime at bay, Monte Carlo solves the costly problem of broken data pipelines. Monte Carlo’s end-to-end Data Observability Platform monitors and alerts for data issues across your data warehouses, data lakes, ETL, and business intelligence. The platform uses machine learning to infer and learn your data, proactively identify data issues, assess its impact through lineage, and notify those who need to know before it impacts the business. By empowering data teams with end-to-end data reliability, Monte Carlo helps organizations save time, increase revenue, and restore trust in their data. Visit dataengineeringpodcast.com/montecarlo today to request a demo and see how Monte Carlo delivers data observability across your data infrastructure. The first 25 will receive a free, limited edition Monte Carlo hat!
Your host is Tobias Macey and today I’m interviewing Steve Touw and Stephen Bailey about Immuta and how they work to automate data governance
Interview
Introduction
How did you get involved in the area of data management?
Can you start by describing what you have built at Immuta and your motivation for starting the company?
What is data governance?
How much of data governance can be solved with technology and how much is a matter of process and communication?
What does the current landscape of data governance solutions look like?
What are the motivating factors that would lead someone to choose Immuta as a component of their data governance strategy?
How does Immuta integrate with the broader ecosystem of data tools and platforms?
What other workflows or activities are necessary outside of Immuta to ensure a comprehensive governance/compliance strategy?
What are some of the common blind spots when it comes to data governance?
How is the Immuta platform architected?
How have the design and goals of the system evolved since you first started building it?
What is involved in adopting Immuta for an existing data platform?
Once an organization has integrated Immuta, what are the workflows for the different stakeholders of the data?
What are the biggest challenges in automated discovery/identification of sensitive data?
How does the evolution of what qualifies as sensitive complicate those efforts?
How do you approach the challenge of providing a unified interface for access control and auditing across different systems (e.g. BigQuery, Snowflake, RedShift, etc.)?
What are the complexities that creep into data masking?
What are some alternatives for obfuscating and managing access to sensitive information?
How do you handle managing access control/masking/tagging for derived data sets?
What are some of the most interesting, unexpected, or challenging lessons that you have learned while building Immuta?
When is Immuta the wrong choice?
What do you have planned for the future of the platform and business?
Contact Info
Steve
LinkedIn
@steve_touw on Twitter
Stephen
LinkedIn
Website
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
Immuta
Data Governance
Data Catalog
Snowflake DB
Podcast Episode
Looker
Podcast Episode
Collibra
ABAC == Attribute Based Access Control
RBAC == Role Based Access Control
Paul Ohm: Broken Promises of Privacy
PET == Privacy Enhancing Technologies
K Anonymization
Differential Privacy
LDAP == Lightweight Directory Access Protocol
Active Directory
COVID Alliance
HIPAA
GDPR
CCPA
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Dec 15, 2020 • 1h 5min
Building A Self Service Data Platform For Alternative Data Analytics At YipitData
Summary
As a data engineer you’re familiar with the process of collecting data from databases, customer data platforms, APIs, etc. At YipitData they rely on a variety of alternative data sources to inform investment decisions by hedge funds and businesses. In this episode Andrew Gross, Bobby Muldoon, and Anup Segu describe the self service data platform that they have built to allow data analysts to own the end-to-end delivery of data projects and how that has allowed them to scale their output. They share the journey that they went through to build a scalable and maintainable system for web scraping, how to make it reliable and resilient to errors, and the lessons that they learned in the process. This was a great conversation about real world experiences in building a successful data-oriented business.
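The fragility of web scraping that the episode touches on is usually handled with retries, backoff, and persisting raw responses before parsing. Below is a minimal sketch of that pattern using the requests library; the URL and output path are placeholders, and this is not the Readypipe implementation described in the episode.

```python
import time
import requests

def fetch_with_retries(url, attempts=4, backoff=2.0):
    """Fetch a page, retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            resp = requests.get(url, timeout=10)
            resp.raise_for_status()
            return resp.text
        except requests.RequestException:
            if attempt == attempts - 1:
                raise
            time.sleep(backoff ** attempt)  # waits 1s, 2s, 4s, ...

def scrape(url):
    html = fetch_with_retries(url)
    # Persist the raw payload first so parsing bugs never lose source data.
    with open("raw_snapshot.html", "w", encoding="utf-8") as f:
        f.write(html)
    return html

if __name__ == "__main__":
    scrape("https://example.com/")  # placeholder target
```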
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Are you bogged down by having to manually manage data access controls, repeatedly move and copy data, and create audit reports to prove compliance? How much time could you save if those tasks were automated across your cloud platforms? Immuta is an automated data governance solution that enables safe and easy data analytics in the cloud. Our comprehensive data-level security, auditing and de-identification features eliminate the need for time-consuming manual processes and our focus on data and compliance team collaboration empowers you to deliver quick and valuable data analytics on the most sensitive data to unlock the full potential of your cloud data platforms. Learn how we streamline and accelerate manual processes to help you derive real results from your data at dataengineeringpodcast.com/immuta.
Today’s episode of the Data Engineering Podcast is sponsored by Datadog, a SaaS-based monitoring and analytics platform for cloud-scale infrastructure, applications, logs, and more. Datadog uses machine-learning based algorithms to detect errors and anomalies across your entire stack—which reduces the time it takes to detect and address outages and helps promote collaboration between Data Engineering, Operations, and the rest of the company. Go to dataengineeringpodcast.com/datadog today to start your free 14 day trial. If you start a trial and install Datadog’s agent, Datadog will send you a free T-shirt.
Your host is Tobias Macey and today I’m interviewing Andrew Gross, Bobby Muldoon, and Anup Segu about how they are building pipelines at YipitData
Interview
Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of what YipitData does?
What kinds of data sources and data assets are you working with?
What is the composition of your data teams and how are they structured?
Given the use of your data products in the financial sector how do you handle monitoring and alerting around data quality?
For web scraping in particular, given how fragile it can be, what have you done to make it a reliable and repeatable part of the data pipeline?
Can you describe how your data platform is implemented?
How has the design of your platform and its goals evolved or changed?
What is your guiding principle for providing an approachable interface to analysts?
How much knowledge do your analysts require about the guarantees offered, and edge cases to be aware of in the underlying data and its processing?
What are some examples of specific tools that you have built to empower your analysts to own the full lifecycle of the data that they are working with?
Can you characterize or quantify the benefits that you have seen from training the analysts to work with the engineering tool chain?
What have been some of the most interesting, unexpected, or surprising outcomes of how you are approaching the different responsibilities and levels of ownership in your data organization?
What are some of the most interesting, unexpected, or challenging lessons that you have learned from building out the platform, tooling, and organizational structure for creating data products at Yipit?
What advice or recommendations do you have for other leaders of data teams about how to think about the organizational and technical aspects of managing the lifecycle of data projects?
Contact Info
Andrew
LinkedIn
@awgross on Twitter
Bobby
LinkedIn
@TheDooner64 on Twitter
Anup
LinkedIn
anup-segu on GitHub
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
Yipit Data
Redshift
MySQL
Airflow
Databricks
Groupon
Living Social
Web Scraping
Podcast.__init__ Episode
Readypipe
Graphite
Podcast.__init__ Episode
AWS Kinesis Firehose
Parquet
Papermill
Podcast Episode About Notebooks At Netflix
Fivetran
Podcast Episode
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Dec 7, 2020 • 1h 13min
Proven Patterns For Building Successful Data Teams
Summary
Building data products is complicated by the fact that there are so many different stakeholders with competing goals and priorities. It is also challenging because of the number of roles and capabilities that are necessary to go from idea to delivery. Different organizations have tried a multitude of organizational strategies to improve the success rate of these data teams with varying levels of success. In this episode Jesse Anderson shares the lessons that he has learned while working with dozens of businesses across industries to determine the team structures and communication styles that have generated the best results. If you are struggling to deliver value from big data, or just starting down the path of building the organizational capacity to turn raw information into valuable products then this is a conversation that you don’t want to miss.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Are you bogged down by having to manually manage data access controls, repeatedly move and copy data, and create audit reports to prove compliance? How much time could you save if those tasks were automated across your cloud platforms? Immuta is an automated data governance solution that enables safe and easy data analytics in the cloud. Our comprehensive data-level security, auditing and de-identification features eliminate the need for time-consuming manual processes and our focus on data and compliance team collaboration empowers you to deliver quick and valuable data analytics on the most sensitive data to unlock the full potential of your cloud data platforms. Learn how we streamline and accelerate manual processes to help you derive real results from your data at dataengineeringpodcast.com/immuta.
Today’s episode of the Data Engineering Podcast is sponsored by Datadog, a SaaS-based monitoring and analytics platform for cloud-scale infrastructure, applications, logs, and more. Datadog uses machine-learning based algorithms to detect errors and anomalies across your entire stack—which reduces the time it takes to detect and address outages and helps promote collaboration between Data Engineering, Operations, and the rest of the company. Go to dataengineeringpodcast.com/datadog today to start your free 14 day trial. If you start a trial and install Datadog’s agent, Datadog will send you a free T-shirt.
Your host is Tobias Macey and today I’m interviewing Jesse Anderson about best practices for organizing and managing data teams
Interview
Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of how you view the mission and responsibilities of a data team?
What are the critical elements of a successful data team?
Beyond the core pillars of data science, data engineering, and operations, what other specialized roles do you find helpful for larger or more sophisticated teams?
For organizations that have "small data", how does that change the necessary composition of roles for successful data projects?
What are the signs and symptoms that point to the need for a dedicated team that focuses on data?
With data scientists and data engineers in particular being in such high demand, what are strategies that you have found effective for attracting new talent?
In the case where you have engineers on staff, how do you identify internal talent that can be trained into these specialized roles?
Another challenge that organizations face in dealing with data is how the team is organized. What are your thoughts on effective strategies for how to structure the communication and reporting structures of data teams? (e.g. centralized, embedded, etc.)
How do you recommend evaluating potential candidates for each of the necessary roles?
What are your thoughts on when to hire an outside consultant, vs building internal capacity?
For managers who are responsible for data teams, how much understanding of data and analytics do they need to be effective?
How do you define success or measure performance of a team focused on working with data?
What are some of the anti-patterns that you have seen in managers who oversee data professionals?
What are some of the most interesting, unexpected, or challenging lessons that you have learned in the process of helping organizations and individuals achieve success in data and analytics?
What advice or additional resources do you have for anyone who is interested in learning more about how to build and grow a successful data team?
Contact Info
Website
@jessetanderson on Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
Data Teams Book
DBA == Database Administrator
ML Engineer
DataOps
Three Vs
The Ultimate Guide To Switching Careers To Big Data
S-1 Report
Jesse Anderson’s Youtube Channel
Video about interviewing for data teams
Uber Data Infrastructure Progression Blog Post
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Nov 30, 2020 • 45min
Streaming Data Integration Without The Code at Equalum
Summary
The first stage of every good pipeline is data integration. With the increasing pace of change and the demand for up-to-date analytics, the need to integrate that data in near real time is growing. With a wider variety of streaming data engines and improved tools for change data capture, it is possible for data teams to make that goal a reality. However, despite all of the tools and managed distributions of those streaming engines, it is still a challenge to build a robust and reliable pipeline for streaming data integration, especially if you need to expose those capabilities to non-engineers. In this episode Ido Friedman, CTO of Equalum, explains how they have built a no-code platform to make integration of streaming data and change data capture feeds easier to manage. He discusses the challenges that are inherent in the current state of CDC technologies, how they have architected their system to integrate well with existing data platforms, and how to build an appropriate level of abstraction for such a complex problem domain. If you are struggling with streaming data integration and change data capture then this interview is definitely worth a listen.
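For context on what change data capture means here: log-based CDC tails the database's transaction log (the approach taken by tools like Equalum and Debezium), while the simplest do-it-yourself alternative is polling on an updated_at watermark, sketched below. The table and column names are invented for the example, and the approach misses hard deletes and intermediate updates within a polling interval, which is exactly why log-based CDC exists.

```python
import sqlite3

def poll_changes(conn, last_watermark):
    """Return rows modified since the last watermark (naive; misses hard deletes)."""
    rows = conn.execute(
        "SELECT id, name, updated_at FROM customers WHERE updated_at > ? ORDER BY updated_at",
        (last_watermark,),
    ).fetchall()
    new_watermark = rows[-1][2] if rows else last_watermark
    return rows, new_watermark

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, updated_at TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [(1, "Ada", "2020-11-30T10:00:00"), (2, "Grace", "2020-11-30T11:00:00")],
)

watermark = "2020-11-30T10:30:00"
changes, watermark = poll_changes(conn, watermark)
print(changes)    # [(2, 'Grace', '2020-11-30T11:00:00')]
print(watermark)  # advances to the latest updated_at seen
```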
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
Are you bogged down by having to manually manage data access controls, repeatedly move and copy data, and create audit reports to prove compliance? How much time could you save if those tasks were automated across your cloud platforms? Immuta is an automated data governance solution that enables safe and easy data analytics in the cloud. Our comprehensive data-level security, auditing and de-identification features eliminate the need for time-consuming manual processes and our focus on data and compliance team collaboration empowers you to deliver quick and valuable data analytics on the most sensitive data to unlock the full potential of your cloud data platforms. Learn how we streamline and accelerate manual processes to help you derive real results from your data at dataengineeringpodcast.com/immuta.
Your host is Tobias Macey and today I’m interviewing Ido Friedman about Equalum, a no-code platform for streaming data integration
Interview
Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of what you are building at Equalum and how it got started?
There are a number of projects and platforms on the market that target data integration. Can you give some context of how Equalum fits in that market and the differentiating factors that engineers should consider?
What components of the data ecosystem might Equalum replace, and which are you designed to integrate with?
Can you walk through the workflow for someone who is using Equalum for a simple data integration use case?
What options are available for doing in-flight transformations of data or creating customized routing rules?
How do you handle versioning and staged rollouts of changes to pipelines?
How is the Equalum platform implemented?
How has the design and architecture of Equalum evolved since it was first created?
What have you found to be the most complex or challenging aspects of building the platform?
Change data capture is a growing area of interest, with a significant level of difficulty in implementing well. How do you handle support for the variety of different sources that customers are working with?
What are the edge cases that you typically run into when working with changes in databases?
How do you approach the user experience of the platform given its focus as a low code/no code system?
What options exist for sophisticated users to create custom operations?
How much of the underlying concerns do you surface to end users, and how much are you able to hide?
What is the process for a customer to integrate Equalum into their existing infrastructure and data systems?
What are some of the most interesting, unexpected, or innovative ways that you have seen Equalum used?
What are the most interesting, unexpected, or challenging lessons that you have learned while building and growing the Equalum platform?
When is Equalum the wrong choice?
What do you have planned for the future of Equalum?
Contact Info
LinkedIn
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
Equalum
Change Data Capture
Debezium Podcast Episode
SQL Server
DBA == Database Administrator
Fivetran
Podcast Episode
Singer
Pentaho
EMR
Snowflake
Podcast Episode
S3
Kafka
Spark
Prometheus
Grafana
Logminer
OBLP == Oracle Binary Log Parser
Ansible
Terraform
Jupyter Notebooks
Papermill
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Nov 23, 2020 • 49min
Keeping A Bigeye On The Data Quality Market
Summary
One of the oldest aphorisms about data is "garbage in, garbage out", which is why the current boom in data quality solutions is no surprise. With the growth in projects, platforms, and services that aim to help you establish and maintain control of the health and reliability of your data pipelines it can be overwhelming to stay up to date with how they all compare. In this episode Egor Gryaznov, CTO of Bigeye, joins the show to explore the landscape of data quality companies, the general strategies that they are using, and what problems they solve. He also shares how his own product is designed and the challenges that are involved in building a system to help data engineers manage the complexity of a data platform. If you are wondering how to get better control of your own pipelines and the traps to avoid then this episode is definitely worth a listen.
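To ground the discussion, the kinds of checks a data observability tool automates look, at their simplest, like the freshness and null-rate assertions below, run on a schedule against warehouse tables. The thresholds and table are invented for the example; Bigeye's actual metrics and anomaly detection are considerably richer than this sketch.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

def check_null_rate(conn, table, column, max_rate=0.01):
    """Fail if more than max_rate of values in the column are NULL."""
    total, nulls = conn.execute(
        f"SELECT COUNT(*), SUM(CASE WHEN {column} IS NULL THEN 1 ELSE 0 END) FROM {table}"
    ).fetchone()
    rate = (nulls or 0) / total if total else 0.0
    return {"check": f"null_rate({column})", "value": rate, "passed": rate <= max_rate}

def check_freshness(conn, table, ts_column, max_lag_hours=6):
    """Fail if the most recent row is older than the allowed lag."""
    latest, = conn.execute(f"SELECT MAX({ts_column}) FROM {table}").fetchone()
    lag = datetime.now(timezone.utc) - datetime.fromisoformat(latest)
    return {"check": f"freshness({ts_column})", "value": lag, "passed": lag <= timedelta(hours=max_lag_hours)}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, loaded_at TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 9.99, ?)", (datetime.now(timezone.utc).isoformat(),))
print(check_null_rate(conn, "orders", "amount"))
print(check_freshness(conn, "orders", "loaded_at"))
```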
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
Are you bogged down by having to manually manage data access controls, repeatedly move and copy data, and create audit reports to prove compliance? How much time could you save if those tasks were automated across your cloud platforms? Immuta is an automated data governance solution that enables safe and easy data analytics in the cloud. Our comprehensive data-level security, auditing and de-identification features eliminate the need for time-consuming manual processes and our focus on data and compliance team collaboration empowers you to deliver quick and valuable data analytics on the most sensitive data to unlock the full potential of your cloud data platforms. Learn how we streamline and accelerate manual processes to help you derive real results from your data at dataengineeringpodcast.com/immuta.
Your host is Tobias Macey and today I’m interviewing Egor Gryaznov about the state of the industry for data quality management and what he is building at Bigeye.
Interview
Introduction
How did you get involved in the area of data management?
Can you start by sharing your views on what attributes you consider when defining data quality?
You use the term "data semantics" – can you elaborate on what that means?
What are the driving factors that contribute to the presence or lack of data quality in an organization or data platform?
Why do you think now is the right time to focus on data quality as an industry?
What are you building at Bigeye and how did it get started?
How does Bigeye help teams understand and manage their data quality?
What is the difference between existing data quality approaches and data observability?
What do you see as the tradeoffs for the approach that you are taking at Bigeye?
What are the most common data quality issues that you’ve seen and what are some more interesting ones that you wouldn’t expect?
Where do you see Bigeye fitting into the data management landscape? What are alternatives to Bigeye?
What are some of the most interesting, innovative, or unexpected ways that you have seen Bigeye being used?
What are some of the most interesting homegrown approaches that you have seen?
What have you found to be the most interesting, unexpected, or challenging lessons that you have learned while building the Bigeye platform and business?
What are the biggest trends you’re following in data quality management?
When is Bigeye the wrong choice?
What do you see in store for the future of Bigeye?
Contact Info
You can email Egor about anything data-related
LinkedIn
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
Bigeye
Uber
A/B Testing
Hadoop
MapReduce
Apache Impala
One King’s Lane
Vertica
Mode
Tableau
Jupyter Notebooks
Redshift
Snowflake
PyTorch
Podcast.__init__ Episode
Tensorflow
DataOps
DevOps
Data Catalog
DBT
Podcast Episode
SRE Handbook
Article About How Uber Applied SRE Principles to Data
SLA == Service Level Agreement
SLO == Service Level Objective
Dagster
Podcast Episode
Podcast.__init__ Episode
Delta Lake
Great Expectations
Podcast Episode
Podcast.__init__ Episode
Amundsen
Podcast Episode
Alation
Collibra
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Nov 17, 2020 • 44min
Self Service Data Management From Ingest To Insights With Isima
Summary
The core mission of data engineers is to provide the business with a way to ask and answer questions of their data. This often takes the form of business intelligence dashboards, machine learning models, or APIs on top of a cleaned and curated data set. Despite the rapid progression of impressive tools and products built to fulfill this mission, it is still an uphill battle to tie everything together into a cohesive and reliable platform. At Isima they decided to reimagine the entire ecosystem from the ground up and built a single unified platform to allow end-to-end self service workflows from data ingestion through to analysis. In this episode Darshan Rawal, CEO and co-founder of Isima, explains how the biOS platform is architected to enable ease of use, the challenges that were involved in building an entirely new system from scratch, and how it can integrate with the rest of your data platform to allow for incremental adoption. This was an interesting and contrarian take on the current state of the data management industry and is worth a listen to gain some additional perspective.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Follow go.datafold.com/dataengineeringpodcast to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
Are you bogged down by having to manually manage data access controls, repeatedly move and copy data, and create audit reports to prove compliance? How much time could you save if those tasks were automated across your cloud platforms? Immuta is an automated data governance solution that enables safe and easy data analytics in the cloud. Our comprehensive data-level security, auditing and de-identification features eliminate the need for time-consuming manual processes and our focus on data and compliance team collaboration empowers you to deliver quick and valuable data analytics on the most sensitive data to unlock the full potential of your cloud data platforms. Learn how we streamline and accelerate manual processes to help you derive real results from your data at dataengineeringpodcast.com/immuta.
Your host is Tobias Macey and today I’m interviewing Darshan Rawal about Îsíma, a unified platform for building data applications
Interview
Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of what you are building at Îsíma?
What was your motivation for creating a new platform for data applications?
What is the story behind the name?
What are the tradeoffs of a fully integrated platform vs a modular approach?
What components of the data ecosystem does Isima replace, and which does it integrate with?
What are the use cases that Isima enables which were previously impractical?
Can you describe how Isima is architected?
How has the design of the platform changed or evolved since you first began working on it?
What were your initial ideas or assumptions that have been changed or invalidated as you worked through the problem you’re addressing?
With a focus on the enterprise, how did you approach the user experience design to allow for organizational complexity?
One of the biggest areas of difficulty that many data systems face is security and scaleable access control. How do you tackle that problem in your platform?
How did you address the issue of geographical distribution of data and users?
Can you talk through the overall lifecycle of data as it traverses the bi(OS) platform from ingestion through to presentation?
What is the workflow for someone using bi(OS)?
What are some of the most interesting, innovative, or unexpected ways that you have seen bi(OS) used?
What have you found to be the most interesting, unexpected, or challenging aspects of building the bi(OS) platform?
When is it the wrong choice?
What do you have planned for the future of Isima and bi(OS)?
Contact Info
LinkedIn
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
Îsíma
Datastax
Verizon
AT&T
Click Fraud
ESB == Enterprise Service Bus
ETL == Extract, Transform, Load
EDW == Enterprise Data Warehouse
BI == Business Intelligence
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast