

Data Engineering Podcast
Tobias Macey
This show goes behind the scenes for the tools, techniques, and difficulties associated with the discipline of data engineering. Databases, workflows, automation, and data manipulation are just some of the topics that you will find here.
Episodes

Jun 4, 2019 • 1h 2min
Evolving An ETL Pipeline For Better Productivity
Summary
Building an ETL pipeline can be a significant undertaking, and sometimes it needs to be rebuilt when a better option becomes available. In this episode Aaron Gibralter, director of engineering at Greenhouse, joins Raghu Murthy, founder and CEO of DataCoral, to discuss the journey that he and his team took from an in-house ETL pipeline built out of open source components to a paid service. He explains how their original implementation was built, why they decided to migrate to a paid service, and how they made that transition. He also discusses how the abstractions provided by DataCoral allow his data scientists to remain productive without requiring dedicated data engineers. If you are either considering how to build a data pipeline or debating whether to migrate your existing ETL to a service, this episode is definitely worth listening to for some perspective.
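For listeners who want a concrete picture of the kind of in-house pipeline discussed here, below is a minimal sketch of an extract-and-load DAG written against the Airflow 1.x API that was current at the time of this episode. The DAG name, task names, and task bodies are hypothetical placeholders, not taken from Greenhouse's actual implementation.

```python
# Minimal sketch of an extract-and-load DAG in Airflow 1.x (the API current in 2019).
# Task names and bodies are hypothetical placeholders, not Greenhouse's pipeline.
from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonOperator


def extract_source(**context):
    # A real task would page through a source API (e.g. Salesforce)
    # and stage the records somewhere durable such as S3.
    print("extracting records for", context["ds"])


def load_warehouse(**context):
    # A real task would COPY the staged files into the warehouse.
    print("loading records for", context["ds"])


with DAG(
    dag_id="source_to_warehouse",
    start_date=datetime(2019, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(
        task_id="extract",
        python_callable=extract_source,
        provide_context=True,  # required in Airflow 1.x to receive context kwargs
    )
    load = PythonOperator(
        task_id="load",
        python_callable=load_warehouse,
        provide_context=True,
    )
    extract >> load  # load runs only after extract succeeds
```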
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
And to keep track of how your team is progressing on building new pipelines and tuning their workflows, you need a project management system designed by engineers, for engineers. Clubhouse lets you craft a workflow that fits your style, including per-team tasks, cross-project epics, a large suite of pre-built integrations, and a simple API for crafting your own. With such an intuitive tool it’s easy to make sure that everyone in the business is on the same page. Data Engineering Podcast listeners get 2 months free on any plan by going to dataengineeringpodcast.com/clubhouse today and signing up for a free trial. Support the show and get your data projects in order!
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, and the Open Data Science Conference. Coming up this fall are the combined events of Graphorum and the Data Architecture Summit. The agendas have been announced and super early bird registration for up to $300 off is available until July 26th, with early bird pricing for up to $200 off through August 30th. Use the code BNLLC to get an additional 10% off any pass when you register. Go to dataengineeringpodcast.com/conferences to learn more and take advantage of our partner discounts when you register.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Your host is Tobias Macey and today I’m interviewing Aaron Gibralter and Raghu Murthy about the experience of Greenhouse migrating their data pipeline to DataCoral
Interview
Introduction
How did you get involved in the area of data management?
Aaron, can you start by describing what Greenhouse is and some of the ways that you use data?
Can you describe your overall data infrastructure and the state of your data pipeline before migrating to DataCoral?
What are your primary sources of data and what are the targets that you are loading them into?
What were your biggest pain points and what motivated you to re-evaluate your approach to ETL?
What were your criteria for your replacement technology and how did you gather and evaluate your options?
Once you made the decision to use DataCoral can you talk through the transition and cut-over process?
What were some of the unexpected edge cases or shortcomings that you experienced when moving to DataCoral?
What were the big wins?
What was your evaluation framework for determining whether your re-engineering was successful?
Now that you are using DataCoral how would you characterize the experiences of yourself and your team?
If you have freed up time for your engineers, how are you allocating that spare capacity?
What do you hope to see from DataCoral in the future?
What advice do you have for anyone else who is either evaluating a re-architecture of their existing data platform or planning out a greenfield project?
Contact Info
Aaron
agribralter on GitHub
LinkedIn
Raghu
LinkedIn
Medium
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
Greenhouse
We’re hiring Data Scientists and Software Engineers!
Datacoral
Airflow
Podcast.init Interview
Data Engineering Interview about running Airflow in production
Periscope Data
Mode Analytics
Data Warehouse
ETL
Salesforce
Zendesk
Jira
DataDog
Asana
GDPR
Metabase
Podcast Interview
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

May 27, 2019 • 49min
Data Lineage For Your Pipelines
Summary
Some problems in data are well defined and benefit from a ready-made set of tools. For everything else, there’s Pachyderm, the platform for data science that is built to scale. In this episode Joe Doliner, CEO and co-founder, explains how Pachyderm started as an attempt to make data provenance easier to track, how the platform is architected and used today, and how the underlying principles manifest in the workflows of data engineers and data scientists as they collaborate on data projects. He also shares his thoughts on their recent round of fundraising and where the future will take them. If you are looking for a set of tools for building your data science workflows then Pachyderm is a solid choice, featuring data versioning, first class tracking of data lineage, and language agnostic data pipelines.
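As a concrete reference for the pipeline model discussed in this episode, here is a rough sketch of a Pachyderm pipeline specification (the JSON document that defines one pipeline stage), expressed as a Python dict. The repo name, image, and command are placeholders modeled on Pachyderm's canonical examples, and the exact pachctl verb for creating a pipeline varies across releases.

```python
# Sketch of a Pachyderm pipeline spec, following the structure of Pachyderm's
# canonical examples; the repo name, image, and command are placeholders.
import json

pipeline_spec = {
    "pipeline": {"name": "edges"},
    "input": {
        "pfs": {
            "repo": "images",  # upstream data repository to subscribe to
            "glob": "/*",      # each top-level file is processed as its own datum
        }
    },
    "transform": {
        "image": "example/edge-detector:latest",  # hypothetical container image
        "cmd": ["python3", "/edges.py"],
    },
}

with open("edges.json", "w") as f:
    json.dump(pipeline_spec, f, indent=2)

# The spec would then be submitted with something like:
#   pachctl create pipeline -f edges.json
# (the verb spelling differs between older and newer pachctl releases)
```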
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
Alluxio is an open source, distributed data orchestration layer that makes it easier to scale your compute and your storage independently. By transparently pulling data from underlying silos, Alluxio unlocks the value of your data and allows for modern computation-intensive workloads to become truly elastic and flexible for the cloud. With Alluxio, companies like Barclays, JD.com, Tencent, and Two Sigma can manage data efficiently, accelerate business analytics, and ease the adoption of any cloud. Go to dataengineeringpodcast.com/alluxio today to learn more and thank them for their support.
Understanding how your customers are using your product is critical for businesses of any size. To make it easier for startups to focus on delivering useful features Segment offers a flexible and reliable data infrastructure for your customer analytics and custom events. You only need to maintain one integration to instrument your code and get a future-proof way to send data to over 250 services with the flip of a switch. Not only does it free up your engineers’ time, it lets your business users decide what data they want where. Go to dataengineeringpodcast.com/segmentio today to sign up for their startup plan and get $25,000 in Segment credits and $1 million in free software from marketing and analytics companies like AWS, Google, and Intercom. On top of that you’ll get access to Analytics Academy for the educational resources you need to become an expert in data analytics for measuring product-market fit.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, and the Open Data Science Conference. Go to dataengineeringpodcast.com/conferences to learn more and take advantage of our partner discounts when you register.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Your host is Tobias Macey and today I’m interviewing Joe Doliner about Pachyderm, a platform that lets you deploy and manage multi-stage, language-agnostic data pipelines while maintaining complete reproducibility and provenance
Interview
Introduction
How did you get involved in the area of data management?
Can you start by explaining what Pachyderm is and how it got started?
What is new in the last two years since I talked to Dan Whitenack in episode 1?
How have the changes and additional features in Kubernetes impacted your work on Pachyderm?
A recent development in the Kubernetes space is the Kubeflow project. How do its capabilities compare with or complement what you are doing in Pachyderm?
Can you walk through the overall workflow for someone building an analysis pipeline in Pachyderm?
How does that break down across different roles and responsibilities (e.g. data scientist vs data engineer)?
There are a lot of concepts and moving parts in Pachyderm, from getting a Kubernetes cluster set up, to understanding the file system and processing pipeline, to understanding best practices. What are some of the common challenges or points of confusion that new users encounter?
Data provenance is critical for understanding the end results of an analysis or ML model. Can you explain how the tracking in Pachyderm is implemented?
What is the interface for exposing and exploring that provenance data?
What are some of the advanced capabilities of Pachyderm that you would like to call out?
With your recent round of fundraising I’m assuming there is new pressure to grow and scale your product and business. How are you approaching that and what are some of the challenges you are facing?
What have been some of the most challenging/useful/unexpected lessons that you have learned in the process of building, maintaining, and growing the Pachyderm project and company?
What do you have planned for the future of Pachyderm?
Contact Info
@jdoliner on Twitter
LinkedIn
jdoliner on GitHub
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
Pachyderm
RethinkDB
AirBnB
Data Provenance
Kubeflow
Stateful Sets
EtcD
Airflow
Kafka
GitHub
GitLab
Docker
Kubernetes
CI == Continuous Integration
CD == Continuous Delivery
Ceph
Podcast Interview
Object Storage
MiniKube
FUSE == Filesystem in Userspace
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

May 20, 2019 • 57min
Build Your Data Analytics Like An Engineer With DBT
Summary
In recent years the traditional approach to building data warehouses has shifted from transforming records before loading to transforming them afterward. As a result, the tooling for those transformations needs to be reimagined. The data build tool (dbt) is designed to bring battle tested engineering practices to your analytics pipelines. By providing an opinionated set of best practices it simplifies collaboration and boosts confidence in your data teams. In this episode Drew Banin, creator of dbt, explains how it got started, how it is designed, and how you can start using it today to create reliable and well-tested reports in your favorite data warehouse.
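To make the "models as SELECT statements" idea concrete, here is a simplified sketch of how dbt-style Jinja templating turns a model file into executable SQL, using the jinja2 library directly. The ref() resolution shown is a stand-in for what dbt actually does (dependency graphing, schema resolution, materialization), and the schema name is a hypothetical placeholder.

```python
# Simplified illustration of dbt-style templating: a model is just a SELECT
# statement, and ref() is a Jinja function that resolves model names to
# concrete relations. dbt's real ref() also builds the dependency graph.
from jinja2 import Template

MODEL_SQL = """
select
    user_id,
    count(*) as order_count
from {{ ref('stg_orders') }}
group by 1
"""

def ref(model_name):
    # Stand-in for dbt's resolution of a model name to a schema-qualified
    # relation; the schema name here is a hypothetical placeholder.
    return f"analytics.{model_name}"

compiled = Template(MODEL_SQL).render(ref=ref)
print(compiled)  # plain SQL, ready to run against the warehouse
```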
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
Understanding how your customers are using your product is critical for businesses of any size. To make it easier for startups to focus on delivering useful features Segment offers a flexible and reliable data infrastructure for your customer analytics and custom events. You only need to maintain one integration to instrument your code and get a future-proof way to send data to over 250 services with the flip of a switch. Not only does it free up your engineers’ time, it lets your business users decide what data they want where. Go to dataengineeringpodcast.com/segmentio today to sign up for their startup plan and get $25,000 in Segment credits and $1 million in free software from marketing and analytics companies like AWS, Google, and Intercom. On top of that you’ll get access to Analytics Academy for the educational resources you need to become an expert in data analytics for measuring product-market fit.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, and the Open Data Science Conference. Go to dataengineeringpodcast.com/conferences to learn more and take advantage of our partner discounts when you register.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Your host is Tobias Macey and today I’m interviewing Drew Banin about DBT, the Data Build Tool, a toolkit for building analytics the way that developers build applications
Interview
Introduction
How did you get involved in the area of data management?
Can you start by explaining what DBT is and your motivation for creating it?
Where does it fit in the overall landscape of data tools and the lifecycle of data in an analytics pipeline?
Can you talk through the workflow for someone using DBT?
One of the useful features of DBT for stability of analytics is the ability to write and execute tests. Can you explain how those are implemented?
The packaging capabilities are beneficial for enabling collaboration. Can you talk through how the packaging system is implemented?
Are these packages driven by Fishtown Analytics or the dbt community?
What are the limitations of modeling everything as a SELECT statement?
Making SQL code reusable is notoriously difficult. How does the Jinja templating of DBT address this issue and what are the shortcomings?
What are your thoughts on higher level approaches to SQL that compile down to the specific statements?
Can you explain how DBT is implemented and how the design has evolved since you first began working on it?
What are some of the features of DBT that are often overlooked which you find particularly useful?
What are some of the most interesting/unexpected/innovative ways that you have seen DBT used?
What are the additional features that the commercial version of DBT provides?
What are some of the most useful or challenging lessons that you have learned in the process of building and maintaining DBT?
When is it the wrong choice?
What do you have planned for the future of DBT?
Contact Info
Email
@drebanin on Twitter
drebanin on GitHub
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
DBT
Fishtown Analytics
8Tracks Internet Radio
Redshift
Magento
Stitch Data
Fivetran
Airflow
Business Intelligence
Jinja template language
BigQuery
Snowflake
Version Control
Git
Continuous Integration
Test Driven Development
Snowplow Analytics
Podcast Episode
dbt-utils
We Can Do Better Than SQL blog post from EdgeDB
EdgeDB
Looker LookML
Podcast Interview
Presto DB
Podcast Interview
Spark SQL
Hive
Azure SQL Data Warehouse
Data Warehouse
Data Lake
Data Council Conference
Slowly Changing Dimensions
dbt Archival
Mode Analytics
Periscope BI
dbt docs
dbt repository
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

May 7, 2019 • 1h 6min
Using FoundationDB As The Bedrock For Your Distributed Systems
Ryan Worl, a software engineer deeply involved with FoundationDB, dives into the intricacies of this powerful distributed key-value store. He discusses its unique architecture and how to set it up for diverse applications while ensuring ACID compliance. Worl shares insights on optimizing performance in distributed systems, handling conflicts, and the role of interoperability between data layers. The conversation also touches on effective testing and deployment strategies and the challenges companies face in modern data management.
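For a feel of the programming model discussed in this episode, below is a small sketch using FoundationDB's Python bindings. It assumes a locally running cluster reachable via the default cluster file, and uses an api_version roughly contemporary with the episode.

```python
# Minimal sketch of FoundationDB's Python API: every operation runs inside a
# transaction, and @fdb.transactional retries the function on conflicts.
# Assumes a locally running cluster reachable via the default cluster file.
import fdb

fdb.api_version(610)  # API version roughly contemporary with this episode
db = fdb.open()


@fdb.transactional
def transfer(tr, src, dst, amount):
    # Reads and writes inside this function form one ACID transaction;
    # on a conflict the decorator retries the whole function.
    src_balance = int(tr[src])  # values come back as bytes-like objects
    dst_balance = int(tr[dst])
    tr[src] = str(src_balance - amount).encode()
    tr[dst] = str(dst_balance + amount).encode()


db[b"acct/alice"] = b"100"
db[b"acct/bob"] = b"0"
transfer(db, b"acct/alice", b"acct/bob", 25)
print(db[b"acct/alice"], db[b"acct/bob"])  # expect 75 and 25
```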

Apr 29, 2019 • 51min
Running Your Database On Kubernetes With KubeDB
Summary
Kubernetes is a driving force in the renaissance around deploying and running applications. However, managing the database layer is still a separate concern. The KubeDB project was created as a way of providing a simple mechanism for running your storage system on the same platform as your application. In this episode Tamal Saha explains how the KubeDB project got started, why you might want to run your database with Kubernetes, and how to get started. He also covers some of the challenges of managing stateful services in Kubernetes and how the fast pace of the community has contributed to the evolution of KubeDB. If you are at any stage of a Kubernetes implementation, or just thinking about it, this is definitely worth a listen to get some perspective on how to leverage it for your entire application stack.
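As a sketch of what driving KubeDB looks like programmatically, the snippet below creates a KubeDB Postgres custom resource with the official Kubernetes Python client. The group/version, plural, and spec fields are a best-effort reconstruction of the KubeDB API of that era and should be checked against the docs for your release.

```python
# Sketch: creating a KubeDB-managed Postgres by submitting a custom resource
# through the Kubernetes Python client. The kubedb.com/v1alpha1 fields below
# approximate the era's API; consult the KubeDB docs for the exact schema.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in a pod

postgres = {
    "apiVersion": "kubedb.com/v1alpha1",
    "kind": "Postgres",
    "metadata": {"name": "demo-pg", "namespace": "demo"},
    "spec": {
        "version": "9.6",  # hypothetical; must match a version KubeDB's catalog supports
        "storage": {
            "storageClassName": "standard",
            "accessModes": ["ReadWriteOnce"],
            "resources": {"requests": {"storage": "1Gi"}},
        },
    },
}

api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="kubedb.com",
    version="v1alpha1",
    namespace="demo",
    plural="postgreses",  # assumed plural for the Postgres CRD
    body=postgres,
)
```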
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
Alluxio is an open source, distributed data orchestration layer that makes it easier to scale your compute and your storage independently. By transparently pulling data from underlying silos, Alluxio unlocks the value of your data and allows for modern computation-intensive workloads to become truly elastic and flexible for the cloud. With Alluxio, companies like Barclays, JD.com, Tencent, and Two Sigma can manage data efficiently, accelerate business analytics, and ease the adoption of any cloud. Go to dataengineeringpodcast.com/alluxio today to learn more and thank them for their support.
Understanding how your customers are using your product is critical for businesses of any size. To make it easier for startups to focus on delivering useful features Segment offers a flexible and reliable data infrastructure for your customer analytics and custom events. You only need to maintain one integration to instrument your code and get a future-proof way to send data to over 250 services with the flip of a switch. Not only does it free up your engineers’ time, it lets your business users decide what data they want where. Go to dataengineeringpodcast.com/segmentio today to sign up for their startup plan and get $25,000 in Segment credits and $1 million in free software from marketing and analytics companies like AWS, Google, and Intercom. On top of that you’ll get access to Analytics Academy for the educational resources you need to become an expert in data analytics for measuring product-market fit.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, and the Open Data Science Conference. Go to dataengineeringpodcast.com/conferences to learn more and take advantage of our partner discounts when you register.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Your host is Tobias Macey and today I’m interviewing Tamal Saha about KubeDB, a project focused on making running production-grade databases easy on Kubernetes
Interview
Introduction
How did you get involved in the area of data management?
Can you start by explaining what KubeDB is and how the project got started?
What are the main challenges associated with running a stateful system on top of Kubernetes?
Why would someone want to run their database on a container platform rather than on a dedicated instance or with a hosted service?
Can you describe how KubeDB is implemented and how that has evolved since you first started working on it?
Can you talk through how KubeDB simplifies the process of deploying and maintaining databases?
What is involved in adding support for a new database?
How do the requirements change for systems that are natively clustered?
How does KubeDB help with maintenance processes around upgrading existing databases to newer versions?
How does the work that you are doing on KubeDB compare to what is available in StorageOS?
Are there any other projects that are targeting similar goals?
What have you found to be the most interesting/challenging/unexpected aspects of building KubeDB?
What do you have planned for the future of the project?
Contact Info
LinkedIn
@tsaha on Twitter
Email
tamalsaha on GitHub
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
KubeDB
AppsCode
Kubernetes
Kubernetes CRD (Custom Resource Definition)
Kubernetes Operator
Kubernetes Stateful Sets
PostgreSQL
Podcast Interview
Hashicorp Vault
Redis
Elasticsearch
Podcast Interview
MySQL
Memcached
MongoDB
Docker
Rook Storage Orchestration for Kubernetes
Ceph
Podcast Interview
EBS
StorageOS
GlusterFS
OpenEBS
CloudFoundry
AppsCode Service Broker
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Apr 22, 2019 • 54min
Unpacking Fauna: A Global Scale Cloud Native Database
Summary
One of the biggest challenges for any business trying to grow and reach customers globally is how to scale their data storage. FaunaDB is a cloud native database built by the engineers behind Twitter’s infrastructure and designed to serve the needs of modern systems. Evan Weaver is the co-founder and CEO of Fauna and in this episode he explains the unique capabilities of Fauna, compares its consensus and transaction algorithm to those used in other NewSQL systems, and describes the ways that it allows for new application design patterns. One of the unique aspects of Fauna that is worth drawing attention to is the first class support for temporality, which simplifies querying of historical states of the data. It is definitely worth a good look for anyone building a platform that needs a simple-to-manage data layer that will scale with your business.
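The temporality feature called out above is easiest to see in a query. Here is a hedged sketch using the Python driver of roughly that era, where At() evaluates a read as of a past timestamp. Driver function names shifted over time (for example, class_ was later renamed collection), so treat this as illustrative rather than exact; the secret and document id are placeholders.

```python
# Sketch of Fauna's temporal reads via the Python driver of that era:
# At() evaluates the wrapped read expression as of a past timestamp.
# Function names changed across driver versions (e.g. class_ -> collection),
# so this is illustrative rather than exact.
from faunadb import query as q
from faunadb.client import FaunaClient

client = FaunaClient(secret="your-server-secret")  # placeholder secret

user_ref = q.ref(q.class_("users"), "1234567890")  # hypothetical document id

# Current state of the document
current = client.query(q.get(user_ref))

# The same document as it existed at a point in the past
as_of = client.query(
    q.at(q.time("2019-01-01T00:00:00Z"), q.get(user_ref))
)
print(current, as_of)
```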
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
Alluxio is an open source, distributed data orchestration layer that makes it easier to scale your compute and your storage independently. By transparently pulling data from underlying silos, Alluxio unlocks the value of your data and allows for modern computation-intensive workloads to become truly elastic and flexible for the cloud. With Alluxio, companies like Barclays, JD.com, Tencent, and Two Sigma can manage data efficiently, accelerate business analytics, and ease the adoption of any cloud. Go to dataengineeringpodcast.com/alluxio today to learn more and thank them for their support.
Understanding how your customers are using your product is critical for businesses of any size. To make it easier for startups to focus on delivering useful features Segment offers a flexible and reliable data infrastructure for your customer analytics and custom events. You only need to maintain one integration to instrument your code and get a future-proof way to send data to over 250 services with the flip of a switch. Not only does it free up your engineers’ time, it lets your business users decide what data they want where. Go to dataengineeringpodcast.com/segmentio today to sign up for their startup plan and get $25,000 in Segment credits and $1 million in free software from marketing and analytics companies like AWS, Google, and Intercom. On top of that you’ll get access to Analytics Academy for the educational resources you need to become an expert in data analytics for measuring product-market fit.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, and the Open Data Science Conference. Go to dataengineeringpodcast.com/conferences to learn more and take advantage of our partner discounts when you register.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Your host is Tobias Macey and today I’m interviewing Evan Weaver about FaunaDB, a modern operational data platform built for your cloud
Interview
Introduction
How did you get involved in the area of data management?
Can you start by explaining what FaunaDB is and how it got started?
What are some of the main use cases that FaunaDB is targeting?
How does it compare to some of the other global scale databases that have been built in recent years such as CockroachDB?
Can you describe the architecture of FaunaDB and how it has evolved?
The consensus and replication protocol in Fauna is intriguing. Can you talk through how it works?
What are some of the edge cases that users should be aware of?
How are conflicts managed in Fauna?
What is the underlying storage layer?
How is the query layer designed to allow for different query patterns and model representations?
How does data modeling in Fauna compare to that of relational or document databases?
Can you describe the query format?
What are some of the common difficulties or points of confusion around interacting with data in Fauna?
What are some application design patterns that are enabled by using Fauna as the storage layer?
Given the ability to replicate globally, how do you mitigate latency when interacting with the database?
What are some of the most interesting or unexpected ways that you have seen Fauna used?
When is it the wrong choice?
What have been some of the most interesting/unexpected/challenging aspects of building the Fauna database and company?
What do you have in store for the future of Fauna?
Contact Info
@evan on Twitter
LinkedIn
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
Fauna
Ruby on Rails
CNET
GitHub
Twitter
NoSQL
Cassandra
InnoDB
Redis
Memcached
Timeseries
Spanner Paper
DynamoDB Paper
Percolator
ACID
Calvin Protocol
Daniel Abadi
LINQ
LSM Tree (Log-structured Merge-tree)
Scala
Change Data Capture
GraphQL
Podcast.init Interview About Graphene
Fauna Query Language (FQL)
CQL == Cassandra Query Language
Object-Relational Databases
LDAP == Lightweight Directory Access Protocol
Auth0
OLAP == Online Analytical Processing
Jepsen distributed systems safety research
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Apr 15, 2019 • 44min
Index Your Big Data With Pilosa For Faster Analytics
Summary
Database indexes are critical to ensure fast lookups of your data, but they are inherently tied to the database engine. Pilosa is rewriting that equation by providing a flexible, scalable, performant engine for building an index of your data to enable high-speed aggregate analysis. In this episode Seebs explains how Pilosa fits in the broader data landscape, how it is architected, and how you can start using it for your own analysis. This was an interesting exploration of a different way to look at what a database can be.
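To ground the idea of a standalone bitmap index, here is a toy pure-Python sketch of the underlying technique: each attribute value keeps a bitmap of the object IDs that have it, and aggregate questions become bitwise operations plus a population count. Pilosa itself uses compressed roaring bitmaps sharded across a cluster; this only illustrates the concept, and the attribute names are invented.

```python
# Toy illustration of a bitmap index, the technique underlying Pilosa:
# one bitmap per attribute value, one bit per object ID. Aggregate queries
# reduce to bitwise AND/OR plus a popcount. Pilosa does this with compressed
# roaring bitmaps distributed across a cluster.
from collections import defaultdict

class BitmapIndex:
    def __init__(self):
        # attribute value -> Python int used as an arbitrary-width bitmap
        self.bitmaps = defaultdict(int)

    def set(self, value, object_id):
        self.bitmaps[value] |= 1 << object_id

    def count(self, value):
        return bin(self.bitmaps[value]).count("1")

    def intersect_count(self, a, b):
        # "How many objects have both attributes?" is one AND plus a popcount.
        return bin(self.bitmaps[a] & self.bitmaps[b]).count("1")

idx = BitmapIndex()
idx.set("likes:coffee", 0)
idx.set("likes:coffee", 2)
idx.set("region:us", 2)
idx.set("region:us", 3)

print(idx.count("likes:coffee"))                         # 2
print(idx.intersect_count("likes:coffee", "region:us"))  # 1
```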
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
Alluxio is an open source, distributed data orchestration layer that makes it easier to scale your compute and your storage independently. By transparently pulling data from underlying silos, Alluxio unlocks the value of your data and allows for modern computation-intensive workloads to become truly elastic and flexible for the cloud. With Alluxio, companies like Barclays, JD.com, Tencent, and Two Sigma can manage data efficiently, accelerate business analytics, and ease the adoption of any cloud. Go to dataengineeringpodcast.com/alluxio today to learn more and thank them for their support.
Understanding how your customers are using your product is critical for businesses of any size. To make it easier for startups to focus on delivering useful features Segment offers a flexible and reliable data infrastructure for your customer analytics and custom events. You only need to maintain one integration to instrument your code and get a future-proof way to send data to over 250 services with the flip of a switch. Not only does it free up your engineers’ time, it lets your business users decide what data they want where. Go to dataengineeringpodcast.com/segmentio today to sign up for their startup plan and get $25,000 in Segment credits and $1 million in free software from marketing and analytics companies like AWS, Google, and Intercom. On top of that you’ll get access to Analytics Academy for the educational resources you need to become an expert in data analytics for measuring product-market fit.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, and the Open Data Science Conference. Go to dataengineeringpodcast.com/conferences to learn more and take advantage of our partner discounts when you register.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Your host is Tobias Macey and today I’m interviewing Seebs about Pilosa, an open source, distributed bitmap index
Interview
Introduction
How did you get involved in the area of data management?
Can you start by describing what Pilosa is and how the project got started?
Where does Pilosa fit into the overall data ecosystem and how does it integrate into an existing stack?
What types of use cases is Pilosa uniquely well suited for?
The Pilosa data model is fairly unique. Can you talk through how it is represented and implemented?
What are some approaches to modeling data that might be coming from a relational database or some structured flat files?
How do you handle highly dimensional data?
What are some of the decisions that need to be made early in the modeling process which could have ramifications later on in the lifecycle of the project?
What are the scaling factors of Pilosa?
What are some of the most interesting/challenging/unexpected lessons that you have learned in the process of building Pilosa?
What is in store for the future of Pilosa?
Contact Info
Pilosa
Website
Email
@slothware on Twitter
Seebs
seebs on GitHub
Website
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
PQL (Pilosa Query Language)
Roaring Bitmap
Whitepaper
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Apr 8, 2019 • 54min
Serverless Data Pipelines On DataCoral
Summary
How much time do you spend maintaining your data pipeline? How much end user value does that provide? Raghu Murthy founded DataCoral as a way to abstract the low level details of ETL so that you can focus on the actual problem that you are trying to solve. In this episode he explains his motivation for building the DataCoral platform, how it is leveraging serverless computing, the challenges of delivering software as a service to customer environments, and the architecture that he has designed to make batch data management easier to work with. This was a fascinating conversation with someone who has spent his entire career working on simplifying complex data problems.
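The episode's discussion of batch pipelines as a DAG of small serverless steps can be sketched with the standard library's topological sorter. The "slices" below are hypothetical stand-ins for illustration, not DataCoral's actual abstractions or API.

```python
# Conceptual sketch of running batch "slices" in dependency order, in the
# spirit of the DAG-of-serverless-functions architecture discussed here.
# The slice names and bodies are hypothetical, not DataCoral's actual API.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

def collect(): print("collect: pull raw data from a source API")
def organize(): print("organize: load and normalize into the warehouse")
def transform(): print("transform: materialize derived tables")

# Each slice maps to the set of slices it depends on.
dag = {
    "collect": set(),
    "organize": {"collect"},
    "transform": {"organize"},
}
steps = {"collect": collect, "organize": organize, "transform": transform}

for name in TopologicalSorter(dag).static_order():
    steps[name]()  # in a serverless deployment each step would be its own invocation
```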
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
Managing and auditing access to your servers and databases is a problem that grows in difficulty alongside the growth of your teams. If you are tired of wasting your time cobbling together scripts and workarounds to give your developers, data scientists, and managers the permissions that they need then it’s time to talk to our friends at strongDM. They have built an easy to use platform that lets you leverage your company’s single sign on for your data platform. Go to dataengineeringpodcast.com/strongdm today to find out how you can simplify your systems.
Alluxio is an open source, distributed data orchestration layer that makes it easier to scale your compute and your storage independently. By transparently pulling data from underlying silos, Alluxio unlocks the value of your data and allows for modern computation-intensive workloads to become truly elastic and flexible for the cloud. With Alluxio, companies like Barclays, JD.com, Tencent, and Two Sigma can manage data efficiently, accelerate business analytics, and ease the adoption of any cloud. Go to dataengineeringpodcast.com/alluxio today to learn more and thank them for their support.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, and the Open Data Science Conference. Go to dataengineeringpodcast.com/conferences to learn more and take advantage of our partner discounts when you register.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Your host is Tobias Macey and today I’m interviewing Raghu Murthy about DataCoral, a platform that offers a fully managed and secure stack in your own cloud that delivers data to where you need it
Interview
Introduction
How did you get involved in the area of data management?
Can you start by explaining what DataCoral is and your motivation for founding it?
How does the data-centric approach of DataCoral differ from the way that other platforms think about processing information?
Can you describe how the DataCoral platform is designed and implemented, and how it has evolved since you first began working on it?
How does the concept of a data slice play into the overall architecture of your platform?
How do you manage transformations of data schemas and formats as they traverse different slices in your platform?
Your site mentions the ability to automatically adjust to changes in external APIs; can you discuss how that manifests?
What has been your experience, both positive and negative, in building on top of serverless components?
Can you discuss the customer experience of onboarding onto Datacoral and how it differs between existing data platforms and greenfield projects?
What are some of the slices that have proven to be the most challenging to implement?
Are there any that you are currently building that you are most excited for?
How much effort do you anticipate if and/or when you begin to support other cloud providers?
When is Datacoral the wrong choice?
What do you have planned for the future of Datacoral, both from a technical and business perspective?
Contact Info
LinkedIn
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
Datacoral
Yahoo!
Apache Hive
Relational Algebra
Social Capital
EIR == Entrepreneur In Residence
Spark
Kafka
AWS Lambda
DAG == Directed Acyclic Graph
AWS Redshift
AWS Athena
AWS Glue
Noisy Neighbor Problem
CI/CD
SnowflakeDB
DataBricks Delta
AWS Sagemaker
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Apr 1, 2019 • 37min
Why Analytics Projects Fail And What To Do About It
Summary
Analytics projects fail all the time, resulting in lost opportunities and wasted resources. There are a number of factors that contribute to that failure and not all of them are under our control. However, many of them are, and as data engineers we can help to keep our projects on the path to success. Eugene Khazin is the CEO of PrimeTSR where he is tasked with rescuing floundering analytics efforts and ensuring that they provide value to the business. In this episode he reflects on the ways that data projects can be structured to provide a higher probability of success and utility, how data engineers can stay involved throughout the project lifecycle, and how to salvage a failed project so that some value can be gained from the effort.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
Managing and auditing access to your servers and databases is a problem that grows in difficulty alongside the growth of your teams. If you are tired of wasting your time cobbling together scripts and workarounds to give your developers, data scientists, and managers the permissions that they need then it’s time to talk to our friends at strongDM. They have built an easy to use platform that lets you leverage your company’s single sign on for your data platform. Go to dataengineeringpodcast.com/strongdm today to find out how you can simplify your systems.
Alluxio is an open source, distributed data orchestration layer that makes it easier to scale your compute and your storage independently. By transparently pulling data from underlying silos, Alluxio unlocks the value of your data and allows for modern computation-intensive workloads to become truly elastic and flexible for the cloud. With Alluxio, companies like Barclays, JD.com, Tencent, and Two Sigma can manage data efficiently, accelerate business analytics, and ease the adoption of any cloud. Go to dataengineeringpodcast.com/alluxio today to learn more and thank them for their support.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, and the Open Data Science Conference. Go to dataengineeringpodcast.com/conferences to learn more and take advantage of our partner discounts when you register.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Your host is Tobias Macey and today I’m interviewing Eugene Khazin about the leading causes for failure in analytics projects
Interview
Introduction
How did you get involved in the area of data management?
The term "analytics" has grown to mean many different things to different people, so can you start by sharing your definition of what is in scope for an "analytics project" for the purposes of this discussion?
What are the criteria that you and your customers use to determine the success or failure of a project?
I was recently speaking with someone who quoted a Gartner report stating an estimated failure rate of ~80% for analytics projects. Has your experience reflected this reality, and what have you found to be the leading causes of failure in your experience at PrimeTSR?
As data engineers, what strategies can we pursue to increase the success rate of the projects that we work on?
What are the contributing factors that are beyond our control, which we can help identify and surface early in the lifecycle of a project?
In the event of a failed project, what are the lessons that we can learn and fold into our future work?
How can we salvage a project and derive some value from the efforts that we have put into it?
What are some useful signals to identify when a project is on the road to failure, and steps that can be taken to rescue it?
What advice do you have for data engineers to help them be more active and effective in the lifecycle of an analytics project?
Contact Info
Email
LinkedIn
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
Prime TSR
Descriptive, Predictive, and Prescriptive Analytics
Azure Data Factory
Azure Data Warehouse
Mulesoft
SSIS (SQL Server Integration Services)
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Mar 25, 2019 • 58min
Building An Enterprise Data Fabric At CluedIn
Summary
Data integration is one of the most challenging aspects of any data platform, especially as the variety of data sources and formats grow. Enterprise organizations feel this acutely due to the silos that occur naturally across business units. The CluedIn team experienced this issue first-hand in their previous roles, leading them to found a business focused on providing a managed data fabric for the enterprise. In this episode Tim Ward, CEO of CluedIn, joins me to explain how their platform is architected, how they handle integration with third-party platforms, how they automate entity extraction and master data management, and how they provide multiple views of the same data for different use cases. I highly recommend listening closely to his explanation of how they manage consistency of the data that they process across different storage backends.
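As a toy illustration of the master data management step mentioned above, the sketch below merges duplicate records for one entity using a simple survivorship rule (most recent non-null value wins). Real MDM systems, CluedIn's included, use far richer matching, scoring, and lineage tracking; the records and field names here are invented.

```python
# Toy sketch of a master-data-management merge: duplicate records for the
# same entity are collapsed with a "most recent non-null value wins" rule.
# Real systems use far richer matching and lineage tracking; this only
# illustrates the survivorship concept. All data below is invented.
from datetime import date

records = [  # hypothetical duplicates of one customer from different silos
    {"source": "crm",     "updated": date(2019, 1, 5),  "email": "t@old.example", "phone": None},
    {"source": "billing", "updated": date(2019, 3, 1),  "email": "t@new.example", "phone": None},
    {"source": "support", "updated": date(2018, 11, 2), "email": None, "phone": "+45 1234"},
]

def merge(records):
    golden = {}
    # Walk records oldest-to-newest so newer non-null values overwrite older ones.
    for rec in sorted(records, key=lambda r: r["updated"]):
        for field, value in rec.items():
            if field in ("source", "updated"):
                continue
            if value is not None:
                golden[field] = value
    return golden

print(merge(records))  # {'email': 't@new.example', 'phone': '+45 1234'}
```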
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
Managing and auditing access to your servers and databases is a problem that grows in difficulty alongside the growth of your teams. If you are tired of wasting your time cobbling together scripts and workarounds to give your developers, data scientists, and managers the permissions that they need then it’s time to talk to our friends at strongDM. They have built an easy to use platform that lets you leverage your company’s single sign on for your data platform. Go to dataengineeringpodcast.com/strongdm today to find out how you can simplify your systems.
Alluxio is an open source, distributed data orchestration layer that makes it easier to scale your compute and your storage independently. By transparently pulling data from underlying silos, Alluxio unlocks the value of your data and allows for modern computation-intensive workloads to become truly elastic and flexible for the cloud. With Alluxio, companies like Barclays, JD.com, Tencent, and Two Sigma can manage data efficiently, accelerate business analytics, and ease the adoption of any cloud. Go to dataengineeringpodcast.com/alluxio today to learn more and thank them for their support.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, and the Open Data Science Conference. Go to dataengineeringpodcast.com/conferences to learn more and take advantage of our partner discounts when you register.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Your host is Tobias Macey and today I’m interviewing Tim Ward about CluedIn, an integration platform for implementing your company’s data fabric
Interview
Introduction
How did you get involved in the area of data management?
Before we get started, can you share your definition of what a data fabric is?
Can you explain what CluedIn is and share the story of how it started?
Can you describe your ideal customer?
What are some of the primary ways that organizations are using CluedIn?
Can you give an overview of the system architecture that you have built and how it has evolved since you first began building it?
For a new customer of CluedIn, what is involved in the onboarding process?
What are some of the most challenging aspects of data integration?
What is your approach to managing the process of cleaning the data that you are ingesting?
How much domain knowledge from a business or industry perspective do you incorporate during onboarding and ongoing execution?
How do you preserve and expose data lineage/provenance to your customers?
How do you manage changes or breakage in the interfaces that you use for source or destination systems?
What are some of the signals that you monitor to ensure the continued healthy operation of your platform?
What are some of the most notable customer success stories that you have experienced?
Are there any notable failures that you have experienced, and if so, what were the lessons learned?
What are some cases where CluedIn is not the right choice?
What do you have planned for the future of CluedIn?
Contact Info
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
CluedIn
Copenhagen, Denmark
A/B Testing
Data Fabric
Dataiku
RapidMiner
Azure Machine Learning Studio
CRM (Customer Relationship Management)
Graph Database
Data Lake
GraphQL
DGraph
Podcast Episode
RabbitMQ
GDPR (General Data Protection Regulation)
Master Data Management
Podcast Interview
OAuth
Docker
Kubernetes
Helm
DevOps
DataOps
DevOps vs DataOps Podcast Interview
Kafka
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast


