

Data Engineering Podcast
Tobias Macey
This show goes behind the scenes for the tools, techniques, and difficulties associated with the discipline of data engineering. Databases, workflows, automation, and data manipulation are just some of the topics that you will find here.
Episodes

Jan 8, 2018 • 47min
Citus Data: Distributed PostgreSQL for Big Data with Ozgun Erdogan and Craig Kerstiens - Episode 13
Ozgun Erdogan and Craig Kerstiens from Citus Data discuss their work on scaling out PostgreSQL, including replication models, distributed backups, and upcoming features for real-time analytics. They also explore the considerations for deploying Citus and compare it to other offerings like Redshift and BigQuery.
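For a flavor of what scaling out PostgreSQL with Citus looks like in practice, here is a minimal sketch of creating and sharding a table from Python. The `create_distributed_table` function comes from the Citus extension; the connection string, table definition, and shard key are illustrative placeholders.

```python
# Minimal sketch: turn an ordinary PostgreSQL table into a Citus
# distributed table. Connection details and schema are placeholders.
import psycopg2

conn = psycopg2.connect("host=coordinator.example.com dbname=app user=app")
conn.autocommit = True
cur = conn.cursor()

# A regular PostgreSQL table, created on the Citus coordinator.
cur.execute("""
    CREATE TABLE events (
        tenant_id bigint NOT NULL,
        event_id  bigint NOT NULL,
        payload   jsonb,
        PRIMARY KEY (tenant_id, event_id)
    )
""")

# Shard the table on tenant_id so each tenant's rows are co-located
# on one worker, keeping per-tenant queries on a single node.
cur.execute("SELECT create_distributed_table('events', 'tenant_id')")

# Reads and writes keep going through the coordinator as plain SQL.
cur.execute(
    "INSERT INTO events (tenant_id, event_id, payload) VALUES (%s, %s, %s)",
    (42, 1, '{"action": "signup"}'),
)
```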

Dec 25, 2017 • 59min
Wallaroo with Sean T. Allen - Episode 12
Summary
Data-oriented applications that need to operate on large, fast-moving streams of information can be difficult to build and scale due to the need to manage their state. In this episode Sean T. Allen, VP of engineering for Wallaroo Labs, explains how Wallaroo was designed and built to reduce the cognitive overhead of building this style of project. He explains the motivation for building Wallaroo, how it is implemented, and how you can start using it today.
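To make the problem concrete, the sketch below shows the kind of per-key state a streaming application has to carry between events, written in plain Python rather than Wallaroo's actual API. The bookkeeping is trivial here, but partitioning, recovering, and rescaling that state across a cluster is the cognitive overhead a framework like Wallaroo aims to absorb.

```python
# Illustrative only (plain Python, not Wallaroo's API): a stateful
# streaming computation that keeps a running total per key.
from collections import defaultdict

class RunningTotals:
    def __init__(self):
        # In a framework like Wallaroo this state would be partitioned
        # across workers, checkpointed, and migrated when resizing.
        self.totals = defaultdict(float)

    def on_event(self, key, amount):
        self.totals[key] += amount
        return key, self.totals[key]

stream = [("alice", 3.0), ("bob", 1.5), ("alice", 2.5)]
state = RunningTotals()
for key, amount in stream:
    print(state.on_event(key, amount))
# ('alice', 3.0)  ('bob', 1.5)  ('alice', 5.5)
```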
Preamble
Hello and welcome to the Data Engineering Podcast, the show about modern data infrastructure
When you’re ready to launch your next project you’ll need somewhere to deploy it. Check out Linode at dataengineeringpodcast.com/linode and get a $20 credit to try out their fast and reliable Linux virtual servers for running your data pipelines or trying out the tools you hear about on the show.
Continuous delivery lets you get new features in front of your users as fast as possible without introducing bugs or breaking production, and GoCD is the open source platform made by the people at Thoughtworks who wrote the book about it. Go to dataengineeringpodcast.com/gocd to download and launch it today. Enterprise add-ons and professional support are available for added peace of mind.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch.
You can help support the show by checking out the Patreon page which is linked from the site.
To help other people find the show you can leave a review on iTunes or Google Play Music, and tell your friends and co-workers.
Your host is Tobias Macey and today I’m interviewing Sean T. Allen about Wallaroo, a framework for building and operating stateful data applications at scale
Interview
Introduction
How did you get involved in the area of data engineering?
What is Wallaroo and how did the project get started?
What is the Pony language, and what features does it have that make it well suited for the problem area that you are focusing on?
Why did you choose to focus first on Python as the language for interacting with Wallaroo and how is that integration implemented?
How is Wallaroo architected internally to allow for distributed state management?
Is the state persistent, or is it only maintained long enough to complete the desired computation?
If so, what format do you use for long term storage of the data?
What have been the most challenging aspects of building the Wallaroo platform?
Which axes of the CAP theorem have you optimized for?
For someone who wants to build an application on top of Wallaroo, what is involved in getting started?
Once you have a working application, what resources are necessary for deploying to production and what are the scaling factors?
What are the failure modes that users of Wallaroo need to account for in their application or infrastructure?
What are some situations or problem types for which Wallaroo would be the wrong choice?
What are some of the most interesting or unexpected uses of Wallaroo that you have seen?
What do you have planned for the future of Wallaroo?
Contact Info
IRC
Mailing List
Wallaroo Labs Twitter
Email
Personal Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
Wallaroo Labs
Storm Applied
Apache Storm
Risk Analysis
Pony Language
Erlang
Akka
Tail Latency
High Performance Computing
Python
Apache Software Foundation
Beyond Distributed Transactions: An Apostate’s View
Consistent Hashing
Jepsen
Lineage Driven Fault Injection
Chaos Engineering
QCon 2016 Talk
Codemesh in London: How did I get here?
CAP Theorem
CRDT
Sync Free Project
Basho
Wallaroo on GitHub
Docker
Puppet
Chef
Ansible
SaltStack
Kafka
TCP
Dask
Data Engineering Episode About Dask
Beowulf Cluster
Redis
Flink
Haskell
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Dec 18, 2017 • 34min
SiriDB: Scalable Open Source Timeseries Database with Jeroen van der Heijden - Episode 11
Summary
Time series databases have long been the cornerstone of a robust metrics system, but the existing options are often difficult to manage in production. In this episode Jeroen van der Heijden explains his motivation for writing a new database, SiriDB, the challenges that he faced in doing so, and how it works under the hood.
Preamble
Hello and welcome to the Data Engineering Podcast, the show about modern data infrastructure
When you’re ready to launch your next project you’ll need somewhere to deploy it. Check out Linode at dataengineeringpodcast.com/linode and get a $20 credit to try out their fast and reliable Linux virtual servers for running your data pipelines or trying out the tools you hear about on the show.
Continuous delivery lets you get new features in front of your users as fast as possible without introducing bugs or breaking production, and GoCD is the open source platform made by the people at Thoughtworks who wrote the book about it. Go to dataengineeringpodcast.com/gocd to download and launch it today. Enterprise add-ons and professional support are available for added peace of mind.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch.
You can help support the show by checking out the Patreon page which is linked from the site.
To help other people find the show you can leave a review on iTunes or Google Play Music, and tell your friends and co-workers.
Your host is Tobias Macey and today I’m interviewing Jeroen van der Heijden about SiriDB, a next generation time series database
Interview
Introduction
How did you get involved in the area of data engineering?
What is SiriDB and how did the project get started?
What was the inspiration for the name?
What was the landscape of time series databases at the time that you first began work on Siri?
How does Siri compare to other time series databases such as InfluxDB, Timescale, KairosDB, etc.?
What do you view as the competition for Siri?
How is the server architected and how has the design evolved over the time that you have been working on it?
Can you describe how the clustering mechanism functions?
Is it possible to create pools with more than two servers?
What are the failure modes for SiriDB and where does it fall on the spectrum for the CAP theorem?
In the documentation it mentions needing to specify the retention period for the shards when creating a database. What is the reasoning for that and what happens to the individual metrics as they age beyond that time horizon?
One of the common difficulties when using a time series database in an operations context is the need for high cardinality of the metrics. How are metrics identified in Siri and is there any support for tagging?
What have been the most challenging aspects of building Siri?
In what situations or environments would you advise against using Siri?
Contact Info
joente on GitHub
LinkedIn
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
SiriDB
Oversight
InfluxDB
LevelDB
OpenTSDB
Timescale DB
KairosDB
Write Ahead Log
Grafana
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Dec 10, 2017 • 49min
Confluent Schema Registry with Ewen Cheslack-Postava - Episode 10
Summary
To process your data you need to know what shape it has, which is why schemas are important. When you are processing that data in multiple systems it can be difficult to ensure that they all have an accurate representation of that schema, which is why Confluent has built a schema registry that plugs into Kafka. In this episode Ewen Cheslack-Postava explains what the schema registry is, how it can be used, and how they built it. He also discusses how it can be extended for other deployment targets and use cases, and additional features that are planned for future releases.
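For readers who want to see the registry in action, here is a minimal sketch against its documented REST interface: register an Avro schema under a subject, then resolve it back by id. The registry URL and subject name are placeholders.

```python
# Sketch: register an Avro schema with a Confluent Schema Registry
# over REST and fetch it back by id. URL and subject are placeholders.
import json
import requests

REGISTRY = "http://localhost:8081"
HEADERS = {"Content-Type": "application/vnd.schemaregistry.v1+json"}

user_schema = {
    "type": "record",
    "name": "User",
    "fields": [
        {"name": "name", "type": "string"},
        {"name": "age", "type": ["null", "int"], "default": None},
    ],
}

# Register the schema under a subject (conventionally "<topic>-value").
resp = requests.post(
    f"{REGISTRY}/subjects/users-value/versions",
    headers=HEADERS,
    data=json.dumps({"schema": json.dumps(user_schema)}),
)
schema_id = resp.json()["id"]

# Producers embed this id in each message; any consumer can resolve it
# back to the full schema, so every system shares one source of truth.
fetched = requests.get(f"{REGISTRY}/schemas/ids/{schema_id}").json()
print(schema_id, fetched["schema"])
```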
Preamble
Hello and welcome to the Data Engineering Podcast, the show about modern data infrastructure
When you’re ready to launch your next project you’ll need somewhere to deploy it. Check out Linode at dataengineeringpodcast.com/linode and get a $20 credit to try out their fast and reliable Linux virtual servers for running your data pipelines or trying out the tools you hear about on the show.
Continuous delivery lets you get new features in front of your users as fast as possible without introducing bugs or breaking production, and GoCD is the open source platform made by the people at Thoughtworks who wrote the book about it. Go to dataengineeringpodcast.com/gocd to download and launch it today. Enterprise add-ons and professional support are available for added peace of mind.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch.
You can help support the show by checking out the Patreon page which is linked from the site.
To help other people find the show you can leave a review on iTunes or Google Play Music, and tell your friends and co-workers.
Your host is Tobias Macey and today I’m interviewing Ewen Cheslack-Postava about the Confluent Schema Registry
Interview
Introduction
How did you get involved in the area of data engineering?
What is the schema registry and what was the motivating factor for building it?
If you are using Avro, what benefits does the schema registry provide over and above the capabilities of Avro’s built-in schemas?
How did you settle on Avro as the format to support and what would be involved in expanding that support to other serialization options?
Conversely, what would be involved in using a storage backend other than Kafka?
What are some of the alternative technologies available for people who aren’t using Kafka in their infrastructure?
What are some of the biggest challenges that you faced while designing and building the schema registry?
What is the tipping point in terms of system scale or complexity when it makes sense to invest in a shared schema registry and what are the alternatives for smaller organizations?
What are some of the features or enhancements that you have in mind for future work?
Contact Info
ewencp on GitHub
Website
@ewencp on Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
Kafka
Confluent
Schema Registry
Second Life
Eve Online
Yes, Virginia, You Really Do Need a Schema Registry
JSON-Schema
Parquet
Avro
Thrift
Protocol Buffers
Zookeeper
Kafka Connect
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Dec 3, 2017 • 46min
data.world with Bryon Jacob - Episode 9
Summary
We have tools and platforms for collaborating on software projects and linking them together; wouldn’t it be nice to have the same capabilities for data? The team at data.world are working on building a platform to host and share data sets for public and private use that can be linked together to build a semantic web of information. The CTO, Bryon Jacob, discusses how the company got started, their mission, and how they have built and evolved their technical infrastructure.
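The linked-data model that underpins the platform is easy to demo in miniature. The sketch below uses rdflib rather than data.world's own client, and the resources and vocabulary are invented for the example: two facts from notionally separate datasets join in one SPARQL query because they share a resource.

```python
# Illustrative sketch of linked data: facts stored as RDF triples and
# joined with SPARQL. Uses rdflib, not data.world's client; the URIs
# and vocabulary are made up for the example.
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/schema/")
g = Graph()

# Facts that could come from two different datasets...
austin = URIRef("http://example.org/city/austin")
g.add((austin, EX.population, Literal(964000)))
g.add((URIRef("http://example.org/company/dataworld"),
       EX.headquarteredIn, austin))

# ...join naturally because they reference the same city resource.
results = g.query("""
    PREFIX ex: <http://example.org/schema/>
    SELECT ?company ?pop WHERE {
        ?company ex:headquarteredIn ?city .
        ?city ex:population ?pop .
    }
""")
for company, pop in results:
    print(company, pop)
```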
Preamble
Hello and welcome to the Data Engineering Podcast, the show about modern data infrastructure
When you’re ready to launch your next project you’ll need somewhere to deploy it. Check out Linode at dataengineeringpodcast.com/linode and get a $20 credit to try out their fast and reliable Linux virtual servers for running your data pipelines or trying out the tools you hear about on the show.
Continuous delivery lets you get new features in front of your users as fast as possible without introducing bugs or breaking production, and GoCD is the open source platform made by the people at Thoughtworks who wrote the book about it. Go to dataengineeringpodcast.com/gocd to download and launch it today. Enterprise add-ons and professional support are available for added peace of mind.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch.
You can help support the show by checking out the Patreon page which is linked from the site.
To help other people find the show you can leave a review on iTunes or Google Play Music, and tell your friends and co-workers.
This is your host Tobias Macey and today I’m interviewing Bryon Jacob about the technology and purpose that drive data.world
Interview
Introduction
How did you first get involved in the area of data management?
What is data.world, what is its mission, and how does your status as a B Corporation tie into that?
The platform that you have built provides hosting for a large variety of data sizes and types. What does the technical infrastructure consist of and how has that architecture evolved from when you first launched?
What are some of the scaling problems that you have had to deal with as the amount and variety of data that you host has increased?
What are some of the technical challenges that you have faced that are unique to the task of hosting a heterogeneous assortment of data sets that are intended for shared use?
How do you deal with issues of privacy or compliance associated with data sets that are submitted to the platform?
What are some of the improvements or new capabilities that you are planning to implement as part of the data.world platform?
What are the projects or companies that you consider to be your competitors?
What are some of the most interesting or unexpected uses of the data.world platform that you are aware of?
Contact Information
@bryonjacob on Twitter
bryonjacob on GitHub
LinkedIn
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
data.world
HomeAway
Semantic Web
Knowledge Engineering
Ontology
Open Data
RDF
CSVW
SPARQL
DBPedia
Triplestore
Header Dictionary Triples
Apache Jena
Tabula
Tableau Connector
Excel Connector
Data For Democracy
Jonathan Morgan
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Nov 22, 2017 • 52min
Data Serialization Formats with Doug Cutting and Julien Le Dem - Episode 8
Summary
With the wealth of formats for sending and storing data it can be difficult to determine which one to use. In this episode Doug Cutting, creator of Avro, and Julien Le Dem, creator of Parquet, dig into the different classes of serialization formats, what their strengths are, and how to choose one for your workload. They also discuss the role of Arrow as a mechanism for in-memory data sharing and how hardware evolution will influence the state of the art for data formats.
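As a small illustration of the row-versus-column distinction at the heart of this conversation, the sketch below writes the same records with fastavro (row-oriented Avro) and pyarrow (column-oriented Parquet); the schema and file names are placeholders.

```python
# Sketch: identical records written as row-oriented Avro and
# column-oriented Parquet. Schema and file names are illustrative.
import fastavro
import pyarrow as pa
import pyarrow.parquet as pq

records = [
    {"user_id": 1, "event": "click", "ms": 120},
    {"user_id": 2, "event": "view", "ms": 45},
]

# Avro lays records down one after another, which suits streaming
# hand-offs and record-at-a-time pipelines.
schema = fastavro.parse_schema({
    "type": "record",
    "name": "Event",
    "fields": [
        {"name": "user_id", "type": "long"},
        {"name": "event", "type": "string"},
        {"name": "ms", "type": "long"},
    ],
})
with open("events.avro", "wb") as out:
    fastavro.writer(out, schema, records)

# Parquet lays values down by column, which suits analytical scans
# that touch a few columns across many rows.
table = pa.table({
    "user_id": [r["user_id"] for r in records],
    "event": [r["event"] for r in records],
    "ms": [r["ms"] for r in records],
})
pq.write_table(table, "events.parquet")
```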
Preamble
Hello and welcome to the Data Engineering Podcast, the show about modern data infrastructure
When you’re ready to launch your next project you’ll need somewhere to deploy it. Check out Linode at dataengineeringpodcast.com/linode and get a $20 credit to try out their fast and reliable Linux virtual servers for running your data pipelines or trying out the tools you hear about on the show.
Continuous delivery lets you get new features in front of your users as fast as possible without introducing bugs or breaking production, and GoCD is the open source platform made by the people at Thoughtworks who wrote the book about it. Go to dataengineeringpodcast.com/gocd to download and launch it today. Enterprise add-ons and professional support are available for added peace of mind.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch.
You can help support the show by checking out the Patreon page which is linked from the site.
To help other people find the show you can leave a review on iTunes or Google Play Music, and tell your friends and co-workers.
This is your host Tobias Macey and today I’m interviewing Julien Le Dem and Doug Cutting about data serialization formats and how to pick the right one for your systems.
Interview
Introduction
How did you first get involved in the area of data management?
What are the main serialization formats used for data storage and analysis?
What are the tradeoffs that are offered by the different formats?
How have the different storage and analysis tools influenced the types of storage formats that are available?
You’ve each developed a new on-disk data format, Avro and Parquet respectively. What were your motivations for investing that time and effort?
Why is it important for data engineers to carefully consider the format in which they transfer their data between systems?
What are the switching costs involved in moving from one format to another after you have started using it in a production system?
What are some of the new or upcoming formats that you are each excited about?
How do you anticipate the evolving hardware, patterns, and tools for processing data to influence the types of storage formats that maintain or grow their popularity?
Contact Information
Doug:
cutting on GitHub
Blog
@cutting on Twitter
Julien
Email
@J_ on Twitter
Blog
julienledem on GitHub
Links
Apache Avro
Apache Parquet
Apache Arrow
Hadoop
Apache Pig
Xerox PARC
Excite
Nutch
Vertica
Dremel White Paper
Twitter Blog on Release of Parquet
CSV
XML
Hive
Impala
Presto
Spark SQL
Brotli
ZStandard
Apache Drill
Trevni
Apache Calcite
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Nov 14, 2017 • 44min
Buzzfeed Data Infrastructure with Walter Menendez - Episode 7
Summary
Buzzfeed needs to understand how its users are interacting with the myriad articles, videos, and other content that it publishes so that it can keep producing new material that will be well-received. To surface the insights needed to grow the business, the company relies on a robust data infrastructure to reliably capture all of those interactions. Walter Menendez is a data engineer on the infrastructure team, and in this episode he describes how they manage data ingestion from a wide array of sources and provide an interface for their data scientists to produce valuable conclusions.
Preamble
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to launch your next project you’ll need somewhere to deploy it. Check out Linode at dataengineeringpodcast.com/linode and get a $20 credit to try out their fast and reliable Linux virtual servers for running your data pipelines or trying out the tools you hear about on the show.
Continuous delivery lets you get new features in front of your users as fast as possible without introducing bugs or breaking production, and GoCD is the open source platform made by the people at Thoughtworks who wrote the book about it. Go to dataengineeringpodcast.com/gocd to download and launch it today. Enterprise add-ons and professional support are available for added peace of mind.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch.
You can help support the show by checking out the Patreon page which is linked from the site.
To help other people find the show you can leave a review on iTunes or Google Play Music, and tell your friends and co-workers.
Your host is Tobias Macey and today I’m interviewing Walter Menendez about the data engineering platform at Buzzfeed
Interview
Introduction
How did you get involved in the area of data management?
How is the data engineering team at Buzzfeed structured and what kinds of projects are you responsible for?
What are some of the types of data inputs and outputs that you work with at Buzzfeed?
Is the core of your system using a real-time streaming approach or is it primarily batch-oriented, and what are the business needs that drive that decision?
What does the architecture of your data platform look like and what are some of the most significant areas of technical debt?
Which platforms and languages are most widely leveraged in your team and what are some of the outliers?
What are some of the most significant challenges that you face, both technically and organizationally?
What are some of the dead ends that you have run into or failed projects that you have tried?
What has been the most successful project that you have completed and how do you measure that success?
Contact Info
@hackwalter on Twitter
walterm on GitHub
Links
Data Literacy
MIT Media Lab
Tumblr
Data Capital
Data Infrastructure
Google Analytics
Datadog
Python
Numpy
SciPy
NLTK
Go Language
NSQ
Tornado
PySpark
AWS EMR
Redshift
Tracking Pixel
Google Cloud
Don’t try to be Google
Stop Hiring DevOps Engineers and Start Growing Them
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Aug 6, 2017 • 43min
Astronomer with Ry Walker - Episode 6
Summary
Building a data pipeline that is reliable and flexible is a difficult task, especially when you have a small team. Astronomer is a platform that lets you skip straight to processing your valuable business data. Ry Walker, the CEO of Astronomer, explains how the company got started, how the platform works, and their commitment to open source.
Preamble
Hello and welcome to the Data Engineering Podcast, the show about modern data infrastructure
When you’re ready to launch your next project you’ll need somewhere to deploy it. Check out Linode at dataengineeringpodcast.com/linode and get a $20 credit to try out their fast and reliable Linux virtual servers for running your data pipelines or trying out the tools you hear about on the show.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch.
You can help support the show by checking out the Patreon page which is linked from the site.
To help other people find the show you can leave a review on iTunes or Google Play Music, and tell your friends and co-workers.
This is your host Tobias Macey and today I’m interviewing Ry Walker, CEO of Astronomer, the platform for data engineering.
Interview
Introduction
How did you first get involved in the area of data management?
What is Astronomer and how did it get started?
Regulatory challenges of processing other people’s data
What does your data pipelining architecture look like?
What are the most challenging aspects of building a general purpose data management environment?
What are some of the most significant sources of technical debt in your platform?
Can you share some of the failures that you have encountered while architecting or building your platform and company and how you overcame them?
There are certain areas of the overall data engineering workflow that are well defined and have numerous tools to choose from. What are some of the unsolved problems in data management?
What are some of the most interesting or unexpected uses of your platform that you are aware of?
Contact Information
Email
@rywalker on Twitter
Links
Astronomer
Kiss Metrics
Segment
Marketing tools chart
Clickstream
HIPAA
FERPA
PCI
Mesos
Mesos DC/OS
Airflow
SSIS
Marathon
Prometheus
Grafana
Terraform
Kafka
Spark
ELK Stack
React
GraphQL
PostgreSQL
MongoDB
Ceph
Druid
Aries
Vault
Adapter Pattern
Docker
Kinesis
API Gateway
Kong
AWS Lambda
Flink
Redshift
NOAA
Informatica
SnapLogic
Meteor
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Jun 18, 2017 • 42min
Rebuilding Yelp's Data Pipeline with Justin Cunningham - Episode 5
Summary
Yelp needs to be able to consume and process all of the user interactions that happen in their platform in as close to real-time as possible. To achieve that goal they embarked on a journey to refactor their monolithic architecture to be more modular and modern, and then they open sourced it! In this episode Justin Cunningham joins me to discuss the decisions they made and the lessons they learned in the process, including what worked, what didn’t, and what he would do differently if he was starting over today.
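To give a feel for the consumption side of a pipeline like the one described, here is a minimal Kafka consumer sketch; the broker address, group id, topic name, and JSON payloads are placeholders, not Yelp's actual setup.

```python
# Sketch: consuming change events from a Kafka topic, the pattern at
# the heart of the pipeline discussed. All names are placeholders.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "downstream-service",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["db.users.changelog"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print("consumer error:", msg.error())
            continue
        # Each message is one change captured upstream; downstream
        # systems stay in sync by applying these in order.
        change = json.loads(msg.value())
        print(change)
finally:
    consumer.close()
```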
Preamble
Hello and welcome to the Data Engineering Podcast, the show about modern data infrastructure
When you’re ready to launch your next project you’ll need somewhere to deploy it. Check out Linode at dataengineeringpodcast.com/linode and get a $20 credit to try out their fast and reliable Linux virtual servers for running your data pipelines or trying out the tools you hear about on the show.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch.
You can help support the show by checking out the Patreon page which is linked from the site.
To help other people find the show you can leave a review on iTunes or Google Play Music, and tell your friends and co-workers.
Your host is Tobias Macey and today I’m interviewing Justin Cunningham about Yelp’s data pipeline
Interview with Justin Cunningham
Introduction
How did you get involved in the area of data engineering?
Can you start by giving an overview of your pipeline and the type of workload that you are optimizing for?
What are some of the dead ends that you experienced while designing and implementing your pipeline?
As you were picking the components for your pipeline, how did you prioritize the build vs buy decisions and what are the pieces that you ended up building in-house?
What are some of the failure modes that you have experienced in the various parts of your pipeline and how have you engineered around them?
What are you using to automate deployment and maintenance of your various components and how do you monitor them for availability and accuracy?
While you were re-architecting your monolithic application into a service oriented architecture and defining the flows of data, how were you able to make the switch while verifying that you were not introducing unintended mutations into the data being produced?
Did you plan to open-source the work that you were doing from the start, or was that decision made after the project was completed? What were some of the challenges associated with making sure that it was properly structured to be amenable to making it public?
What advice would you give to anyone who is starting a brand new project and how would that advice differ for someone who is trying to retrofit a data management architecture onto an existing project?
Keep in touch
Yelp Engineering Blog
Email
Links
Kafka
Redshift
ETL
Business Intelligence
Change Data Capture
LinkedIn Data Bus
Apache Storm
Apache Flink
Confluent
Apache Avro
Game Days
Chaos Monkey
Simian Army
PaaSta
Apache Mesos
Marathon
SignalFX
Sensu
Thrift
Protocol Buffers
JSON Schema
Debezium
Kafka Connect
Apache Beam
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Mar 18, 2017 • 35min
ScyllaDB with Eyal Gutkind - Episode 4
Summary
If you like the features of Cassandra but wish it ran faster with fewer resources, then ScyllaDB is the answer you have been looking for. In this episode Eyal Gutkind explains how Scylla was created and how it differentiates itself in the crowded database market.
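Because Scylla speaks CQL, the drop-in compatibility discussed in the episode can be sketched with the standard Cassandra Python driver pointed at Scylla nodes; the contact points, keyspace, and table here are placeholders.

```python
# Sketch: the standard Cassandra driver talking to a Scylla cluster
# unchanged. Contact points, keyspace, and table are placeholders.
from cassandra.cluster import Cluster

cluster = Cluster(["10.0.0.1", "10.0.0.2"])  # Scylla nodes
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS metrics
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS metrics.readings (
        sensor_id text,
        ts timestamp,
        value double,
        PRIMARY KEY (sensor_id, ts)
    )
""")

# The same CQL statements and driver calls you would run on Cassandra.
session.execute(
    "INSERT INTO metrics.readings (sensor_id, ts, value) "
    "VALUES (%s, toTimestamp(now()), %s)",
    ("sensor-1", 21.5),
)
for row in session.execute(
    "SELECT * FROM metrics.readings WHERE sensor_id = %s", ("sensor-1",)
):
    print(row.sensor_id, row.ts, row.value)
```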
Preamble
Hello and welcome to the Data Engineering Podcast, the show about modern data infrastructure
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the newsletter, read the show notes, and get in touch.
You can help support the show by checking out the Patreon page which is linked from the site.
To help other people find the show you can leave a review on iTunes or Google Play Music, and tell your friends and co-workers.
Your host is Tobias Macey and today I’m interviewing Eyal Gutkind about ScyllaDB
Interview
Introduction
How did you get involved in the area of data management?
What is ScyllaDB and why would someone choose to use it?
How do you ensure sufficient reliability and accuracy of the database engine?
The large draw of Scylla is that it is a drop-in replacement for Cassandra with faster performance and no requirement to manage the JVM. What are some of the technical and architectural design choices that have enabled you to do that?
Deployment and tuning
What challenges are introduced as a result of needing to maintain API compatibility with a different product?
Do you have visibility or advance knowledge of what new interfaces are being added to the Apache Cassandra project, or are you forced to play a game of keep up?
Are there any issues with compatibility of plugins for Cassandra running on Scylla?
For someone who wants to deploy and tune Scylla, what are the steps involved?
Is it possible to join a Scylla cluster to an existing Cassandra cluster for live data migration and zero downtime swap?
What prompted the decision to form a company around the database?
What are some other uses of Seastar?
Keep in touch
Eyal
LinkedIn
ScyllaDB
Website
@ScyllaDB on Twitter
GitHub
Mailing List
Slack
Links
Seastar Project
DataStax
XFS
TitanDB
OpenTSDB
KairosDB
CQL
Pedis
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA


