

Data Engineering Podcast
Tobias Macey
This show goes behind the scenes of the tools, techniques, and difficulties associated with the discipline of data engineering. Databases, workflows, automation, and data manipulation are just some of the topics that you will find here.
Episodes
Mentioned books

Mar 18, 2019 • 55min
A DataOps vs DevOps Cookoff In The Data Kitchen
Summary
Delivering a data analytics project on time and with accurate information is critical to the success of any business. DataOps is a set of practices to increase the probability of success by creating value early and often, and using feedback loops to keep your project on course. In this episode Chris Bergh, head chef of Data Kitchen, explains how DataOps differs from DevOps, how the industry has begun adopting DataOps, and how to adopt an agile approach to building your data platform.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
Managing and auditing access to your servers and databases is a problem that grows in difficulty alongside the growth of your teams. If you are tired of wasting your time cobbling together scripts and workarounds to give your developers, data scientists, and managers the permissions that they need then it’s time to talk to our friends at strongDM. They have built an easy to use platform that lets you leverage your company’s single sign on for your data platform. Go to dataengineeringpodcast.com/strongdm today to find out how you can simplify your systems.
"There aren’t enough data conferences out there that focus on the community, so that’s why these folks built a better one": Data Council is the premier community-powered data platforms & engineering event for software engineers, data engineers, machine learning experts, deep learning researchers & artificial intelligence buffs who want to discover tools & insights to build new products. This year they will host over 50 speakers and 500 attendees (yeah, that’s one of the best attendee-to-speaker ratios out there) in San Francisco on April 17th-18th, and they are offering a $200 discount to listeners of the Data Engineering Podcast. Use code DEP-200 at checkout.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, and the Open Data Science Conference. Go to dataengineeringpodcast.com/conferences to learn more and take advantage of our partner discounts when you register.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Your host is Tobias Macey and today I’m interviewing Chris Bergh about the current state of DataOps and why it’s more than just DevOps for data
Interview
Introduction
How did you get involved in the area of data management?
We talked last year about what DataOps is, but can you give a quick overview of how the industry has changed or updated the definition since then?
It is easy to draw parallels between DataOps and DevOps, can you provide some clarity as to how they are different?
How has the conversation around DataOps influenced the design decisions of platforms and system components that are targeting the "big data" and data analytics ecosystem?
One of the commonalities is the desire to use collaboration as a means of reducing silos in a business. In the data management space, those silos are often in the form of distinct storage systems, whether application databases, corporate file shares, CRM systems, etc. What are some techniques that are rooted in the principles of DataOps that can help unify those data systems?
Another shared principle is in the desire to create feedback cycles. How do those feedback loops manifest in the lifecycle of an analytics project?
Testing is critical to ensure the continued health and success of a data project. What are some of the current utilities that are available to data engineers for building and executing tests to cover the data lifecycle, from collection through to analysis and delivery?
What are some of the components of a data analytics lifecycle that are resistant to agile or iterative development?
With the continued rise in the use of machine learning in production, how does that change the requirements for delivery and maintenance of an analytics platform?
What are some of the trends that you are most excited for in the analytics and data platform space?
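Testing the data lifecycle, as raised in the questions above, often starts with simple automated checks that run as a pipeline step and fail fast when incoming data drifts. The sketch below is purely illustrative of that idea (the record shape and field names are hypothetical, not DataKitchen's tooling):

```python
# A minimal sketch of an automated data test: assertions on presence
# and value ranges that can gate a pipeline stage. Field names are
# hypothetical examples.

def check_orders(rows):
    """Validate a batch of order records before loading them downstream."""
    errors = []
    if not rows:
        errors.append("batch is empty")
    for i, row in enumerate(rows):
        if row.get("order_id") is None:
            errors.append(f"row {i}: missing order_id")
        if not isinstance(row.get("amount"), (int, float)) or row["amount"] < 0:
            errors.append(f"row {i}: amount must be a non-negative number")
    return errors

batch = [
    {"order_id": 1, "amount": 19.99},
    {"order_id": None, "amount": 5.00},
    {"order_id": 3, "amount": -2.50},
]
problems = check_orders(batch)
for p in problems:
    print(p)
```

In a DataOps workflow, a non-empty error list from a check like this would stop the run and feed back to the data producer rather than letting bad records reach the analytics layer.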
Contact Info
Data Kitchen
Email
Chris
LinkedIn
@ChrisBergh on Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
Download the "DataOps Cookbook"
Data Kitchen
Peace Corps
MIT
NASA
Myers-Briggs Personality Test
HBR (Harvard Business Review)
MBA (Master of Business Administration)
W. Edwards Deming
DevOps
Lean Manufacturing
Tableau
Excel
Airflow
Podcast.init Interview
Looker
Podcast Interview
R Language
Alteryx
Data Lake
Data Literacy
Data Governance
Datadog
Kubernetes
Kubeflow
Metis Machine
Gartner Hype Cycle
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Mar 4, 2019 • 48min
Customer Analytics At Scale With Segment
Summary
Customer analytics is a problem domain that has given rise to its own industry. In order to gain a full understanding of what your users are doing and how best to serve them you may need to send data to multiple services, each with their own tracking code or APIs. To simplify this process and allow your non-engineering employees to gain access to the information they need to do their jobs Segment provides a single interface for capturing data and routing it to all of the places that you need it. In this interview Segment CTO and co-founder Calvin French-Owen explains how the company got started, how it manages to multiplex data streams from multiple sources to multiple destinations, and how it can simplify your work of gaining visibility into how your customers are engaging with your business.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with O’Reilly Media for the Strata conference in San Francisco on March 25th and the Artificial Intelligence conference in NYC on April 15th. Here in Boston, starting on May 17th, you still have time to grab a ticket to the Enterprise Data World, and from April 30th to May 3rd is the Open Data Science Conference. Go to dataengineeringpodcast.com/conferences to learn more and take advantage of our partner discounts when you register.
Your host is Tobias Macey and today I’m interviewing Calvin French-Owen about the data platform that Segment has built to handle multiplexing continuous streams of data from multiple sources to multiple destinations
Interview
Introduction
How did you get involved in the area of data management?
Can you start by explaining what Segment is and how the business got started?
What are some of the primary ways that your customers are using the Segment platform?
How have the capabilities and use cases of the Segment platform changed since it was first launched?
Layered on top of the data integration platform you have added the concepts of Protocols and Personas. Can you explain how each of those products fit into the overall structure of Segment and the driving force behind their design and use?
What are some of the best practices for structuring custom events in a way that they can be easily integrated with downstream platforms?
How do you manage changes or errors in the events generated by the various sources that you support?
How is the Segment platform architected and how has that architecture evolved over the past few years?
What are some of the unique challenges that you face as a result of being a many-to-many event routing platform?
In addition to the various services that you integrate with for data delivery, you also support populating data warehouses. What is involved in establishing and maintaining the schema and transformations for a customer?
What have been some of the most interesting, unexpected, and/or challenging lessons that you have learned while building and growing the technical and business aspects of Segment?
What are some of the features and improvements, both technical and business, that you have planned for the future?
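The many-to-many routing challenge discussed above can be pictured as a fan-out: one tracking call is delivered to every destination wired to its source. The toy router below only illustrates the concept; it is not Segment's actual implementation, and all names in it are hypothetical.

```python
# A toy fan-out router: events from a named source are delivered to
# every destination callable connected to that source. Illustrative
# only; not Segment's code.

class EventRouter:
    def __init__(self):
        # source name -> list of destination callables
        self.routes = {}

    def connect(self, source, destination):
        self.routes.setdefault(source, []).append(destination)

    def track(self, source, event, properties):
        """Deliver one event to every destination wired to `source`."""
        count = 0
        for destination in self.routes.get(source, []):
            destination({"event": event, "properties": properties})
            count += 1
        return count

analytics_events = []
warehouse_rows = []

router = EventRouter()
router.connect("web", analytics_events.append)
router.connect("web", warehouse_rows.append)

router.track("web", "Signed Up", {"plan": "free"})
```

A production system layers retries, ordering guarantees, and per-destination dead letter queues on top of this basic shape, which is where much of the engineering complexity described in the interview lives.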
Contact Info
LinkedIn
@calvinfo on Twitter
Website
calvinfo on GitHub
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
Segment
AWS
ClassMetric
Y Combinator
Amplitude web and mobile analytics
Mixpanel
Kissmetrics
Hacker News
Segment Connections
User Analytics
SalesForce
Redshift
BigQuery
Kinesis
Google Cloud PubSub
Segment Protocols data governance product
Segment Personas
Heap Analytics
Podcast Episode
Hotel Tonight
Golang
Kafka
GDPR
RocksDB
Dead Letter Queue
Segment Centrifuge
Webhook
Google Analytics
Intercom
Stripe
GRPC
DynamoDB
FoundationDB
Parquet
Podcast Episode
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Feb 25, 2019 • 43min
Deep Learning For Data Engineers
Summary
Deep learning is the latest class of technology that is gaining widespread interest. As data engineers we are responsible for building and managing the platforms that power these models. To help us understand what is involved, we are joined this week by Thomas Henson. In this episode he shares his experiences experimenting with deep learning, what data engineers need to know about the infrastructure and data requirements to power the models that your team is building, and how it can be used to supercharge our ETL pipelines.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
To help other people find the show please leave a review on iTunes, or Google Play Music, tell your friends and co-workers, and share it on social media.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data platforms. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss the Strata conference in San Francisco on March 25th and the Artificial Intelligence conference in NYC on April 15th, both run by our friends at O’Reilly Media. Go to dataengineeringpodcast.com/stratacon and dataengineeringpodcast.com/aicon to register today and get 20% off
Your host is Tobias Macey and today I’m interviewing Thomas Henson about what data engineers need to know about deep learning, including how to use it for their own projects
Interview
Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of what deep learning is for anyone who isn’t familiar with it?
What has been your personal experience with deep learning and what set you down that path?
What is involved in building a data pipeline and production infrastructure for a deep learning product?
How does that differ from other types of analytics projects such as data warehousing or traditional ML?
For anyone who is in the early stages of a deep learning project, what are some of the edge cases or gotchas that they should be aware of?
What are your opinions on the level of involvement/understanding that data engineers should have with the analytical products that are being built with the information we collect and curate?
What are some ways that we can use deep learning as part of the data management process?
How does that shift the infrastructure requirements for our platforms?
Cloud providers have been releasing numerous products to provide deep learning and/or GPUs as a managed platform. What are your thoughts on that layer of the build vs buy decision?
What is your litmus test for whether to use deep learning vs explicit ML algorithms or a basic decision tree?
Deep learning algorithms are often a black box in terms of how decisions are made; however, regulations such as GDPR are introducing requirements to explain how a given decision gets made. How does that factor into determining what approach to take for a given project?
For anyone who wants to learn more about deep learning, what are some resources that you recommend?
Contact Info
Website
Pluralsight
@henson_tm on Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
Pluralsight
Dell EMC
Hadoop
DBA (Database Administrator)
Elasticsearch
Podcast Episode
Spark
Podcast Episode
MapReduce
Deep Learning
Machine Learning
Neural Networks
Feature Engineering
SVD (Singular Value Decomposition)
Andrew Ng
Machine Learning Course
Unstructured Data Solutions Team of Dell EMC
Tensorflow
PyTorch
GPU (Graphics Processing Unit)
Nvidia RAPIDS
Project Hydrogen
Submarine
ETL (Extract, Transform, Load)
Supervised Learning
Unsupervised Learning
Apache Kudu
Podcast Episode
CNN (Convolutional Neural Network)
Sentiment Analysis
DataRobot
GDPR
Weapons Of Math Destruction by Cathy O’Neil
Backpropagation
Deep Learning Bootcamps
Thomas Henson Tensorflow Course on Pluralsight
TFLearn
Google ML Bootcamp
Caffe deep learning framework
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Feb 19, 2019 • 60min
Speed Up Your Analytics With The Alluxio Distributed Storage System
Summary
Distributed storage systems are the foundational layer of any big data stack. There are a variety of implementations which support different specialized use cases and come with associated tradeoffs. Alluxio is a distributed virtual filesystem which integrates with multiple persistent storage systems to provide a scalable, in-memory storage layer for scaling computational workloads independent of the size of your data. In this episode Bin Fan explains how he got involved with the project, how it is implemented, and the use cases that it is particularly well suited for. If your storage and compute layers are too tightly coupled and you want to scale them independently then Alluxio is the tool for the job.
Introduction
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute.
Your host is Tobias Macey and today I’m interviewing Bin Fan about Alluxio, a distributed virtual filesystem for unified access to disparate data sources
Interview
Introduction
How did you get involved in the area of data management?
Can you start by explaining what Alluxio is and the history of the project?
What are some of the use cases that Alluxio enables?
How is Alluxio implemented and how has its architecture evolved over time?
What are some of the techniques that you use to mitigate the impact of latency, particularly when interfacing with storage systems across cloud providers and private data centers?
When dealing with large volumes of data over time it is often necessary to age out older records to cheaper storage. What capabilities does Alluxio provide for that lifecycle management?
What are some of the most complex or challenging aspects of providing a unified abstraction across disparate storage platforms?
What are the tradeoffs that are made to provide a single API across systems with varying capabilities?
Testing and verification of distributed systems is a complex undertaking. Can you describe the approach that you use to ensure proper functionality of Alluxio as part of the development and release process?
In order to allow for this large scale testing with any regularity it must be straightforward to deploy and configure Alluxio. What are some of the mechanisms that you have built into the platform to simplify the operational aspects?
Can you describe a typical system topology that incorporates Alluxio?
For someone planning a deployment of Alluxio, what should they be considering in terms of system requirements and deployment topologies?
What are some edge cases or operational complexities that they should be aware of?
What are some cases where Alluxio is the wrong choice?
What are some projects or products that provide a similar capability to Alluxio?
What do you have planned for the future of the Alluxio project and company?
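The in-memory acceleration layer described above — a fast tier in front of slower persistent storage, with LRU eviction — can be illustrated with a small cache sketch. This is only an illustration of the caching concept, not Alluxio's implementation; the backing store here is just a dict standing in for a slow filesystem.

```python
# A sketch of a fast cache tier over a slow backing store, with LRU
# eviction. Conceptual only; not Alluxio's code.

from collections import OrderedDict

class CachingStore:
    def __init__(self, backing, capacity=2):
        self.backing = backing        # slow persistent store (a dict here)
        self.cache = OrderedDict()    # fast tier with LRU eviction
        self.capacity = capacity
        self.hits = 0
        self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)   # mark as recently used
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        value = self.backing[key]         # fall through to slow storage
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return value

store = CachingStore({"a": 1, "b": 2, "c": 3}, capacity=2)
store.read("a")   # miss: fetched from backing store
store.read("a")   # hit: served from the fast tier
store.read("b")   # miss
store.read("c")   # miss, evicts "a"
```

The point of a system like Alluxio is that this fast tier is shared across compute frameworks and backed by many different storage systems, so the cache capacity can be scaled independently of the data volume underneath.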
Contact Info
LinkedIn
@binfan on Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
Alluxio
Project
Company
Carnegie Mellon University
Memcached
Key/Value Storage
UC Berkeley AMPLab
Apache Spark
Podcast Episode
Presto
Podcast Episode
Tensorflow
HDFS
LRU Cache
Hive Metastore
Iceberg Table Format
Podcast Episode
Java
Dependency Hell
Java Class Loader
Apache Zookeeper
Podcast Interview
Raft Consensus Algorithm
Consistent Hashing
Alluxio Testing At Scale Blog Post
S3Guard
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Feb 11, 2019 • 48min
Machine Learning In The Enterprise
Summary
Machine learning is a class of technologies that promise to revolutionize business. Unfortunately, it can be difficult to identify and execute on ways that it can be used in large companies. Kevin Dewalt founded Prolego to help Fortune 500 companies build, launch, and maintain their first machine learning projects so that they can remain competitive in our landscape of constant change. In this episode he discusses why machine learning projects require a new set of capabilities, how to build a team from internal and external candidates, and how an example project progressed through each phase of maturity. This was a great conversation for anyone who wants to understand the benefits and tradeoffs of machine learning for their own projects and how to put it into practice.
Introduction
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Your host is Tobias Macey and today I’m interviewing Kevin Dewalt about his experiences at Prolego, building machine learning projects for Fortune 500 companies
Interview
Introduction
How did you get involved in the area of data management?
For the benefit of software engineers and team leaders who are new to machine learning, can you briefly describe what machine learning is and why it is relevant to them?
What is your primary mission at Prolego and how did you identify, execute on, and establish a presence in your particular market?
How much of your sales process is spent on educating your clients about what AI or ML are and the benefits that these technologies can provide?
What have you found to be the technical skills and capacity necessary for being successful in building and deploying a machine learning project?
When engaging with a client, what have you found to be the most common areas of technical capacity or knowledge that are needed?
Everyone talks about a talent shortage in machine learning. Can you suggest a recruiting or skills development process for companies which need to build out their data engineering practice?
What challenges will teams typically encounter when creating an efficient working relationship between data scientists and data engineers?
Can you briefly describe a successful project of developing a first ML model and putting it into production?
What is the breakdown of how much time was spent on different activities such as data wrangling, model development, and data engineering pipeline development?
When releasing to production, can you share the types of metrics that you track to ensure the health and proper functioning of the models?
What does a deployable artifact for a machine learning/deep learning application look like?
What basic technology stack is necessary for putting the first ML models into production?
How does the build vs. buy debate break down in this space and what products do you typically recommend to your clients?
What are the major risks associated with deploying ML models and how can a team mitigate them?
Suppose a software engineer wants to break into ML. What data engineering skills would you suggest they learn? How should they position themselves for the right opportunity?
Contact Info
Email: Kevin Dewalt kevin@prolego.io and Russ Rands russ@prolego.io
Connect on LinkedIn: Kevin Dewalt and Russ Rands
Twitter: @kevindewalt
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
Prolego
Download our book: Become an AI Company in 90 Days
Google Rules Of ML
AI Winter
Machine Learning
Supervised Learning
O’Reilly Strata Conference
GE Rebranding Commercials
Jez Humble: Stop Hiring DevOps Experts (And Start Growing Them)
SQL
ORM
Django
RoR
Tensorflow
PyTorch
Keras
Data Engineering Podcast Episode About Data Teams
DevOps For Data Teams – DevOps Days Boston Presentation by Tobias
Jupyter Notebook
Data Engineering Podcast: Notebooks at Netflix
Pandas
Podcast Interview
Joel Grus
JupyterCon Presentation
Data Science From Scratch
Expensify
Airflow
James Meickle Interview
Git
Jenkins
Continuous Integration
Practical Deep Learning For Coders Course by Jeremy Howard
Data Carpentry
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Feb 4, 2019 • 1h 1min
Cleaning And Curating Open Data For Archaeology
Summary
Archaeologists collect and create a variety of data as part of their research and exploration. Open Context is a platform for cleaning, curating, and sharing this data. In this episode Eric Kansa describes how they process, clean, and normalize the data that they host, the challenges that they face with scaling ETL processes which require domain specific knowledge, and how the information contained in connections that they expose is being used for interesting projects.
Introduction
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Your host is Tobias Macey and today I’m interviewing Eric Kansa about Open Context, a platform for publishing, managing, and sharing research data
Interview
Introduction
How did you get involved in the area of data management?
I did some database and GIS work for my dissertation in archaeology, back in the late 1990s. I got frustrated at the lack of comparative data, and I got frustrated at all the work I put into creating data that nobody would likely use. So I decided to focus my energies on research data management.
Can you start by describing what Open Context is and how it started?
Open Context is an open access data publishing service for archaeology. It started because we need better ways of disseminating structured data and digital media than is possible with conventional articles, books, and reports.
What are your protocols for determining which data sets you will work with?
Datasets need to come from research projects that meet the normal standards of professional conduct (laws, ethics, professional norms) articulated by archaeology’s professional societies.
What are some of the challenges unique to research data?
What are some of the unique requirements for processing, publishing, and archiving research data?
You have to work on a shoestring budget, essentially providing "public goods". Archaeologists typically don’t have much discretionary money available, and publishing and archiving data are not yet very common practices.
Another issue is that it will take a long time to publish enough data to power many "meta-analyses" that draw upon many datasets. Lots of archaeological data describes very particular places and times. Because datasets can be so particularistic, finding data relevant to your interests can be hard. So, we face a monumental task in supplying enough data to satisfy many, many particularistic interests.
How much education is necessary around your content licensing for researchers who are interested in publishing their data with you?
We require use of Creative Commons licenses, and greatly encourage the CC-BY license or CC-Zero (public domain) to try to keep things simple and easy to understand.
Can you describe the system architecture that you use for Open Context?
Open Context is a Django Python application, with a Postgres database and an Apache Solr index. It’s running on Google Cloud services on Debian Linux.
What is the process for cleaning and formatting the data that you host?
How much domain expertise is necessary to ensure proper conversion of the source data?
That’s one of the bottlenecks. We have to do an ETL (extract, transform, load) pass on each dataset researchers submit for publication. Each dataset may need lots of cleaning and back-and-forth conversations with data creators.
Can you discuss the challenges that you face in maintaining a consistent ontology?
What pieces of metadata do you track for a given data set?
Can you speak to the average size of data sets that you manage and any approach that you use to optimize for cost of storage and processing capacity?
Can you walk through the lifecycle of a given data set?
Data archiving is a complicated and difficult endeavor due to issues pertaining to changing data formats and storage media, as well as repeatability of computing environments to generate and/or process them. Can you discuss the technical and procedural approaches that you take to address those challenges?
Once the data is stored you expose it for public use via a set of APIs which support linked data. Can you discuss any complexities that arise from needing to identify and expose interrelations between the data sets?
What are some of the most interesting uses you have seen of the data that is hosted on Open Context?
What have been some of the most interesting/useful/challenging lessons that you have learned while working on Open Context?
What are your goals for the future of Open Context?
Contact Info
@ekansa on Twitter
LinkedIn
ResearchGate
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
Open Context
Bronze Age
GIS (Geographic Information System)
Filemaker
Access Database
Excel
Creative Commons
Open Context On Github
Django
PostgreSQL
Apache Solr
GeoJSON
JSON-LD
RDF
OCHRE
SKOS (Simple Knowledge Organization System)
Django Reversion
California Digital Library
Zenodo
CERN
Digital Index of North American Archaeology (DINAA)
Ansible
Docker
OpenRefine
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Jan 29, 2019 • 42min
Managing Database Access Control For Teams With strongDM
Summary
Controlling access to a database is a solved problem… right? It can be straightforward for small teams and a small number of storage engines, but once either or both of those start to scale then things quickly become complex and difficult to manage. After years of running across the same issues in numerous companies and even more projects Justin McCarthy built strongDM to solve database access management for everyone. In this episode he explains how the strongDM proxy works to grant and audit access to storage systems and the benefits that it provides to engineers and team leads.
Introduction
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
To help other people find the show please leave a review on iTunes, or Google Play Music, tell your friends and co-workers, and share it on social media.
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Your host is Tobias Macey and today I’m interviewing Justin McCarthy about StrongDM, a hosted service that simplifies access controls for your data
Interview
Introduction
How did you get involved in the area of data management?
Can you start by explaining the problem that StrongDM is solving and how the company got started?
What are some of the most common challenges around managing access and authentication for data storage systems?
What are some of the most interesting workarounds that you have seen?
Which areas of authentication, authorization, and auditing are most commonly overlooked or misunderstood?
Can you describe the architecture of your system?
What strategies have you used to enable interfacing with such a wide variety of storage systems?
What additional capabilities do you provide beyond what is natively available in the underlying systems?
What are some of the most difficult aspects of managing varying levels of permission for different roles across the diversity of platforms that you support, given that they each have different capabilities natively?
For a customer who is onboarding, what is involved in setting up your platform to integrate with their systems?
What are some of the assumptions that you made about your problem domain and market when you first started which have been disproven?
How do organizations in different industries react to your product and how do their policies around granting access to data differ?
What are some of the most interesting/unexpected/challenging lessons that you have learned in the process of building and growing StrongDM?
Contact Info
LinkedIn
@justinm on Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
StrongDM
Authentication Vs. Authorization
Hashicorp Vault
Configuration Management
Chef
Puppet
SaltStack
Ansible
Okta
SSO (Single Sign-On)
SOC 2
Two Factor Authentication
SSH (Secure SHell)
RDP
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Jan 21, 2019 • 48min
Building Enterprise Big Data Systems At LEGO
Summary
Building internal expertise around big data in a large organization is a major competitive advantage. However, it can be a difficult process due to compliance needs and the need to scale globally on day one. In this episode Jesper Søgaard and Keld Antonsen share the story of starting and growing the big data group at LEGO. They discuss the challenges of being at global scale from the start, hiring and training talented engineers, prototyping and deploying new systems in the cloud, and what they have learned in the process. This is a useful conversation for engineers, managers, and leadership who are interested in building enterprise big data systems.
Preamble
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
To help other people find the show please leave a review on iTunes, or Google Play Music, tell your friends and co-workers, and share it on social media.
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Your host is Tobias Macey and today I’m interviewing Keld Antonsen and Jesper Soegaard about the data infrastructure and analytics that powers LEGO
Interview
Introduction
How did you get involved in the area of data management?
My understanding is that the big data group at LEGO is a fairly recent development. Can you share the story of how it got started?
What kinds of data practices were in place prior to starting a dedicated group for managing the organization’s data?
What was the transition process like, migrating data silos into a uniformly managed platform?
What are the biggest data challenges that you face at LEGO?
What are some of the most critical sources and types of data that you are managing?
What are the main components of the data infrastructure that you have built to support the organization’s analytical needs?
What are some of the technologies that you have found to be most useful?
Which have been the most problematic?
What does the team structure look like for the data services at LEGO?
Is that reflected in the types/numbers of systems that you support?
What types of testing, monitoring, and metrics do you use to ensure the health of the systems you support?
What have been some of the most interesting, challenging, or useful lessons that you have learned while building and maintaining the data platforms at LEGO?
How have the data systems at Lego evolved over recent years as new technologies and techniques have been developed?
How does the global nature of the LEGO business influence the design strategies and technology choices for your platform?
What are you most excited for in the coming year?
Contact Info
Jesper
LinkedIn
Keld
LinkedIn
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
LEGO Group
ERP (Enterprise Resource Planning)
Predictive Analytics
Prescriptive Analytics
Hadoop
Center Of Excellence
Continuous Integration
Spark
Podcast Episode
Apache NiFi
Podcast Episode
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Jan 14, 2019 • 41min
TimescaleDB: The Timeseries Database Built For SQL And Scale - Episode 65
TimescaleDB CEO and CTO talk about the 1.0 release, increasing demand for time series databases, distinctions between TimescaleDB and PipelineDB, challenges in reaching the 1.0 release, flexibility of TimeScaleDB, and future plans for scaling and automation.

Jan 7, 2019 • 51min
Performing Fast Data Analytics Using Apache Kudu - Episode 64
Summary
The Hadoop platform is purpose-built for processing large, slow-moving data in long-running batch jobs. As the ecosystem around it has grown, so has the need for fast analytics on fast-moving data. To fill this need the Kudu project was created with a column-oriented table format that was tuned for high volumes of writes and rapid query execution across those tables. For a perfect pairing, they made it easy to connect to the Impala SQL engine. In this episode Brock Noland and Jordan Birdsell from PhData explain how Kudu is architected, how it compares to other storage systems in the Hadoop orbit, and how to start integrating it into your analytics pipeline.
Preamble
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters including new ones in Toronto and Mumbai. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute.
Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
To help other people find the show please leave a review on iTunes, or Google Play Music, tell your friends and co-workers, and share it on social media.
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Your host is Tobias Macey and today I’m interviewing Brock Noland and Jordan Birdsell about Apache Kudu and how it is able to provide fast analytics on fast data in the Hadoop ecosystem
Interview
Introduction
How did you get involved in the area of data management?
Can you start by explaining what Kudu is and the motivation for building it?
How does it fit into the Hadoop ecosystem?
How does it compare to the work being done on the Iceberg table format?
What are some of the common application and system design patterns that Kudu supports?
How is Kudu architected and how has it evolved over the life of the project?
There are many projects in and around the Hadoop ecosystem that rely on Zookeeper as a building block for consensus. What was the reasoning for using Raft in Kudu?
How does the storage layer in Kudu differ from what would be found in systems like Hive or HBase?
What are the implementation details in the Kudu storage interface that have had the greatest impact on its overall speed and performance?
A number of the projects built for large scale data processing were not initially built with a focus on operational simplicity. What are the features of Kudu that simplify deployment and management of production infrastructure?
What was the motivation for using C++ as the language target for Kudu?
If you were to start the project over today what would you do differently?
What are some situations where you would advise against using Kudu?
What have you found to be the most interesting/unexpected/challenging lessons learned in the process of building and maintaining Kudu?
What are you most excited about for the future of Kudu?
Contact Info
Brock
LinkedIn
@brocknoland on Twitter
Jordan
LinkedIn
@jordanbirdsell
jbirdsell on GitHub
PhData
Website
phdata on GitHub
@phdatainc on Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
Kudu
PhData
Getting Started with Apache Kudu
Thomson Reuters
Hadoop
Oracle Exadata
Slowly Changing Dimensions
HDFS
S3
Azure Blob Storage
State Farm
Stanley Black & Decker
ETL (Extract, Transform, Load)
Parquet
Podcast Episode
ORC
HBase
Spark
Podcast Episode
Impala
Netflix Iceberg
Podcast Episode
Hive ACID
IOT (Internet Of Things)
Streamsets
NiFi
Podcast Episode
Kafka Connect
Moore’s Law
3D XPoint
Raft Consensus Algorithm
STONITH (Shoot The Other Node In The Head)
Yarn
Cython
Podcast.__init__ Episode
Pandas
Podcast.__init__ Episode
Cloudera Manager
Apache Sentry
Collibra
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast


