

Data Engineering Podcast
Tobias Macey
This show goes behind the scenes for the tools, techniques, and difficulties associated with the discipline of data engineering. Databases, workflows, automation, and data manipulation are just some of the topics that you will find here.
Episodes
Mentioned books

Dec 15, 2020 • 1h 5min
Building A Self Service Data Platform For Alternative Data Analytics At YipitData
Summary
As a data engineer you’re familiar with the process of collecting data from databases, customer data platforms, APIs, etc. At YipitData they rely on a variety of alternative data sources to inform investment decisions by hedge funds and businesses. In this episode Andrew Gross, Bobby Muldoon, and Anup Segu describe the self service data platform that they have built to allow data analysts to own the end-to-end delivery of data projects and how that has allowed them to scale their output. They share the journey that they went through to build a scalable and maintainable system for web scraping, how to make it reliable and resilient to errors, and the lessons that they learned in the process. This was a great conversation about real world experiences in building a successful data-oriented business.
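To make the reliability discussion concrete, here is a minimal Python sketch of the retry-and-archive pattern that resilient scraping pipelines commonly use. The requests and tenacity libraries, the function names, and the storage path are illustrative assumptions on our part; this is not YipitData's Readypipe implementation.

```python
# A minimal sketch of a fault-tolerant scrape step, assuming the requests
# and tenacity libraries; illustrative only, not YipitData's actual code.
import json

import requests
from tenacity import retry, stop_after_attempt, wait_exponential


@retry(stop=stop_after_attempt(5), wait=wait_exponential(multiplier=1, max=60))
def fetch_page(url: str) -> str:
    """Fetch a page, retrying with exponential backoff on transient failures."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.text


def scrape(url: str, raw_store_path: str) -> None:
    """Persist the raw payload before parsing so failed parses can be replayed."""
    html = fetch_page(url)
    with open(raw_store_path, "w") as f:
        json.dump({"url": url, "body": html}, f)
```

Archiving the raw response before any parsing happens means a broken parser can be fixed and re-run against the stored payloads without re-scraping the source site.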
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Are you bogged down by having to manually manage data access controls, repeatedly move and copy data, and create audit reports to prove compliance? How much time could you save if those tasks were automated across your cloud platforms? Immuta is an automated data governance solution that enables safe and easy data analytics in the cloud. Our comprehensive data-level security, auditing and de-identification features eliminate the need for time-consuming manual processes and our focus on data and compliance team collaboration empowers you to deliver quick and valuable data analytics on the most sensitive data to unlock the full potential of your cloud data platforms. Learn how we streamline and accelerate manual processes to help you derive real results from your data at dataengineeringpodcast.com/immuta.
Today’s episode of the Data Engineering Podcast is sponsored by Datadog, a SaaS-based monitoring and analytics platform for cloud-scale infrastructure, applications, logs, and more. Datadog uses machine-learning based algorithms to detect errors and anomalies across your entire stack—which reduces the time it takes to detect and address outages and helps promote collaboration between Data Engineering, Operations, and the rest of the company. Go to dataengineeringpodcast.com/datadog today to start your free 14 day trial. If you start a trial and install Datadog’s agent, Datadog will send you a free T-shirt.
Your host is Tobias Macey and today I’m interviewing Andrew Gross, Bobby Muldoon, and Anup Segu about how they are building pipelines at YipitData
Interview
Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of what YipitData does?
What kinds of data sources and data assets are you working with?
What is the composition of your data teams and how are they structured?
Given the use of your data products in the financial sector how do you handle monitoring and alerting around data quality?
For web scraping in particular, given how fragile it can be, what have you done to make it a reliable and repeatable part of the data pipeline?
Can you describe how your data platform is implemented?
How has the design of your platform and its goals evolved or changed?
What is your guiding principle for providing an approachable interface to analysts?
How much knowledge do your analysts require about the guarantees offered, and edge cases to be aware of in the underlying data and its processing?
What are some examples of specific tools that you have built to empower your analysts to own the full lifecycle of the data that they are working with?
Can you characterize or quantify the benefits that you have seen from training the analysts to work with the engineering tool chain?
What have been some of the most interesting, unexpected, or surprising outcomes of how you are approaching the different responsibilities and levels of ownership in your data organization?
What are some of the most interesting, unexpected, or challenging lessons that you have learned from building out the platform, tooling, and organizational structure for creating data products at Yipit?
What advice or recommendations do you have for other leaders of data teams about how to think about the organizational and technical aspects of managing the lifecycle of data projects?
Contact Info
Andrew
LinkedIn
@awgross on Twitter
Bobby
LinkedIn
@TheDooner64
Anup
LinkedIn
anup-segu on GitHub
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
Yipit Data
Redshift
MySQL
Airflow
Databricks
Groupon
Living Social
Web Scraping
Podcast.__init__ Episode
Readypipe
Graphite
Podcast.__init__ Episode
AWS Kinesis Firehose
Parquet
Papermill
Podcast Episode About Notebooks At Netflix
Fivetran
Podcast Episode
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Dec 7, 2020 • 1h 13min
Proven Patterns For Building Successful Data Teams
Summary
Building data products is complicated by the fact that there are so many different stakeholders with competing goals and priorities. It is also challenging because of the number of roles and capabilities that are necessary to go from idea to delivery. Different organizations have tried a multitude of organizational strategies to improve the success rate of these data teams with varying levels of success. In this episode Jesse Anderson shares the lessons that he has learned while working with dozens of businesses across industries to determine the team structures and communication styles that have generated the best results. If you are struggling to deliver value from big data, or just starting down the path of building the organizational capacity to turn raw information into valuable products, then this is a conversation that you don’t want to miss.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Are you bogged down by having to manually manage data access controls, repeatedly move and copy data, and create audit reports to prove compliance? How much time could you save if those tasks were automated across your cloud platforms? Immuta is an automated data governance solution that enables safe and easy data analytics in the cloud. Our comprehensive data-level security, auditing and de-identification features eliminate the need for time-consuming manual processes and our focus on data and compliance team collaboration empowers you to deliver quick and valuable data analytics on the most sensitive data to unlock the full potential of your cloud data platforms. Learn how we streamline and accelerate manual processes to help you derive real results from your data at dataengineeringpodcast.com/immuta.
Today’s episode of the Data Engineering Podcast is sponsored by Datadog, a SaaS-based monitoring and analytics platform for cloud-scale infrastructure, applications, logs, and more. Datadog uses machine-learning based algorithms to detect errors and anomalies across your entire stack—which reduces the time it takes to detect and address outages and helps promote collaboration between Data Engineering, Operations, and the rest of the company. Go to dataengineeringpodcast.com/datadog today to start your free 14 day trial. If you start a trial and install Datadog’s agent, Datadog will send you a free T-shirt.
Your host is Tobias Macey and today I’m interviewing Jesse Anderson about best practices for organizing and managing data teams
Interview
Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of how you view the mission and responsibilities of a data team?
What are the critical elements of a successful data team?
Beyond the core pillars of data science, data engineering, and operations, what other specialized roles do you find helpful for larger or more sophisticated teams?
For organizations that have "small data", how does that change the necessary composition of roles for successful data projects?
What are the signs and symptoms that point to the need for a dedicated team that focuses on data?
With data scientists and data engineers in particular being in such high demand, what are strategies that you have found effective for attracting new talent?
In the case where you have engineers on staff, how do you identify internal talent that can be trained into these specialized roles?
Another challenge that organizations face in dealing with data is how the team is organized. What are your thoughts on effective strategies for how to structure the communication and reporting structures of data teams? (e.g. centralized, embedded, etc.)
How do you recommend evaluating potential candidates for each of the necessary roles?
What are your thoughts on when to hire an outside consultant, vs building internal capacity?
For managers who are responsible for data teams, how much understanding of data and analytics do they need to be effective?
How do you define success or measure performance of a team focused on working with data?
What are some of the anti-patterns that you have seen in managers who oversee data professionals?
What are some of the most interesting, unexpected, or challenging lessons that you have learned in the process of helping organizations and individuals achieve success in data and analytics?
What advice or additional resources do you have for anyone who is interested in learning more about how to build and grow a successful data team?
Contact Info
Website
@jessetanderson on Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
Data Teams Book
DBA == Database Administrator
ML Engineer
DataOps
Three Vs
The Ultimate Guide To Switching Careers To Big Data
S-1 Report
Jesse Anderson’s Youtube Channel
Video about interviewing for data teams
Uber Data Infrastructure Progression Blog Post
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Nov 30, 2020 • 45min
Streaming Data Integration Without The Code at Equalum
Summary
The first stage of every good pipeline is to perform data integration. With the increasing pace of change and the need for up-to-date analytics, the need to integrate that data in near real time is growing. With the improvements and increased variety of options for streaming data engines and improved tools for change data capture, it is possible for data teams to make that goal a reality. However, despite all of the tools and managed distributions of those streaming engines, it is still a challenge to build a robust and reliable pipeline for streaming data integration, especially if you need to expose those capabilities to non-engineers. In this episode Ido Friedman, CTO of Equalum, explains how they have built a no-code platform to make integration of streaming data and change data capture feeds easier to manage. He discusses the challenges that are inherent in the current state of CDC technologies, how they have architected their system to integrate well with existing data platforms, and how to build an appropriate level of abstraction for such a complex problem domain. If you are struggling with streaming data integration and change data capture then this interview is definitely worth a listen.
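For context on what a no-code platform abstracts away, the sketch below shows roughly what consuming a change data capture feed looks like when built by hand. The topic name, event shape, and use of the kafka-python client are assumptions for illustration; they are not Equalum's API.

```python
# Hand-rolled CDC consumer sketch; topic name and event shape are assumed.
import json

from kafka import KafkaConsumer  # kafka-python, an illustrative choice

consumer = KafkaConsumer(
    "orders.cdc",                        # hypothetical CDC topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)


def upsert_row(row: dict) -> None:
    print("UPSERT", row)                 # placeholder for a MERGE into the target


def delete_row(key: dict) -> None:
    print("DELETE", key)                 # placeholder for a DELETE on the target


def apply_change(event: dict) -> None:
    """Route each change event to the matching operation on the target table."""
    if event["op"] in ("insert", "update"):
        upsert_row(event["row"])
    elif event["op"] == "delete":
        delete_row(event["key"])


for message in consumer:
    apply_change(message.value)
```

Every additional source multiplies the edge cases (schema changes, out-of-order events, redeliveries), which is the kind of complexity a managed integration platform tries to hide.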
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
Are you bogged down by having to manually manage data access controls, repeatedly move and copy data, and create audit reports to prove compliance? How much time could you save if those tasks were automated across your cloud platforms? Immuta is an automated data governance solution that enables safe and easy data analytics in the cloud. Our comprehensive data-level security, auditing and de-identification features eliminate the need for time-consuming manual processes and our focus on data and compliance team collaboration empowers you to deliver quick and valuable data analytics on the most sensitive data to unlock the full potential of your cloud data platforms. Learn how we streamline and accelerate manual processes to help you derive real results from your data at dataengineeringpodcast.com/immuta.
Your host is Tobias Macey and today I’m interviewing Ido Friedman about Equalum, a no-code platform for streaming data integration
Interview
Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of what you are building at Equalum and how it got started?
There are a number of projects and platforms on the market that target data integration. Can you give some context of how Equalum fits in that market and the differentiating factors that engineers should consider?
What components of the data ecosystem might Equalum replace, and which is it designed to integrate with?
Can you walk through the workflow for someone who is using Equalum for a simple data integration use case?
What options are available for doing in-flight transformations of data or creating customized routing rules?
How do you handle versioning and staged rollouts of changes to pipelines?
How is the Equalum platform implemented?
How has the design and architecture of Equalum evolved since it was first created?
What have you found to be the most complex or challenging aspects of building the platform?
Change data capture is a growing area of interest, with a significant level of difficulty in implementing well. How do you handle support for the variety of different sources that customers are working with?
What are the edge cases that you typically run into when working with changes in databases?
How do you approach the user experience of the platform given its focus as a low code/no code system?
What options exist for sophisticated users to create custom operations?
How much of the underlying concerns do you surface to end users, and how much are you able to hide?
What is the process for a customer to integrate Equalum into their existing infrastructure and data systems?
What are some of the most interesting, unexpected, or innovative ways that you have seen Equalum used?
What are the most interesting, unexpected, or challenging lessons that you have learned while building and growing the Equalum platform?
When is Equalum the wrong choice?
What do you have planned for the future of Equalum?
Contact Info
LinkedIn
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
Equalum
Change Data Capture
Debezium Podcast Episode
SQL Server
DBA == Database Administrator
Fivetran
Podcast Episode
Singer
Pentaho
EMR
Snowflake
Podcast Episode
S3
Kafka
Spark
Prometheus
Grafana
Logminer
OBLP == Oracle Binary Log Parser
Ansible
Terraform
Jupyter Notebooks
Papermill
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Nov 23, 2020 • 49min
Keeping A Bigeye On The Data Quality Market
Summary
One of the oldest aphorisms about data is "garbage in, garbage out", which is why the current boom in data quality solutions is no surprise. With the growth in projects, platforms, and services that aim to help you establish and maintain control of the health and reliability of your data pipelines it can be overwhelming to stay up to date with how they all compare. In this episode Egor Gryaznov, CTO of Bigeye, joins the show to explore the landscape of data quality companies, the general strategies that they are using, and what problems they solve. He also shares how his own product is designed and the challenges that are involved in building a system to help data engineers manage the complexity of a data platform. If you are wondering how to get better control of your own pipelines and the traps to avoid then this episode is definitely worth a listen.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
Are you bogged down by having to manually manage data access controls, repeatedly move and copy data, and create audit reports to prove compliance? How much time could you save if those tasks were automated across your cloud platforms? Immuta is an automated data governance solution that enables safe and easy data analytics in the cloud. Our comprehensive data-level security, auditing and de-identification features eliminate the need for time-consuming manual processes and our focus on data and compliance team collaboration empowers you to deliver quick and valuable data analytics on the most sensitive data to unlock the full potential of your cloud data platforms. Learn how we streamline and accelerate manual processes to help you derive real results from your data at dataengineeringpodcast.com/immuta.
Your host is Tobias Macey and today I’m interviewing Egor Gryaznov about the state of the industry for data quality management and what he is building at Bigeye.
Interview
Introduction
How did you get involved in the area of data management?
Can you start by sharing your views on what attributes you consider when defining data quality?
You use the term "data semantics" – can you elaborate on what that means?
What are the driving factors that contribute to the presence or lack of data quality in an organization or data platform?
Why do you think now is the right time to focus on data quality as an industry?
What are you building at Bigeye and how did it get started?
How does Bigeye help teams understand and manage their data quality?
What is the difference between existing data quality approaches and data observability?
What do you see as the tradeoffs for the approach that you are taking at Bigeye?
What are the most common data quality issues that you’ve seen and what are some more interesting ones that you wouldn’t expect?
Where do you see Bigeye fitting into the data management landscape? What are alternatives to Bigeye?
What are some of the most interesting, innovative, or unexpected ways that you have seen Bigeye being used?
What are some of the most interesting homegrown approaches that you have seen?
What have you found to be the most interesting, unexpected, or challenging lessons that you have learned while building the Bigeye platform and business?
What are the biggest trends you’re following in data quality management?
When is Bigeye the wrong choice?
What do you see in store for the future of Bigeye?
Contact Info
You can email Egor about anything data-related
LinkedIn
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
Bigeye
Uber
A/B Testing
Hadoop
MapReduce
Apache Impala
One King’s Lane
Vertica
Mode
Tableau
Jupyter Notebooks
Redshift
Snowflake
PyTorch
Podcast.__init__ Episode
Tensorflow
DataOps
DevOps
Data Catalog
DBT
Podcast Episode
SRE Handbook
Article About How Uber Applied SRE Principles to Data
SLA == Service Level Agreement
SLO == Service Level Objective
Dagster
Podcast Episode
Podcast.__init__ Episode
Delta Lake
Great Expectations
Podcast Episode
Podcast.__init__ Episode
Amundsen
Podcast Episode
Alation
Collibra
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Nov 17, 2020 • 44min
Self Service Data Management From Ingest To Insights With Isima
Summary
The core mission of data engineers is to provide the business with a way to ask and answer questions of their data. This often takes the form of business intelligence dashboards, machine learning models, or APIs on top of a cleaned and curated data set. Despite the rapid progression of impressive tools and products built to fulfill this mission, it is still an uphill battle to tie everything together into a cohesive and reliable platform. At Isima they decided to reimagine the entire ecosystem from the ground up and built a single unified platform to allow end-to-end self service workflows from data ingestion through to analysis. In this episode Darshan Rawal, CEO and co-founder of Isima, explains how the biOS platform is architected to enable ease of use, the challenges that were involved in building an entirely new system from scratch, and how it can integrate with the rest of your data platform to allow for incremental adoption. This was an interesting and contrarian take on the current state of the data management industry and is worth a listen to gain some additional perspective.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Follow go.datafold.com/dataengineeringpodcast to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
Are you bogged down by having to manually manage data access controls, repeatedly move and copy data, and create audit reports to prove compliance? How much time could you save if those tasks were automated across your cloud platforms? Immuta is an automated data governance solution that enables safe and easy data analytics in the cloud. Our comprehensive data-level security, auditing and de-identification features eliminate the need for time-consuming manual processes and our focus on data and compliance team collaboration empowers you to deliver quick and valuable data analytics on the most sensitive data to unlock the full potential of your cloud data platforms. Learn how we streamline and accelerate manual processes to help you derive real results from your data at dataengineeringpodcast.com/immuta.
Your host is Tobias Macey and today I’m interviewing Darshan Rawal about Îsíma, a unified platform for building data applications
Interview
Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of what you are building at Îsíma?
What was your motivation for creating a new platform for data applications?
What is the story behind the name?
What are the tradeoffs of a fully integrated platform vs a modular approach?
What components of the data ecosystem does Isima replace, and which does it integrate with?
What are the use cases that Isima enables which were previously impractical?
Can you describe how Isima is architected?
How has the design of the platform changed or evolved since you first began working on it?
What were your initial ideas or assumptions that have been changed or invalidated as you worked through the problem you’re addressing?
With a focus on the enterprise, how did you approach the user experience design to allow for organizational complexity?
One of the biggest areas of difficulty that many data systems face is security and scalable access control. How do you tackle that problem in your platform?
How did you address the issue of geographical distribution of data and users?
Can you talk through the overall lifecycle of data as it traverses the bi(OS) platform from ingestion through to presentation?
What is the workflow for someone using bi(OS)?
What are some of the most interesting, innovative, or unexpected ways that you have seen bi(OS) used?
What have you found to be the most interesting, unexpected, or challenging aspects of building the bi(OS) platform?
When is it the wrong choice?
What do you have planned for the future of Isima and bi(OS)?
Contact Info
LinkedIn
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
Îsíma
Datastax
Verizon
AT&T
Click Fraud
ESB == Enterprise Service Bus
ETL == Extract, Transform, Load
EDW == Enterprise Data Warehouse
BI == Business Intelligence
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Nov 10, 2020 • 52min
Building A Cost Effective Data Catalog With Tree Schema
Summary
A data catalog is a critical piece of infrastructure for any organization that wants to build analytics products, whether internal or external. While there are a number of platforms available for building that catalog, many of them are either difficult to deploy and integrate, or expensive to use at scale. In this episode Grant Seward explains how he built Tree Schema to be an easy to use and cost effective option for organizations to build their data catalogs. He also shares the internal architecture, how he approached the design to make it accessible and easy to use, and how it autodiscovers the schemas and metadata for your source systems.
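As a concrete illustration of schema autodiscovery, the sketch below uses SQLAlchemy's inspector to crawl a relational source and collect column metadata. The connection string is a placeholder, and this is not Tree Schema's crawler code.

```python
# Sketch of crawling a database's schemas with SQLAlchemy; the connection
# string is a placeholder, and this is not Tree Schema's implementation.
from sqlalchemy import create_engine, inspect

engine = create_engine("postgresql://user:password@localhost:5432/analytics")
inspector = inspect(engine)

catalog = {}
for schema in inspector.get_schema_names():
    for table in inspector.get_table_names(schema=schema):
        columns = inspector.get_columns(table, schema=schema)
        catalog[f"{schema}.{table}"] = [
            {"name": c["name"], "type": str(c["type"]), "nullable": c["nullable"]}
            for c in columns
        ]

# `catalog` now maps fully qualified table names to column metadata, ready to
# be pushed into a catalog or diffed against a previous crawl to detect changes.
```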
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Follow go.datafold.com/dataengineeringpodcast to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
Are you bogged down by having to manually manage data access controls, repeatedly move and copy data, and create audit reports to prove compliance? How much time could you save if those tasks were automated across your cloud platforms? Immuta is an automated data governance solution that enables safe and easy data analytics in the cloud. Our comprehensive data-level security, auditing and de-identification features eliminate the need for time-consuming manual processes and our focus on data and compliance team collaboration empowers you to deliver quick and valuable data analytics on the most sensitive data to unlock the full potential of your cloud data platforms. Learn how we streamline and accelerate manual processes to help you derive real results from your data at dataengineeringpodcast.com/immuta.
Your host is Tobias Macey and today I’m interviewing Grant Seward about Tree Schema, a human friendly data catalog
Interview
Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of what you have built at Tree Schema?
What was your motivation for creating it?
At what stage of maturity should a team or organization consider a data catalog to be a necessary component in their data platform?
There are a large and growing number of projects and products designed to provide a data catalog, with each of them addressing the problem in a slightly different way. What are the necessary elements for a data catalog?
How does Tree Schema compare to the available options? (e.g. Amundsen, Company Wiki, Metacat, Metamapper, etc.)
How is the Tree Schema system implemented?
How has the design or direction of Tree Schema evolved since you first began working on it?
How did you approach the schema definitions for defining entities?
What was your guiding heuristic for determining how to design the interface and data models?
How do you handle integrating with data sources?
In addition to storing schema information you allow users to store information about the transformations being performed. How is that represented?
How can users populate information about their transformations in an automated fashion?
How do you approach evolution and versioning of schema information?
What are the scaling limitations of Tree Schema, whether in terms of the technical or cognitive complexity that it can handle?
What are some of the most interesting, innovative, or unexpected ways that you have seen Tree Schema being used?
What have you found to be the most interesting, unexpected, or challenging lessons learned in the process of building and promoting Tree Schema?
When is Tree Schema the wrong choice?
What do you have planned for the future of the product?
Contact Info
Email
Linkedin
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
Tree Schema
Tree Schema – Data Lineage as Code
Capital One
Walmart Labs
Data Catalog
Data Discovery
Amundsen
Metacat
Marquez
Metamapper
Infoworks
Collibra
Faust
Podcast.__init__ Episode
Django
PostgreSQL
Redis
Celery
Amazon ECS (Elastic Container Service)
Django Storages
Dagster
Airflow
DataHub
Avro
Singer
Apache Atlas
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Nov 3, 2020 • 50min
Add Version Control To Your Data Lake With LakeFS
Summary
Data lakes are gaining popularity due to their flexibility and reduced cost of storage. Along with the benefits there are some additional complexities to consider, including how to safely integrate new data sources or test out changes to existing pipelines. In order to address these challenges the team at Treeverse created LakeFS to introduce version control capabilities to your storage layer. In this episode Einat Orr and Oz Katz explain how they implemented branching and merging capabilities for object storage, best practices for how to use versioning primitives to introduce changes to your data lake, how LakeFS is architected, and how you can start using it for your own data platform.
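To give a feel for the branching model, lakeFS exposes an S3-compatible gateway in which the repository acts as the bucket and the object key is prefixed with a branch name, so existing S3 clients keep working. The endpoint, credentials, repository, and branch names below are placeholders, and merging the branch back (done through the lakeFS API or CLI) is not shown; treat this as a sketch rather than a setup guide.

```python
# Writing to an isolated branch through the lakeFS S3 gateway; endpoint,
# credentials, and names are placeholders for illustration.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://lakefs.example.com",   # assumed gateway address
    aws_access_key_id="AKIA...",                  # lakeFS access key (placeholder)
    aws_secret_access_key="...",
)

# Land new data on an experiment branch instead of main so it can be
# validated in isolation before being merged.
with open("data.parquet", "rb") as f:
    s3.put_object(
        Bucket="my-repo",
        Key="experiment-branch/events/2020-11-03/data.parquet",
        Body=f,
    )

# Readers of the main branch see nothing until the branch is merged.
obj = s3.get_object(Bucket="my-repo", Key="main/events/2020-11-03/data.parquet")
```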
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Are you bogged down by having to manually manage data access controls, repeatedly move and copy data, and create audit reports to prove compliance? How much time could you save if those tasks were automated across your cloud platforms? Immuta is an automated data governance solution that enables safe and easy data analytics in the cloud. Our comprehensive data-level security, auditing and de-identification features eliminate the need for time-consuming manual processes and our focus on data and compliance team collaboration empowers you to deliver quick and valuable data analytics on the most sensitive data to unlock the full potential of your cloud data platforms. Learn how we streamline and accelerate manual processes to help you derive real results from your data at dataengineeringpodcast.com/immuta.
Today’s episode of the Data Engineering Podcast is sponsored by Datadog, a SaaS-based monitoring and analytics platform for cloud-scale infrastructure, applications, logs, and more. Datadog uses machine-learning based algorithms to detect errors and anomalies across your entire stack—which reduces the time it takes to detect and address outages and helps promote collaboration between Data Engineering, Operations, and the rest of the company. Go to dataengineeringpodcast.com/datadog today to start your free 14 day trial. If you start a trial and install Datadog’s agent, Datadog will send you a free T-shirt.
Your host is Tobias Macey and today I’m interviewing Einat Orr and Oz Katz about their work at Treeverse on the LakeFS system for versioning your data lakes the same way you version your code.
Interview
Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of what LakeFS is and why you built it?
There are a number of tools and platforms that support data virtualization and data versioning. How does LakeFS compare to the available options? (e.g. Alluxio, Denodo, Pachyderm, DVC, etc.)
What are the primary use cases that LakeFS enables?
For someone who wants to use LakeFS what is involved in getting it set up?
How is LakeFS implemented?
How has the design of the system changed or evolved since you began working on it?
What assumptions did you have going into it which have since been invalidated or modified?
How does the workflow for an engineer or analyst change from working directly against S3 to running against the LakeFS interface?
How do you handle merge conflicts and resolution?
What are some of the potential edge cases or foot guns that they should be aware of when there are multiple people using the same repository?
How do you approach management of the data lifecycle or garbage collection to avoid ballooning the cost of storage for a dataset that is tracking a high number of branches with diverging commits?
Given that S3 and GCS are eventually consistent storage layers, how do you handle snapshots/transactionality of the data you are working with?
What are the axes for scaling an installation of LakeFS?
What are the limitations in terms of size or geographic distribution of the datasets?
What are some of the most interesting, unexpected, or innovative ways that you have seen LakeFS being used?
What are the most interesting, unexpected, or challenging lessons that you have learned while building LakeFS?
When is LakeFS the wrong choice?
What do you have planned for the future of the project?
Contact Info
Einat Orr
Oz Katz
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat
Links
Treeverse
LakeFS
GitHub
Documentation
lakeFS Slack Channel
SimilarWeb
Kaggle
DagsHub
Alluxio
Pachyderm
DVC
ML Ops (Machine Learning Operations)
DoltHub
Delta Lake
Podcast Episode
Hudi
Iceberg Table Format
Podcast Episode
Kubernetes
PostgreSQL
Podcast Episode
Git
Spark
Presto
CockroachDB
YugabyteDB
Citus
Hive Metastore
Iceberg Table Format
Immunai
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Oct 26, 2020 • 49min
Cloud Native Data Security As Code With Cyral
Summary
One of the most challenging aspects of building a data platform has nothing to do with pipelines and transformations. If you are putting your workflows into production, then you need to consider how you are going to implement data security, including access controls and auditing. Different databases and storage systems all have their own method of restricting access, and they are not all compatible with each other. In order to simplify the process of securing your data in the cloud, Manav Mital created Cyral to provide a way of enforcing security as code. In this episode he explains how the system is architected, how it can help you enforce compliance, and what is involved in getting it integrated with your existing systems. This was a good conversation about an aspect of data management that is too often left as an afterthought.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Are you bogged down by having to manually manage data access controls, repeatedly move and copy data, and create audit reports to prove compliance? How much time could you save if those tasks were automated across your cloud platforms? Immuta is an automated data governance solution that enables safe and easy data analytics in the cloud. Our comprehensive data-level security, auditing and de-identification features eliminate the need for time-consuming manual processes and our focus on data and compliance team collaboration empowers you to deliver quick and valuable data analytics on the most sensitive data to unlock the full potential of your cloud data platforms. Learn how we streamline and accelerate manual processes to help you derive real results from your data at dataengineeringpodcast.com/immuta.
Today’s episode of the Data Engineering Podcast is sponsored by Datadog, a SaaS-based monitoring and analytics platform for cloud-scale infrastructure, applications, logs, and more. Datadog uses machine-learning based algorithms to detect errors and anomalies across your entire stack—which reduces the time it takes to detect and address outages and helps promote collaboration between Data Engineering, Operations, and the rest of the company. Go to dataengineeringpodcast.com/datadog today to start your free 14 day trial. If you start a trial and install Datadog’s agent, Datadog will send you a free T-shirt.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data platforms. For more opportunities to stay up to date, gain new skills, and learn from your peers there are a growing number of virtual events that you can attend from the comfort and safety of your home. Go to dataengineeringpodcast.com/conferences to check out the upcoming events being offered by our partners and get registered today!
Your host is Tobias Macey and today I’m interviewing Manav Mital about the challenges involved in securing your data and the work that he is doing at Cyral to help address those problems.
Interview
Introduction
How did you get involved in the area of data management?
What is Cyral and what motivated you to build a business focused on addressing data security in the cloud?
Can you start by giving an overview of some of the common security issues that occur when working with data?
What new security challenges are introduced by building data platforms in public cloud environments?
What are the organizational roles that are typically responsible for managing security and access control to data sources and repositories?
What are the tensions, technical or organizational, that lead to a problematic or incomplete security posture?
What are the differences in security requirements and implementation complexity between software applications and data systems?
What are the data systems that Cyral integrates with?
How did you determine what platforms to prioritize?
How does Cyral integrate into the toolchains used to deploy, maintain, and upgrade an organization’s data infrastructure?
How does the Cyral platform address security and access control of data across an organization’s infrastructure?
How are schema changes handled when using Cyral to enforce access control to PII or other attributes?
How does Cyral help with reducing sprawl of data across unmonitored systems?
What are some of the most interesting, unexpected, or challenging lessons that you learned while building Cyral?
When is Cyral the wrong choice?
What do you have planned for the future of the Cyral platform?
Contact Info
LinkedIn
@manavrm on Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
Cyral
Snowflake
Podcast Episode
BigQuery
Object Storage
MongoDB
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Oct 19, 2020 • 56min
Better Data Quality Through Observability With Monte Carlo
Summary
In order for analytics and machine learning projects to be useful, they require a high degree of data quality. To ensure that your pipelines are healthy you need a way to make them observable. In this episode Barr Moses and Lior Gavish, co-founders of Monte Carlo, share the leading causes of what they refer to as data downtime and how it manifests. They also discuss methods for gaining visibility into the flow of data through your infrastructure, how to diagnose and prevent potential problems, and what they are building at Monte Carlo to help you maintain your data’s uptime.
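One way to ground the idea of data downtime is to look at the checks teams otherwise write by hand; the sketch below is a minimal freshness check. The table name, threshold, timestamptz column, and psycopg2 usage are assumptions for illustration, not Monte Carlo's implementation.

```python
# Hand-rolled freshness check; table, column, and threshold are assumed.
from datetime import datetime, timedelta, timezone

import psycopg2

FRESHNESS_THRESHOLD = timedelta(hours=6)

conn = psycopg2.connect("dbname=analytics user=monitor")
with conn.cursor() as cur:
    # loaded_at is assumed to be a timestamptz column on the target table.
    cur.execute("SELECT MAX(loaded_at) FROM warehouse.orders")
    last_loaded = cur.fetchone()[0]

if last_loaded is None or datetime.now(timezone.utc) - last_loaded > FRESHNESS_THRESHOLD:
    # In practice this would page an on-call engineer or open an incident.
    print(f"Data downtime suspected: orders last loaded at {last_loaded}")
```

Observability platforms extend this kind of check across freshness, volume, schema, and distribution metrics without someone having to write and maintain each query.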
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Are you bogged down by having to manually manage data access controls, repeatedly move and copy data, and create audit reports to prove compliance? How much time could you save if those tasks were automated across your cloud platforms? Immuta is an automated data governance solution that enables safe and easy data analytics in the cloud. Our comprehensive data-level security, auditing and de-identification features eliminate the need for time-consuming manual processes and our focus on data and compliance team collaboration empowers you to deliver quick and valuable data analytics on the most sensitive data to unlock the full potential of your cloud data platforms. Learn how we streamline and accelerate manual processes to help you derive real results from your data at dataengineeringpodcast.com/immuta.
Today’s episode of the Data Engineering Podcast is sponsored by Datadog, a SaaS-based monitoring and analytics platform for cloud-scale infrastructure, applications, logs, and more. Datadog uses machine-learning based algorithms to detect errors and anomalies across your entire stack—which reduces the time it takes to detect and address outages and helps promote collaboration between Data Engineering, Operations, and the rest of the company. Go to dataengineeringpodcast.com/datadog today to start your free 14 day trial. If you start a trial and install Datadog’s agent, Datadog will send you a free T-shirt.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data platforms. For more opportunities to stay up to date, gain new skills, and learn from your peers there are a growing number of virtual events that you can attend from the comfort and safety of your home. Go to dataengineeringpodcast.com/conferences to check out the upcoming events being offered by our partners and get registered today!
Your host is Tobias Macey and today I’m interviewing Barr Moses and Lior Gavish about observability for your data pipelines and how they are addressing it at Monte Carlo.
Interview
Introduction
How did you get involved in the area of data management?
How did you come up with the idea to found Monte Carlo?
What is "data downtime"?
Can you start by giving your definition of observability in the context of data workflows?
What are some of the contributing factors that lead to poor data quality at the different stages of the lifecycle?
Monitoring and observability of infrastructure and software applications are well-understood problems. In what ways does observability of data applications differ from "traditional" software systems?
What are some of the metrics or signals that we should be looking at to identify problems in our data applications?
Why is this the year that so many companies are working to address the issue of data quality and observability?
How are you addressing the challenge of bringing observability to data platforms at Monte Carlo?
What are the areas of integration that you are targeting and how did you identify where to prioritize your efforts?
For someone who is using Monte Carlo, how does the platform help them to identify and resolve issues in their data?
What stage of the data lifecycle have you found to be the biggest contributor to downtime and quality issues?
What are the most challenging systems, platforms, or tool chains to gain visibility into?
What are some of the most interesting, innovative, or unexpected ways that you have seen teams address their observability needs?
What are the most interesting, unexpected, or challenging lessons that you have learned while building the business and technology of Monte Carlo?
What are the alternatives to Monte Carlo?
What do you have planned for the future of the platform?
Contact Info
Visit www.montecarlodata.com to learn more about our data reliability platform,
or reach out directly to barr@montecarlodata.com — happy to chat about all things data!
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
Monte Carlo
Monte Carlo Platform
Observability
Gainsight
Barracuda Networks
DevOps
New Relic
Datadog
Netflix RAD Outlier Detection
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast

Oct 12, 2020 • 1h 3min
Rapid Delivery Of Business Intelligence Using Power BI
Summary
Business intelligence efforts are only as useful as the outcomes that they inform. Power BI aims to reduce the time and effort required to go from information to action by providing an interface that encourages rapid iteration. In this episode Rob Collie shares his enthusiasm for the Power BI platform and how it stands out from other options. He explains how he helped to build the platform during his time at Microsoft, and how he continues to support users through his work at Power Pivot Pro. Rob shares some useful insights gained through his consulting work, and why he considers Power BI to be the best option on the market today for business analytics.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
What are the pieces of advice that you wish you had received early in your career of data engineering? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Are you bogged down by having to manually manage data access controls, repeatedly move and copy data, and create audit reports to prove compliance? How much time could you save if those tasks were automated across your cloud platforms? Immuta is an automated data governance solution that enables safe and easy data analytics in the cloud. Our comprehensive data-level security, auditing and de-identification features eliminate the need for time-consuming manual processes and our focus on data and compliance team collaboration empowers you to deliver quick and valuable data analytics on the most sensitive data to unlock the full potential of your cloud data platforms. Learn how we streamline and accelerate manual processes to help you derive real results from your data at dataengineeringpodcast.com/immuta.
Equalum’s end to end data ingestion platform is relied upon by enterprises across industries to seamlessly stream data to operational, real-time analytics and machine learning environments. Equalum combines streaming Change Data Capture, replication, complex transformations, batch processing and full data management using a no-code UI. Equalum also leverages open source data frameworks by orchestrating Apache Spark, Kafka and others under the hood. Tool consolidation and linear scalability without the legacy platform price tag. Go to dataengineeringpodcast.com/equalum today to start a free 2 week test run of their platform, and don’t forget to tell them that we sent you.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data platforms. For more opportunities to stay up to date, gain new skills, and learn from your peers there are a growing number of virtual events that you can attend from the comfort and safety of your home. Go to dataengineeringpodcast.com/conferences to check out the upcoming events being offered by our partners and get registered today!
Your host is Tobias Macey and today I’m interviewing Rob Collie about Microsoft’s Power BI platform and his work at Power Pivot Pro to help users employ it effectively.
Interview
Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of what Power BI is?
The business intelligence market is fairly crowded. What are the features of Power BI that make it stand out?
Who are the target users of Power BI?
How does the design of the platform reflect those priorities?
Can you talk through the workflow for someone to build a report or dashboard in Power BI?
What is the broader ecosystem of data tools and platforms that Power BI sits within?
What are the available integration and extension points for Power BI?
In addition to your work at Microsoft building Power BI, you now run a consulting company dedicated to helping people adopt that platform. What are some of the common challenges that users face in employing Power BI effectively?
In your experience working with clients, what are some of the core principles of data processing and visualization that apply across industries?
What are some of the modeling or presentation methods that are specific to a given industry?
One of the perennial challenges of business intelligence is to make reports discoverable. What facilities does Power BI have to aid in surfacing useful information to end users?
What capabilities does Power BI have for exposing elements of data quality?
What are some of the most challenging aspects of building and maintaining a business intelligence effort in an organization?
What are some of the most interesting, unexpected, or innovative uses of Power BI that you have seen, or projects that you have worked on?
What are some of the most interesting, unexpected, or challenging lessons that you have learned in your work building Power BI and building a business to support its users?
When is Power BI the wrong choice?
What trends in business intelligence are you most excited by?
Contact Info
LinkedIn
@robocolli3 on Twitter
Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
P3
Power BI
Microsoft Excel
Fantasy Football
Excel Functions
Lisp
Business Intelligence
VLOOKUP
Looker
Podcast Episode
SQL Server Reporting Services
SQL Server Analysis Services
Tableau
Master Data Management
ERP == Enterprise Resource Planning
M Language
Power Query
DAX
The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Support Data Engineering Podcast


