Data Engineering Podcast

Tobias Macey
Jun 9, 2021 • 42min

Make Sure Your Records Are Reliable With The BookKeeper Distributed Storage Layer

Summary
The way to build maintainable software and systems is through composition of individual pieces. By making those pieces high quality and flexible they can be used in surprising ways that the original creators couldn’t have imagined. One such component that has gone above and beyond its originally envisioned use case is BookKeeper, a distributed storage system that is optimized for durability and speed. In this episode Matteo Merli shares the story behind the creation of BookKeeper, the various ways that it is being used today, and the architectural aspects that make it such a strong building block for projects such as Pulsar. He also shares some of the other interesting systems that have been built on top of it and an amusing war story of running it at scale in its early years. A conceptual sketch of the ledger model at the heart of BookKeeper follows the links below.

Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
We’ve all been asked to help with an ad-hoc request for data by the sales and marketing team. Then it becomes a critical report that they need updated every week or every day. Then what do you do? Send a CSV via email? Write some Python scripts to automate it? But what about incremental sync, API quotas, error handling, and all of the other details that eat up your time? Today, there is a better way. With Census, just write SQL or plug in your dbt models and start syncing your cloud warehouse to SaaS applications like Salesforce, Marketo, Hubspot, and many more. Go to dataengineeringpodcast.com/census today to get a free 14-day trial.
Your host is Tobias Macey and today I’m interviewing Matteo Merli about Apache BookKeeper, a scalable, fault-tolerant, and low-latency storage service optimized for real-time workloads.

Interview
Introduction
How did you get involved in the area of data management?
Can you describe what BookKeeper is and the story behind it?
What are the most notable features/capabilities of BookKeeper?
What are some of the ways that BookKeeper is being used?
How has your work on Pulsar influenced the features and product direction of BookKeeper?
Can you describe the architecture of a BookKeeper cluster?
How have the design and goals of BookKeeper changed or evolved over time?
What is the impact of record-oriented storage on data distribution/allocation within the cluster when working with variable record sizes?
What are some of the operational considerations that users should be aware of?
What are some of the most interesting/compelling features from your perspective?
What are some of the most often overlooked or misunderstood capabilities of BookKeeper?
What are the most interesting, innovative, or unexpected ways that you have seen BookKeeper used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on BookKeeper?
When is BookKeeper the wrong choice?
What do you have planned for the future of BookKeeper?

Contact Info
LinkedIn
@merlimat on Twitter
merlimat on GitHub

Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat.

Links
Apache BookKeeper
Apache Pulsar (Podcast Episode)
StreamNative (Podcast Episode)
Hadoop NameNode
Apache Zookeeper (Podcast Episode)
ActiveMQ
Write Ahead Log (WAL)
BookKeeper Architecture
RocksDB
LSM == Log-Structured Merge-Tree
RAID Controller
Pravega (Podcast Episode)
BookKeeper etcd Metadata Storage
LevelDB
Ceph (Podcast Episode)
Direct IO
Page Cache

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast
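As a loose companion to the discussion, here is a conceptual sketch in Python of the append-only ledger abstraction that BookKeeper is organized around: a client creates a ledger, appends entries that receive sequential entry IDs, and seals the ledger so readers see an immutable sequence. This is a toy model of the concept, not BookKeeper’s client API; every name here is illustrative.

```python
class Ledger:
    """Toy model of an append-only ledger: entries receive sequential IDs,
    and a sealed ledger no longer accepts writes."""

    def __init__(self, ledger_id: int):
        self.ledger_id = ledger_id
        self.entries: list[bytes] = []
        self.sealed = False

    def add_entry(self, data: bytes) -> int:
        if self.sealed:
            raise RuntimeError("cannot append to a sealed ledger")
        self.entries.append(data)
        return len(self.entries) - 1  # entry id within this ledger

    def read(self, first: int, last: int) -> list[bytes]:
        return self.entries[first:last + 1]

    def close(self) -> None:
        self.sealed = True  # readers now see an immutable record sequence


ledger = Ledger(ledger_id=1)
for record in (b"event-1", b"event-2", b"event-3"):
    ledger.add_entry(record)
ledger.close()
print(ledger.read(0, 2))  # [b'event-1', b'event-2', b'event-3']
```

In real BookKeeper each entry is also replicated across multiple storage nodes (bookies) before a write is acknowledged, which is where the durability guarantees discussed in the episode come from.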
Jun 3, 2021 • 53min

Build Your Analytics With A Collaborative And Expressive SQL IDE Using Querybook

Summary
SQL is the most widely used language for working with data, and yet the tools available for writing and collaborating on it are still clunky and inefficient. Frustrated with the lack of a modern IDE and collaborative workflow for managing the SQL queries and analysis of their big data environments, the team at Pinterest created Querybook. In this episode Justin Mejorada-Pier and Charlie Gu share the story of how the initial prototype for a data catalog ended up as one of their most widely used interfaces to their analytical data. They also discuss the unique combination of features that it offers, how it is implemented, and the path to releasing it as open source. Querybook is an impressive and unique piece of technology that is well worth exploring, so listen and try it out today. A small sketch of the asynchronous query-execution pattern used by its stack follows the links below.

Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Firebolt is the fastest cloud data warehouse. Visit dataengineeringpodcast.com/firebolt to get started. The first 25 visitors will receive a Firebolt t-shirt.
Atlan is a collaborative workspace for data-driven teams, like Github for engineering or Figma for design teams. By acting as a virtual hub for data assets ranging from tables and dashboards to SQL snippets & code, Atlan enables teams to create a single source of truth for all their data assets, and collaborate across the modern data stack through deep integrations with tools like Snowflake, Slack, Looker and more. Go to dataengineeringpodcast.com/atlan today and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $3000 on an annual subscription.
Your host is Tobias Macey and today I’m interviewing Justin Mejorada-Pier and Charlie Gu about Querybook, an open source IDE for your big data projects.

Interview
Introduction
How did you get involved in the area of data management?
Can you describe what Querybook is and the story behind it?
What are the main use cases or workflows that Querybook is designed for?
What are the shortcomings of dashboarding/BI tools that make something like Querybook necessary?
The tag line calls out the fact that Querybook is an IDE for "big data". What are the manifestations of that focus in the feature set and user experience?
Who are the target users of Querybook and how does that inform the feature priorities and user experience?
Can you describe how Querybook is architected?
How have the goals and design changed or evolved since you first began working on it?
What were some of the assumptions or design choices that you had to unwind in the process of open sourcing it?
What is the workflow for someone building a DataDoc with Querybook?
What is the experience of working as a collaborator on an analysis?
How do you handle lifecycle management of query results?
What are your thoughts on the potential for extending Querybook beyond SQL-oriented analysis and integrating something like Jupyter kernels?
What are the most interesting, innovative, or unexpected ways that you have seen Querybook used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Querybook?
When is Querybook the wrong choice?
What do you have planned for the future of Querybook?

Contact Info
Justin: LinkedIn, Website
Charlie: czgu on GitHub

Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat.

Links
Querybook
Announcing Querybook as Open Source
Pinterest
University of Waterloo
Superset (Podcast Episode, Podcast.__init__ Episode)
Sequel Pro
Presto
Trino (Podcast Episode)
Flask
uWSGI (Podcast.__init__ Episode)
Celery
Redis
SocketIO
Elasticsearch (Podcast Episode)
Amundsen (Podcast Episode)
Apache Atlas
DataHub (Podcast Episode)
Okta
LDAP (Lightweight Directory Access Protocol)
Grand Rounds

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast
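Querybook’s stack, per the links above, includes Flask, Celery, and Redis, which points at a common pattern for long-running SQL: the web process enqueues the query and a worker executes it asynchronously while the UI polls for results. Here is a minimal, hypothetical sketch of that pattern, assuming a local Redis broker; this is not Querybook’s actual code and the names are illustrative.

```python
from celery import Celery

app = Celery(
    "queries",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/1",
)

@app.task
def run_query(sql: str) -> list:
    # A real worker would hand the statement to an engine such as
    # Presto/Trino and persist the result set for later retrieval.
    return [["placeholder", 1]]

# Web-handler side: enqueue and poll instead of blocking the request.
result = run_query.delay("SELECT 1")
print(result.id)  # task id the frontend can poll for status/results
```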
Jun 2, 2021 • 51min

Making Data Pipelines Self-Serve For Everyone With Shipyard

Summary
Every part of the business relies on data, yet only a small team has the context and expertise to build and maintain workflows and data pipelines to transform, clean, and integrate it. In order for the true value of your data to be realized without burning out your engineers, you need a way for everyone to get access to the information they care about. To help make that a more tractable problem Blake Burch co-founded Shipyard. In this episode he explains the utility of a low code solution that lets non engineers create their own self-serve pipelines, how the Shipyard platform is designed to make that possible, and how it allows engineers to create reusable tasks to satisfy the specific needs of the business. This is an interesting conversation about how to make data more accessible and more useful by improving the user experience of the tools that we create. A generic sketch of dependency-ordered task execution follows the links below.

Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
When it comes to serving data for AI and ML projects, do you feel like you have to rebuild the plane while you’re flying it across the ocean? Molecula is an enterprise feature store that operationalizes advanced analytics and AI in a format designed for massive machine-scale projects without having to manage endless one-off information requests. With Molecula, data engineers manage one single feature store that serves the entire organization with millisecond query performance whether in the cloud or at your data center. And since it is implemented as an overlay, Molecula doesn’t disrupt legacy systems. High-growth startups use Molecula’s feature store because of its unprecedented speed, cost savings, and simplified access to all enterprise data. From feature extraction to model training to production, the Molecula feature store provides continuously updated feature access, reuse, and sharing without the need to pre-process data. If you need to deliver unprecedented speed, cost savings, and simplified access to large scale, real-time data, visit dataengineeringpodcast.com/molecula and request a demo. Mention that you’re a Data Engineering Podcast listener, and they’ll send you a free t-shirt.
Your host is Tobias Macey and today I’m interviewing Blake Burch about Shipyard, and his mission to create the easiest way for data teams to launch, monitor, and share resilient pipelines with less engineering.

Interview
Introduction
How did you get involved in the area of data management?
Can you describe what you are building at Shipyard and the story behind it?
What are the main goals that you have for Shipyard?
How does it compare to other data orchestration frameworks in the market?
Who are the target users of Shipyard and how does that influence the features and design of the product?
What are your thoughts on the role of data orchestration in the business?
How is the Shipyard platform implemented?
What was your process for identifying the core requirements of the platform?
How have the design and goals of the system evolved since you first began working on it?
Can you describe the workflow of building a data workflow with Shipyard?
How do you manage the dependency chain across tasks in the execution graph? (e.g. task-based, data assets, etc.)
How do you handle testing and data quality management in your workflows?
What is the interface for creating custom task definitions?
How do you address dependencies and sandboxing for custom code?
What is your approach to developing templates?
What are the operational challenges that you have had to address to manage scaling and multi-tenancy in your platform?
What are the most interesting, innovative, or unexpected ways that you have seen Shipyard used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Shipyard?
When is Shipyard the wrong choice?
What do you have planned for the future of Shipyard?

Contact Info
LinkedIn
@BlakeBurch_ on Twitter
Website
blakeburch on GitHub

Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat.

Links
Shipyard
Zapier
Airtable
BigQuery
Snowflake (Podcast Episode)
Docker
ECS == Elastic Container Service
Great Expectations (Podcast Episode)
Monte Carlo (Podcast Episode)
Soda Data (Podcast Episode)

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast
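One interview thread above asks how the dependency chain across tasks in the execution graph is managed. As a generic illustration of task-based dependency ordering (not Shipyard’s actual engine or API), here is a minimal sketch using the standard library’s topological sorter:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def extract():   print("pull data from the source")
def transform(): print("clean and reshape the data")
def load():      print("write results to the destination")

# Map each task to the set of tasks it depends on.
graph = {extract: set(), transform: {extract}, load: {transform}}

for task in TopologicalSorter(graph).static_order():
    task()  # runs extract -> transform -> load
```

A real orchestrator layers retries, logging, and parallel execution of independent branches on top of this basic ordering.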
May 28, 2021 • 53min

Paving The Road For Fast Analytics On Distributed Clouds With The Yellowbrick Data Warehouse

Summary
The data warehouse has become the focal point of the modern data platform. With increased usage of data across businesses, and a diversity of locations and environments where data needs to be managed, the warehouse engine needs to be fast and easy to manage. Yellowbrick is a data warehouse platform that was built from the ground up for speed, and can work across clouds and all the way to the edge. In this episode CTO Mark Cusack explains how the engine is architected, the benefits that speed and predictable pricing have for the organization, and how you can simplify your platform by putting the warehouse close to the data, instead of the other way around.

Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Firebolt is the fastest cloud data warehouse. Visit dataengineeringpodcast.com/firebolt to get started. The first 25 visitors will receive a Firebolt t-shirt.
Atlan is a collaborative workspace for data-driven teams, like Github for engineering or Figma for design teams. By acting as a virtual hub for data assets ranging from tables and dashboards to SQL snippets & code, Atlan enables teams to create a single source of truth for all their data assets, and collaborate across the modern data stack through deep integrations with tools like Snowflake, Slack, Looker and more. Go to dataengineeringpodcast.com/atlan today and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $3000 on an annual subscription.
Your host is Tobias Macey and today I’m interviewing Mark Cusack about Yellowbrick, a data warehouse designed for distributed clouds.

Interview
Introduction
How did you get involved in the area of data management?
Can you start by describing what Yellowbrick is and some of the story behind it?
What does the term "distributed cloud" signify and what challenges are associated with it?
How would you characterize Yellowbrick’s position in the database/DWH market?
How is Yellowbrick architected?
How have the goals and design of the platform changed or evolved over time?
How does Yellowbrick maintain visibility across the different data locations that it is responsible for?
What capabilities does it offer for being able to join across the disparate "clouds"?
What are some data modeling strategies that users should consider when designing their deployment of Yellowbrick?
What are some of the capabilities of Yellowbrick that you find most useful or technically interesting?
For someone who is adopting Yellowbrick, what is the process for getting it integrated into their data systems?
What are the most underutilized, overlooked, or misunderstood features of Yellowbrick?
What are the most interesting, innovative, or unexpected ways that you have seen Yellowbrick used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on and with Yellowbrick?
When is Yellowbrick the wrong choice?
What do you have planned for the future of the product?

Contact Info
LinkedIn
@markcusack on Twitter

Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links
Yellowbrick
Teradata
Rainstor
Distributed Cloud
Hybrid Cloud
SwimOS (Podcast Episode)
Kafka
Pulsar (Podcast Episode)
Snowflake (Podcast Episode)
AWS Redshift
MPP == Massively Parallel Processing
Presto
Trino (Podcast Episode)
L3 Cache
NVMe
Reactive Programming
Coroutine
Star Schema
Denodo
Lexis Nexis
Vertica
Netezza
Greenplum
PostgreSQL (Podcast Episode)
Clickhouse (Podcast Episode)
Erasure Coding

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast
May 25, 2021 • 47min

Easily Build Advanced Similarity Search With The Pinecone Vector Database

Summary
Machine learning models use vectors as the natural mechanism for representing their internal state. The problem is that in order for the models to integrate with external systems their internal state has to be translated into a lower dimension. To eliminate this impedance mismatch Edo Liberty founded Pinecone to build a database that works natively with vectors. In this episode he explains how this technology will allow teams to accelerate the speed of innovation, how vectors make it possible to build more advanced search functionality, and how Pinecone is architected. This is an interesting conversation about how reconsidering the architecture of your systems can unlock impressive new capabilities. A brute-force similarity-search sketch follows the links below.

Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
When it comes to serving data for AI and ML projects, do you feel like you have to rebuild the plane while you’re flying it across the ocean? Molecula is an enterprise feature store that operationalizes advanced analytics and AI in a format designed for massive machine-scale projects without having to manage endless one-off information requests. With Molecula, data engineers manage one single feature store that serves the entire organization with millisecond query performance whether in the cloud or at your data center. And since it is implemented as an overlay, Molecula doesn’t disrupt legacy systems. High-growth startups use Molecula’s feature store because of its unprecedented speed, cost savings, and simplified access to all enterprise data. From feature extraction to model training to production, the Molecula feature store provides continuously updated feature access, reuse, and sharing without the need to pre-process data. If you need to deliver unprecedented speed, cost savings, and simplified access to large scale, real-time data, visit dataengineeringpodcast.com/molecula and request a demo. Mention that you’re a Data Engineering Podcast listener, and they’ll send you a free t-shirt.
Your host is Tobias Macey and today I’m interviewing Edo Liberty about Pinecone, a vector database for powering machine learning and similarity search.

Interview
Introduction
How did you get involved in the area of data management?
Can you start by describing what Pinecone is and the story behind it?
What are some of the contexts where someone would want to perform a similarity search?
What are the considerations that someone should be aware of when deciding between Pinecone and Solr/Lucene for a search oriented use case?
What are some of the other use cases that Pinecone enables?
In the absence of Pinecone, what kinds of systems and solutions are people building to address those use cases?
Where does Pinecone sit in the lifecycle of data and how does it integrate with the broader data management ecosystem?
What are some of the systems, tools, or frameworks that Pinecone might replace?
How is Pinecone implemented?
How has the architecture evolved since you first began working on it?
What are the most complex or difficult aspects of building Pinecone?
Who is your target user and how does that inform the user experience design and product development priorities?
For someone who wants to start using Pinecone, what is involved in populating it with data and building an analysis or service with it?
What are some of the data modeling considerations when building a set of vectors in Pinecone?
What are some of the most interesting, unexpected, or innovative ways that you have seen Pinecone used?
What are the most interesting, unexpected, or challenging lessons that you have learned while building and growing the Pinecone technology and business?
When is Pinecone the wrong choice?
What do you have planned for the future of Pinecone?

Contact Info
Website
LinkedIn

Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
Thank you for listening! Don’t forget to check out our other show, Podcast.__init__ to learn about the Python language, its community, and the innovative ways it is being used.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat.

Links
Pinecone
Theoretical Physics
High Dimensional Geometry
AWS Sagemaker
Visual Cortex
Temporal Lobe
Inverted Index
Elasticsearch (Podcast Episode)
Solr
Lucene
NMSLib
Johnson-Lindenstrauss Lemma

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast
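To make the idea of similarity search concrete, here is a brute-force sketch using cosine similarity with NumPy. Databases like Pinecone exist precisely because this exhaustive scan stops scaling, replacing it with approximate nearest-neighbor indexes (see NMSLib in the links); the code below only illustrates the operation being accelerated.

```python
import numpy as np

def top_k(query: np.ndarray, index: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k vectors in `index` most similar to `query`."""
    # Cosine similarity is the dot product of L2-normalized vectors.
    q = query / np.linalg.norm(query)
    m = index / np.linalg.norm(index, axis=1, keepdims=True)
    return np.argsort(m @ q)[::-1][:k]

rng = np.random.default_rng(42)
index = rng.normal(size=(1000, 64))  # 1000 item embeddings, 64 dimensions
query = rng.normal(size=64)          # embedding of the incoming query
print(top_k(query, index))
```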
May 21, 2021 • 56min

A Holistic Approach To Data Governance Through Self Reflection At Collibra

Summary
Data governance is a phrase that means many different things to many different people. This is because it is actually a concept that encompasses the entire lifecycle of data, across all of the people in an organization who interact with it. Stijn Christiaens co-founded Collibra with the goal of addressing the wide variety of technological aspects that are necessary to realize such an important and expansive process. In this episode he shares his thoughts on the balance between human and technological processes that are necessary for a well-managed data governance strategy, how Collibra is designed to aid in that endeavor, and his experiences using the platform that his company is building to power Collibra itself. This is an excellent conversation that spans the engineering and philosophical complexities of an important and ever-present aspect of working with data.

Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
Your host is Tobias Macey and today I’m interviewing Stijn Christiaens about data governance in the enterprise and how Collibra applies the lessons learned from their customers to their own business.

Interview
Introduction
How did you get involved in the area of data management?
Can you start by describing what you are building at Collibra and the story behind the company?
What does "data governance" mean to you, and how does that definition inform your work at Collibra?
How would you characterize the current landscape of "data governance" offerings and Collibra’s position within it?
What are the elements of governance that are often ignored in small/medium businesses but which are essential for the enterprise? (e.g. data stewards, business glossaries, etc.)
One of the most important tasks as a data professional is to establish and maintain trust in the information you are curating. What are the biggest obstacles to overcome in that mission?
What are some of the data problems that you will only find at large or complex organizations?
How does Collibra help to tame that complexity?
Who are the end users of Collibra within an organization?
Can you talk through the workflow and various interactions that your customers have as it relates to the overall flow of data through an organization?
Can you describe how the Collibra platform is implemented?
How has the scope and design of the system evolved since you first began working on it?
You are currently leading a team that uses Collibra to manage the operations of the business. What are some of the most notable surprises that you have learned from being your own customer?
What are some of the weak points that you have been able to identify and resolve?
How have you been able to use those lessons to help your customers?
What are the activities that are resistant to automation?
How do you design the system to allow for a smooth handoff between mechanistic and humanistic processes?
What are some of the most interesting, innovative, or unexpected ways that you have seen Collibra used?
What are the most interesting, unexpected, or challenging lessons that you have learned while building and growing Collibra, and running the internal data office?
When is Collibra the wrong choice?
What do you have planned for the future of the platform?

Contact Info
LinkedIn
@stichris on Twitter

Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links
Collibra
Collibra Data Office
Electrical Engineering
Resistor Color Codes
STAR Lab (semantics, technology, and research)
Microsoft Azure
Data Governance
GDPR
Chief Data Officer
Dunbar’s Number
Business Glossary
Data Steward
ERP == Enterprise Resource Planning
CRM == Customer Relationship Management
Data Ownership
Data Mesh (Podcast Episode)

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast
May 18, 2021 • 58min

Unlocking The Power of Data Lineage In Your Platform with OpenLineage

Julien Le Dem, a data engineer and CTO of Datakin, discusses the significance of data lineage in understanding data quality and pipeline impacts. He introduces OpenLineage, a project aimed at standardizing lineage metadata across various platforms, promoting collaboration among competing companies. Julien explains its core model and how it benefits data observability, trust, and reliability. He emphasizes the importance of community contributions and outlines the integration process, highlighting the pressing need for better tooling in pipeline observability.
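To give a feel for what standardized lineage metadata looks like, here is a hedged sketch of an OpenLineage-style run event built as a plain dictionary, following the project’s core model of jobs, runs, and input/output datasets. The field names reflect one reading of the spec and the values are placeholders; check the official schema before relying on them.

```python
import json
import uuid
from datetime import datetime, timezone

event = {
    "eventType": "COMPLETE",  # e.g. START or COMPLETE
    "eventTime": datetime.now(timezone.utc).isoformat(),
    "run": {"runId": str(uuid.uuid4())},
    "job": {"namespace": "my-pipeline", "name": "daily_orders_load"},
    "inputs": [{"namespace": "postgres://prod", "name": "public.orders"}],
    "outputs": [{"namespace": "warehouse", "name": "analytics.orders_daily"}],
    "producer": "https://example.com/my-scheduler",  # placeholder URL
}
print(json.dumps(event, indent=2))  # POSTed to a lineage collector in practice
```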
May 14, 2021 • 1h 15min

Building Your Data Warehouse On Top Of PostgreSQL

Explore the use of Postgres as a data warehouse, including its evolution, optimizations, extensibility, and innovative use cases. Learn about the challenges and misconceptions of working with Postgres at warehouse scale, the potential of user-defined functions, and the gaps in data management technology.
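As a small illustration of warehouse-style usage of Postgres, here is a hedged sketch that maintains a pre-aggregated rollup with a materialized view. It assumes a reachable local database and an existing events(user_id, amount, created_at) table; both are assumptions made for the example.

```python
import psycopg2

# Assumption: a local Postgres with an events(user_id, amount, created_at) table.
conn = psycopg2.connect("dbname=analytics user=postgres host=localhost")
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE MATERIALIZED VIEW IF NOT EXISTS daily_revenue AS
        SELECT created_at::date AS day, SUM(amount) AS revenue
        FROM events
        GROUP BY 1
    """)
    # Refresh on a schedule instead of recomputing the aggregate per query.
    cur.execute("REFRESH MATERIALIZED VIEW daily_revenue")
    cur.execute("SELECT day, revenue FROM daily_revenue ORDER BY day DESC LIMIT 7")
    for day, revenue in cur.fetchall():
        print(day, revenue)
```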
May 11, 2021 • 54min

Making Analytical APIs Fast With Tinybird

Summary
Building an API for real-time data is a challenging project. Making it robust, scalable, and fast is a full time job. The team at Tinybird wants to make it easy to turn a continuous stream of data into a production ready API or data product. In this episode CEO Jorge Sancha explains how they have architected their system to handle high data throughput and fast response times, and why they have invested heavily in Clickhouse as the core of their platform. This is a great conversation about the challenges of building a maintainable business from a technical and product perspective. A minimal sketch of querying ClickHouse over HTTP follows the links below.

Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
Ascend.io — recognized as a 2021 Gartner Cool Vendor in Enterprise AI Operationalization and Engineering—empowers data teams to build, scale, and operate declarative data pipelines with 95% less code and zero maintenance. Connect to any data source using Ascend’s new flex code data connectors, rapidly iterate on transformations and send data to any destination in a fraction of the time it traditionally takes—just ask companies like Harry’s, HNI, and Mayvenn. Sound exciting? Come join the team! We’re hiring data engineers, so head on over to dataengineeringpodcast.com/ascend and check out our careers page to learn more.
Your host is Tobias Macey and today I’m interviewing Jorge Sancha about Tinybird, a platform to easily build analytical APIs for real-time data.

Interview
Introduction
How did you get involved in the area of data management?
Can you start by describing what you are building at Tinybird and the story behind it?
What are some of the types of use cases that your customers are focused on?
What are the areas of complexity that come up when building analytical APIs that are often overlooked when first designing a system to operate on and expose real-time data?
What are the supporting systems that are necessary and useful for operating this kind of system which contribute to the overall time and engineering cost beyond the baseline functionality?
How is the Tinybird platform architected?
How have the goals and implementation of Tinybird changed or evolved since you first began building it?
What were your criteria for selecting the core building block of your platform, and how did that lead to your choice to build on top of Clickhouse?
What are some of the sharp edges that you have run into while operating Clickhouse?
What are some of the custom tools or systems that you have built to help deal with them?
What are some of the performance challenges that an API built with Tinybird might run into?
What are the considerations that users should be aware of to avoid introducing performance issues?
How do you handle multi-tenancy in your platform? (e.g. separate clusters, in-database quotas, etc.)
For users of Tinybird, can you talk through the workflow of getting it integrated into their platform and designing an API from their data?
What are some of the most interesting, innovative, or unexpected ways that you have seen Tinybird used?
What are the most interesting, unexpected, or challenging lessons that you have learned while building and growing Tinybird?
When is Tinybird the wrong choice?
What do you have planned for the future of the product and business?

Contact Info
@jorgesancha on Twitter
LinkedIn
jorgesancha on GitHub

Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links
Tinybird
Carto
PostgreSQL (Podcast Episode)
PostGIS
Clickhouse (Podcast Episode)
Kafka
Tornado (Podcast.__init__ Episode)
Redis
Formula 1
Web Application Firewall

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast
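Because Tinybird builds on ClickHouse, it helps to see how short the path from SQL to an HTTP response can be. Below is a minimal sketch against ClickHouse’s built-in HTTP interface, assuming a local server on its default port 8123 with the query log enabled; this is plain ClickHouse, not Tinybird’s own API.

```python
import requests

# ClickHouse accepts SQL as the body of a plain HTTP POST.
sql = """
    SELECT toStartOfMinute(event_time) AS minute, count() AS queries
    FROM system.query_log
    GROUP BY minute
    ORDER BY minute DESC
    LIMIT 5
    FORMAT JSON
"""
resp = requests.post("http://localhost:8123/", data=sql, timeout=10)
resp.raise_for_status()
print(resp.json()["data"])  # rows ready to be served from an API endpoint
```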
May 7, 2021 • 40min

Making Spark Cloud Native At Data Mechanics

Summary
Spark is one of the most well-known frameworks for data processing, whether for batch or streaming, ETL or ML, and at any scale. Because of its popularity it has been deployed on every kind of platform you can think of. In this episode Jean-Yves Stephan shares the work that he is doing at Data Mechanics to make it sing on Kubernetes. He explains how operating in a cloud-native context simplifies some aspects of running the system while complicating others, how it simplifies the development and experimentation cycle, and how you can get a head start using their pre-built Spark container. This is a great conversation for understanding how new ways of operating systems can have broader impacts on how they are being used. A sketch of a Kubernetes-targeted Spark session follows the links below.

Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
Firebolt is the fastest cloud data warehouse. Visit dataengineeringpodcast.com/firebolt to get started. The first 25 visitors will receive a Firebolt t-shirt.
Atlan is a collaborative workspace for data-driven teams, like Github for engineering or Figma for design teams. By acting as a virtual hub for data assets ranging from tables and dashboards to SQL snippets & code, Atlan enables teams to create a single source of truth for all their data assets, and collaborate across the modern data stack through deep integrations with tools like Snowflake, Slack, Looker and more. Go to dataengineeringpodcast.com/atlan today and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $3000 on an annual subscription.
Your host is Tobias Macey and today I’m interviewing Jean-Yves Stephan about Data Mechanics, a cloud-native Spark platform for data engineers.

Interview
Introduction
How did you get involved in the area of data management?
Can you start by giving an overview of what you are building at Data Mechanics and the story behind it?
What are the operational characteristics of Spark that make it difficult to run in a cloud-optimized environment?
How do you handle retries, state redistribution, etc. when instances get pre-empted during the middle of a job execution?
What are some of the tactics that you have found useful when designing jobs to make them more resilient to interruptions?
What are the customizations that you have had to make to Spark itself?
What are some of the supporting tools that you have built to allow for running Spark in a Kubernetes environment?
How is the Data Mechanics platform implemented?
How have the goals and design of the platform changed or evolved since you first began working on it?
How does running Spark in a container/Kubernetes environment change the ways that you and your customers think about how and where to use it?
How does it impact the development workflow for data engineers and data scientists?
What are some of the most interesting, unexpected, or challenging lessons that you have learned while building the Data Mechanics product?
When is Spark/Data Mechanics the wrong choice?
What do you have planned for the future of the platform?

Contact Info
LinkedIn

Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links
Data Mechanics
Databricks
Stanford
Andrew Ng
Mining Massive Datasets
Spark
Kubernetes
Spot Instances
Infiniband
Data Mechanics Spark Container Image
Delight – Spark monitoring utility
Terraform
Blue/Green Deployment
Spark Operator for Kubernetes
JupyterHub
Jupyter Enterprise Gateway

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast
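As a rough sketch of what targeting Kubernetes from Spark looks like, here is a hedged PySpark configuration. The API-server address, namespace, and container image are placeholders, and the spark.kubernetes.* keys come from upstream Spark’s documented Kubernetes support rather than the Data Mechanics platform itself.

```python
from pyspark.sql import SparkSession

# Placeholders: your cluster's API server, namespace, and container image.
spark = (
    SparkSession.builder
    .master("k8s://https://kubernetes.example.com:6443")
    .appName("cloud-native-spark")
    .config("spark.kubernetes.namespace", "data-jobs")
    .config("spark.kubernetes.container.image", "example.com/spark:3.1.1")
    .config("spark.executor.instances", "4")
    .getOrCreate()
)

df = spark.range(1_000_000)
print(df.selectExpr("sum(id) AS total").collect())
spark.stop()
```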
