Confluent Developer ft. Tim Berglund, Adi Polak & Viktor Gamov

Confluent
May 5, 2022 • 35min

Streaming Analytics on 50M Events Per Day with Confluent Cloud at Picnic

What are useful practices for migrating a system to Apache Kafka® and Confluent Cloud, and why use Confluent to modernize your architecture?

Dima Kalashnikov (Technical Lead, Picnic Technologies) is part of a small analytics platform team at Picnic, an online-only European grocery store that processes around 45 million customer events and five million internal events daily. An underlying goal at Picnic is to make decisions as data-driven as possible, so Dima's team collects events on all aspects of the company: from new stock arriving at the warehouse, to customer behavior on their websites, to statistics related to delivery trucks. Data is sent to internal systems and to a data warehouse.

Picnic recently migrated from their existing solution to Confluent Cloud for several reasons:

Ecosystem and community: Picnic liked the tooling present in the Kafka ecosystem. Being a small team, they can't devote extra time to building boilerplate code such as connectors for their data sources or extensive monitoring functionality. Picnic also has analysts who use SQL, so they appreciated the processing capabilities of ksqlDB. Finally, they found that help isn't hard to locate if one gets stuck.

Monitoring: They wanted better monitoring. Specifically, they found it challenging to measure SLAs with their former system because they couldn't easily detect the positions of consumers in their streams (see the consumer-offset sketch after the episode links).

Scaling and data retention times: Picnic is growing, so they needed to scale horizontally without having to worry about manual reassignment. They also hit a wall with their previous streaming solution with respect to how long they could retain data, a serious issue for a company that makes data-first decisions.

Cloud: Another consequence of being a small team is that they don't have resources for extensive maintenance of their tooling.

Dima's team was extremely careful and took their time with the migration. They ran a pilot system alongside the old system to make sure it could achieve their fundamental performance goals: complete stability, zero data loss, and no performance degradation. They also wanted to evaluate its costs.

The pilot was successful, and a second, IoT pilot is now in the works that uses Confluent Cloud and Debezium to track the robotics data coming from their automated fulfillment center. And it's a lot of data: Dima mentions that the robots in the center generate data sets as large as their customer event streams.

EPISODE LINKS
Picnic Analytics Platform: Migration from AWS Kinesis to Confluent Cloud
Picnic Modernizes Data Architecture with Confluent
Data Engineer: Event Streaming Platform
Watch this podcast in video
Kris Jenkins' Twitter
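The SLA pain point above, knowing where each consumer group sits in its streams, maps to a standard Kafka capability: comparing a group's committed offsets with the latest offsets in each partition. The sketch below is a hedged illustration, not Picnic's tooling; the broker address and group ID are placeholders.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;

public class ConsumerLagCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // Committed offsets for a (hypothetical) consumer group
            Map<TopicPartition, OffsetAndMetadata> committed =
                admin.listConsumerGroupOffsets("analytics-pipeline")
                     .partitionsToOffsetAndMetadata().get();

            // Latest offsets for the same partitions
            Map<TopicPartition, OffsetSpec> latestSpec = committed.keySet().stream()
                .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest()));
            Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> latest =
                admin.listOffsets(latestSpec).all().get();

            // Lag per partition = latest offset minus committed offset
            committed.forEach((tp, meta) -> {
                long lag = latest.get(tp).offset() - meta.offset();
                System.out.printf("%s lag=%d%n", tp, lag);
            });
        }
    }
}
```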
May 3, 2022 • 2min

Build a Data Streaming App with Apache Kafka and JS - Coding in Motion

Coding is inherently enjoyable and experimental. With the goal of bringing fun into programming, Kris Jenkins (Senior Developer Advocate, Confluent) hosts a new series of hands-on workshops, Coding in Motion, to teach you how to use Apache Kafka® and data streaming technologies for real-life use cases.

In the first episode, Sound & Vision, Kris walks you through the end-to-end process of building a real-time, full-stack data streaming application from scratch using Kafka and JavaScript/TypeScript. During the workshop, you'll learn to stream musical MIDI data into fully managed Kafka using Confluent Cloud, then process and transform the raw data stream using ksqlDB. Finally, the enriched data streams are pushed to a web server to display the data in a 3D graphical visualization.

Listen as Kris previews the first episode of Coding in Motion: Sound & Vision, and join him in the workshop premiere to learn more.

EPISODE LINKS
Coding in Motion Workshop: Build a Streaming App for Sound & Vision
Watch the video version of this podcast
Kris Jenkins' Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
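The workshop itself is written in JavaScript/TypeScript. Purely as an illustration of the first step, producing events into a Kafka topic, here is a minimal sketch using the standard Java client; the topic name, broker address, and the idea of serializing a MIDI note as a small JSON string are assumptions, not the workshop's code.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class MidiEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");          // placeholder broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // A single (hypothetical) MIDI note event, keyed by instrument
            String noteEvent = "{\"note\": 60, \"velocity\": 96, \"ts\": 1651500000000}";
            producer.send(new ProducerRecord<>("raw-midi-events", "piano", noteEvent));
            producer.flush();
        }
    }
}
```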
Apr 28, 2022 • 49min

Optimizing Apache Kafka's Internals with Its Co-Creator Jun Rao

You already know Apache Kafka® is a distributed event streaming system for setting your data in motion, but how does its internal architecture work? No one can explain Kafka's internal architecture better than Jun Rao, one of its original creators and Co-Founder of Confluent. Jun has an in-depth understanding of Kafka that few others can claim, and he shares that with us in this episode and in his new Kafka Internals course on Confluent Developer.

One of Jun's goals in publishing the Kafka Internals course was to cover the evolution of Kafka since its initial launch. In line with that goal, he discusses the history of Kafka development, including the original thinking behind some of its design decisions, as well as how its features have been improved to better meet its key goals of durability, scalability, and real-time data.

With respect to its initial design, Jun relates how Kafka was conceived from the ground up as a distributed system, with compute and storage always maintained as separate entities so that they could scale independently. Additionally, he shares that Kafka was deliberately made for high throughput, since many of the popular messaging systems at the time of its invention were single node, but his team needed to process large volumes of non-transactional data, such as application metrics, various logs, click streams, and IoT information.

As for the evolution of its features, Jun explains these two topics, among others, at length:

Consumer rebalancing protocol: The original "stop the world" approach to Kafka's consumer rebalancing, although revolutionary at the time of its launch, was eventually improved to take a more incremental approach (a configuration sketch follows the episode links).

Cluster metadata: Moving from external ZooKeeper to the built-in KRaft protocol allows clusters to scale roughly ten times further, according to Jun, and it also means you only need to worry about running a single binary.

The Kafka Internals course consists of eleven concise modules, each dense with detail, covering Kafka fundamentals in technical depth. The course also pairs with four hands-on exercise modules led by Senior Developer Advocate Danica Fine.

EPISODE LINKS
Kafka Internals course
How Apache Kafka Works: An Introduction to Kafka's Internals
Coding in Motion Workshop: Build a Streaming App
Watch the video version of this podcast
Kris Jenkins' Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
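The incremental approach Jun describes is exposed to Java consumers as the cooperative-sticky partition assignor, which lets the group keep processing most partitions while a rebalance is in progress. A minimal, hedged configuration sketch follows; the broker address, group ID, and topic are placeholders.

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class CooperativeRebalanceConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "internals-demo");          // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Opt in to incremental cooperative rebalancing instead of "stop the world"
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                  CooperativeStickyAssignor.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events"));                            // placeholder topic
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            records.forEach(r -> System.out.printf("%s -> %s%n", r.key(), r.value()));
        }
    }
}
```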
Apr 21, 2022 • 51min

Using Event-Driven Design with Apache Kafka Streaming Applications ft. Bobby Calderwood

What is event modeling and how does it differ from standard data modeling?

In this episode of Streaming Audio, Bobby Calderwood, founder of Evident Systems and creator of oNote, observes that at the dawn of the computer age, because memory and computing power were expensive, people began to move away from time-and-narrative-oriented record-keeping systems (in the manner of a ship's log or a financial ledger) to systems based on aggregation. Such data-model systems, still dominant today, only retain the current state generated from their inputs, with the inputs themselves being lost. A converse approach to the reductive data-model system is the event-model system, which is enabled by tools like Apache Kafka® and which effectively saves every bit of activity that the system generates. The event model actually marks a return, in a sense, to the earlier, narrative-like recording methods.

To further illustrate, Bobby uses a chess example to show the distinction between the data model and the event model. In a chess context, the event modeling system would retain each move in the game from beginning to end, such that any moment in the game could be derived by replaying the sequence of moves. Conversely, chess based on the data model would save only the current state of the game, destructively mutating the data structure to reflect it (a small code sketch of this distinction follows the episode links).

The event model maintains an immutable log of all of a system's activity, which means that teams downstream from the transactions team have access to all of the system's data, not just the end transactions, and they can analyze the data as they wish in order to draw their own conclusions. Thus there can be several read models over the same body of events. Bobby has found that non-programming stakeholder teams tend to intuitively comprehend the event model better than other data paradigms, given its natural narrative form.

Transitioning from the data model to the event model, however, can be challenging. Bobby's oNote, an event modeling platform, aims to help by providing a digital canvas that allows a system to be visually redesigned according to the event model. oNote generates Avro schemas based on its models and also uses Avro to generate runtime code.

EPISODE LINKS
Event Sourcing and Event Storage with Apache Kafka
oNote
Event Modeling
Toward a Functional Programming Analogy for Microservices
Event-Driven Architecture - Common Mistakes and Valuable Lessons ft. Simon Aubury
Watch the video version of this podcast
Coding in Motion Workshop: Build a Streaming App
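Bobby's chess analogy can be made concrete in a few lines. In the hedged sketch below (not from the episode), the event model keeps an immutable, append-only list of moves and derives any position by replaying them, while the data model mutates a single board in place and forgets how it got there.

```java
import java.util.ArrayList;
import java.util.List;

public class ChessModels {
    // Event model: an immutable log of everything that happened
    record Move(String from, String to) {}

    static String[][] replay(List<Move> moves, int uptoMove) {
        String[][] board = initialBoard();
        // Any historical position can be rebuilt by replaying a prefix of the log
        for (Move m : moves.subList(0, uptoMove)) {
            apply(board, m);
        }
        return board;
    }

    // Data model: only the current state survives; each move overwrites the previous state
    static void apply(String[][] board, Move m) {
        int fromFile = m.from().charAt(0) - 'a', fromRank = m.from().charAt(1) - '1';
        int toFile = m.to().charAt(0) - 'a', toRank = m.to().charAt(1) - '1';
        board[toRank][toFile] = board[fromRank][fromFile];
        board[fromRank][fromFile] = null;
    }

    static String[][] initialBoard() {
        String[][] board = new String[8][8];
        board[1][4] = "P"; // just enough pieces for the example
        board[6][4] = "p";
        return board;
    }

    public static void main(String[] args) {
        List<Move> log = new ArrayList<>(List.of(new Move("e2", "e4"), new Move("e7", "e5")));
        // Downstream readers can build their own views from the same event log
        String[][] afterFirstMove = replay(log, 1);
        System.out.println("e4 occupied after move 1: " + (afterFirstMove[3][4] != null));
    }
}
```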
Apr 13, 2022 • 38min

Monitoring Extreme-Scale Apache Kafka Using eBPF at New Relic

New Relic runs one of the larger Apache Kafka® installations in the world, ingesting circa 125 petabytes a month, or approximately three billion data points per minute. Anton Rodriguez is the architect of the system, responsible for hundreds of clusters and thousands of clients, some of them implemented in non-standard technologies. In addition to the large volume of servers, he works with many teams, which must all work together when issues arise.

Monitoring New Relic's large Kafka installation is critical and of course challenging, even for a company that itself specializes in monitoring. Specific obstacles include determining when rebalances are happening, identifying particularly old consumers, measuring consumer lag, and finding a way to observe all producing and consuming applications (a simple client-side rebalance listener sketch follows the episode links).

One way that New Relic has improved the monitoring of its architecture is by directly consuming metrics from the Linux kernel using its eBPF technology, which lets programs run inside the kernel without changing source code or adding additional modules (the open source tool Pixie enables access to eBPF in a Kafka context). eBPF is very low impact, so it doesn't affect services, and it allows New Relic to see what's happening at the network level and to take action as necessary.

EPISODE LINKS
Monitoring Kafka Without Instrumentation Using eBPF
What Is eBPF and Why Does It Matter for Observability?
Kafka Monitoring
Kafka Summit: Monitoring Kafka Without Instrumentation Using eBPF
Watch the video version of this podcast
Kris Jenkins' Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
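One of the obstacles above, knowing when rebalances are happening, can at least be surfaced from inside a Java client with a rebalance listener. This is a generic, hedged sketch, not New Relic's eBPF-based approach; the broker, group, and topic names are placeholders.

```java
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collection;
import java.util.List;
import java.util.Properties;

public class RebalanceLoggingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");    // placeholder
        props.put("group.id", "telemetry-ingest");            // placeholder
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("metrics"), new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                    // Called as a rebalance begins; a metric or log line here makes
                    // rebalance frequency visible to the monitoring stack
                    System.out.println("Rebalance started, revoked: " + partitions);
                }

                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    System.out.println("Rebalance finished, assigned: " + partitions);
                }
            });
            consumer.poll(Duration.ofSeconds(1));
        }
    }
}
```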
Apr 12, 2022 • 10min

Confluent Platform 7.1: New Features + Updates

Confluent Platform 7.1 expands upon its already innovative features, adding improvements in key areas that benefit data consistency, allow for increased speed and scale, and enhance resilience and reliability.

The previous Confluent Platform 7.0 release introduced Cluster Linking, which enables you to bridge on-premises and cloud clusters, among other configurations. Maintaining data quality standards across multiple environments can be challenging, though. To assist with this problem, CP 7.1 adds Schema Linking, which lets you share consistent schemas across your clusters, synced in real time.

Confluent for Kubernetes lets you build your own private-cloud Apache Kafka® service. Now you can enhance the global resilience of your architecture by deploying across multiple regions. With the new release you can also configure custom volumes attached to Confluent deployments, and you can declaratively define and manage the new Schema Links. As of this release, Confluent for Kubernetes supports the full feature set of the Confluent Platform.

Tiered Storage was released in Confluent Platform 6.0, and it offers immense benefits for a cluster by allowing older topic data to be offloaded out of the broker and into slower, long-term object storage. The reduced amount of local data makes maintenance, scaling out, recovery from failure, and adding brokers all much quicker. CP 7.1 adds compatibility with object storage from Nutanix, NetApp, MinIO, and Dell, integrations that have been put through rigorous performance and quality testing.

Health+, introduced in CP 6.2, offers intelligent cloud-based alerting and monitoring tools in a dashboard. New as of CP 7.1, you can choose to be alerted when anomalies in broker latency are detected, when there is an issue with the connectors linking Kafka and external systems, and when a ksqlDB query will interfere with a continuous, real-time processing stream.

Shipping with CP 7.1 is ksqlDB 0.23, which adds support for pull queries against streams as opposed to only against tables, a milestone that greatly helps when debugging, since a subset of messages within a topic can now be inspected. ksqlDB 0.23 also supports custom schema selection, which lets you choose a specific schema ID when you create a new stream or table, rather than use the latest registered schema. A number of additional smaller enhancements are also included in the release.

EPISODE LINKS
Download Confluent Platform 7.1
Check out the release notes
Read the Confluent Platform 7.1 blog post
Watch the video version of this podcast
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Apr 7, 2022 • 1h 11min

Scaling an Apache Kafka Based Architecture at Therapie Clinic

Scaling Apache Kafka® can be tricky, let alone scaling a team. When he was first hired, Domenico Fioravanti of Therapie Clinic was given the challenging task of assembling a sizable tech team from scratch, while simultaneously building a scalable and decoupled architecture from the ground up. In addition, he wanted to deliver value to the company from day one. One way that Domenico ultimately accomplished these goals was by focusing on managed solutions in order to avoid large investments in engineering know-how. Another was to deliver quickly to production by using the existing knowledge of his team.

Domenico's biggest initial priority was to build a real-time reporting dashboard that collated data generated by third-party systems, such as call centers and front-of-house software solutions that managed bookings and transactions. (Before Domenico's arrival, all reporting had been done by aggregating data from different sources through an expensive, manual, error-prone, and slow process, which tended to result in late and incomplete insights.)

Establishing an initial stack with AWS and a BI/analytics tool took only a month and required minimal DevOps resources, but Domenico's team ended up wanting to leverage their efforts to free up third-party data for more than just the reporting/data-insights use case.

So they began considering Apache Kafka® as a central repository for their data. For Kafka itself, they investigated Amazon MSK vs. Confluent, carefully weighing setup and time costs, maintenance costs, limitations, security, availability, risks, migration costs, Kafka update frequency, observability, and errors and troubleshooting needs.

Domenico's team settled on Confluent Cloud and built the following stack:
AWS AppSync, a managed GraphQL layer to interact with and abstract third-party APIs (data sources)
AWS Lambdas for extracting data and producing to Kafka topics
Kafka topics for the raw as well as transformed data
Kafka Streams for data transformation (a sketch of this step appears after the episode links)
Kafka Redshift sink connector for loading data
AWS Redshift as the destination cloud data warehouse
Looker for business intelligence and big data analytics

This stack allowed the company's data to be consumed by multiple teams in a scalable way. Eventually, DynamoDB was added, and by the end of a year, along with a scalable architecture, Domenico had grown his staff to 45 members across six teams.

EPISODE LINKS
Confluent's Data Streaming Platform Can Save Over $2.5M vs. Self-Managing Apache Kafka
Accelerate Your Cloud Data Warehouse Migration and Modernization with Confluent
Watch the video version of this podcast
Kris Jenkins' Twitter
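As a rough illustration of the "Kafka Streams for data transformation" step, here is a hedged sketch, not Therapie Clinic's code: a minimal topology that reads a raw topic, applies a placeholder transformation, and writes to the topic a sink connector could read from. Topic names, the application ID, and the uppercase transform are assumptions.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class BookingTransformApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "booking-transformer");  // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");    // placeholder
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Read raw third-party events, apply a (placeholder) transformation,
        // and write the result to the topic the sink connector consumes
        KStream<String, String> raw = builder.stream("raw-bookings");
        raw.mapValues(value -> value.toUpperCase())
           .to("transformed-bookings");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```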
Mar 29, 2022 • 23min

Bridging Frontend and Backend with GraphQL and Apache Kafka ft. Gerard Klijs

What is GraphQL? And how can you combine GraphQL with Apache Kafka® to query data in real time?

With over 10 years of experience as a backend engineer, Gerard Klijs is a Confluent Community Catalyst, a contributor to several GraphQL libraries, and a creator and maintainer of a Rust library for using Confluent Schema Registry. In this episode, he explains why you would want to use Kafka with GraphQL and how the two work together to bridge the gap between backend and frontend, making data more easily accessible in the frontend.

As an alternative to REST, GraphQL is an open source query language developed by Meta, which lets you pull data from multiple data sources via a single API call. GraphQL also lets you migrate and deprecate data easily. For example, if you have a `name` field that you later decide to replace with `firstName` and `lastName`, you can keep the fields side by side and monitor the server for query requests. If there are no further query requests for the deprecated field, it can be removed from the server (a schema sketch of this pattern follows the episode links).

Usually, GraphQL is used in the frontend with a server implemented in Node.js, while Kafka is often used as an integration layer between backend components. When it comes to connecting Kafka with GraphQL, the use cases might not seem as vast at first glance, but Gerard thinks that is due to unfamiliarity and misconceptions about how the two can work together. For example, some may think Kafka is merely a message bus and GraphQL is only for graph databases.

Gerard also talks about the backend for frontend (BFF) pattern as well as tips on working with GraphQL.

EPISODE LINKS
Getting Started with GraphQL and Apache Kafka
Kafka and GraphQL: Misconceptions and Connections
Gerard Klijs GitHub
Watch the video version of this podcast
Kris Jenkins' Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
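The field-deprecation workflow described above can be sketched with GraphQL's built-in `@deprecated` directive. The example below is a hedged illustration using the graphql-java library (not from the episode); the `Customer` type, field values, and query are all made up.

```java
import graphql.ExecutionResult;
import graphql.GraphQL;
import graphql.schema.GraphQLSchema;
import graphql.schema.idl.RuntimeWiring;
import graphql.schema.idl.SchemaGenerator;
import graphql.schema.idl.SchemaParser;
import graphql.schema.idl.TypeDefinitionRegistry;

import java.util.Map;

public class DeprecatedFieldExample {
    public static void main(String[] args) {
        // The old `name` field is kept alongside its replacements and marked deprecated,
        // so clients can migrate while the server watches for remaining usage
        String sdl = """
            type Query { customer: Customer }
            type Customer {
              name: String @deprecated(reason: "Use firstName and lastName")
              firstName: String
              lastName: String
            }
            """;

        TypeDefinitionRegistry registry = new SchemaParser().parse(sdl);
        RuntimeWiring wiring = RuntimeWiring.newRuntimeWiring()
            .type("Query", b -> b.dataFetcher("customer",
                env -> Map.of("name", "Ada Lovelace", "firstName", "Ada", "lastName", "Lovelace")))
            .build();
        GraphQLSchema schema = new SchemaGenerator().makeExecutableSchema(registry, wiring);
        GraphQL graphQL = GraphQL.newGraphQL(schema).build();

        // A client that has migrated asks only for the new fields
        ExecutionResult result = graphQL.execute("{ customer { firstName lastName } }");
        System.out.println(result.getData().toString());
    }
}
```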
Mar 22, 2022 • 43min

Building Real-Time Data Governance at Scale with Apache Kafka ft. Tushar Thole

Data availability, usability, integrity, and security are words we hear a lot. But what do they actually look like when put into practice? That's where data governance comes in. This becomes especially tricky when working with real-time data architectures.

Tushar Thole (Senior Manager, Engineering, Trust & Security, Confluent) focuses on delivering features for software-defined storage, software-defined networking (SD-WAN), security, and cloud-native domains. In this episode, he shares the importance of real-time data governance and the product portfolio his team has been building, Stream Governance, to foster the collaboration and knowledge sharing necessary to become an event-centric business while remaining compliant within an ever-evolving landscape of data regulations.

With the increase of data volume, variety, and velocity, data governance is mandatory for trustworthy, usable, accurate, and accessible data across organizations, especially with distributed data in motion. When it comes to choosing a tool to govern real-time distributed data, there is often a paradox of choice. Some tools are built for handling data at rest, while open source alternatives lack features and are not managed services that integrate natively with the Apache Kafka® ecosystem.

To solve governance use cases by delivering high-quality data assets, Tushar and his team have been taking Confluent Schema Registry, considered the de facto metadata management standard for the ecosystem, to the next level (a minimal schema-aware producer sketch follows the episode links). This approach to governance allows organizations to scale Kafka operations for real-time observability with security and quality.

The fully managed, cloud-native Stream Governance framework is based on three key workflows:
Stream catalog: Search and discover data in a self-service fashion
Stream lineage: Understand complex data relationships with interactive, end-to-end maps of event streams
Stream quality: Deliver trusted, high-quality event streams to the organization

Tushar also shares use cases around data governance and sheds light on the Stream Governance roadmap.

EPISODE LINKS
Stream Governance – How it Works
Data Mess to Data Mesh | Jay Kreps
Demo: Stream Governance
Data Governance for Real Time Data
Watch the video version of this podcast
Kris Jenkins' Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
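Schema-based quality checks of the kind described above start with producers registering and validating schemas through Schema Registry. This is a hedged sketch using the standard Confluent Avro serializer, not anything specific to Stream Governance from the episode; the broker and registry URLs, topic name, and the `OrderPlaced` record are placeholders.

```java
import io.confluent.kafka.serializers.KafkaAvroSerializer;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class GovernedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");                  // placeholder
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", KafkaAvroSerializer.class.getName());
        // The Avro serializer registers/validates schemas against Schema Registry,
        // the foundation that catalog, lineage, and quality features build on
        props.put("schema.registry.url", "http://localhost:8081");         // placeholder

        Schema schema = new Schema.Parser().parse("""
            {"type":"record","name":"OrderPlaced","fields":[
              {"name":"orderId","type":"string"},
              {"name":"amount","type":"double"}]}
            """);
        GenericRecord order = new GenericData.Record(schema);
        order.put("orderId", "o-123");
        order.put("amount", 42.0);

        try (KafkaProducer<String, Object> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "o-123", order));
            producer.flush();
        }
    }
}
```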
Mar 15, 2022 • 42min

Handling 2 Million Apache Kafka Messages Per Second at Honeycomb

How many messages can Apache Kafka® process per second? At Honeycomb, it's easily over one million messages. In this episode, get a taste of how Honeycomb uses Kafka at massive scale. Liz Fong-Jones (Principal Developer Advocate, Honeycomb) explains how Honeycomb manages Kafka-based telemetry ingestion pipelines and scales Kafka clusters.

And what is Honeycomb? Honeycomb is an observability platform that helps you visualize, analyze, and improve cloud application quality and performance. Their data volume has grown by a factor of 10 throughout the pandemic, while the total cost of ownership has only gone up by 20%. But how, you ask?

As a developer advocate for site reliability engineering (SRE) and observability, Liz works alongside the platform engineering team on optimizing infrastructure for reliability and cost. Two years ago, the team was facing the prospect of growing from 20 Kafka brokers to 200 Kafka brokers as data volume increased. The challenge was to scale and shuffle data across that many brokers while maintaining cost efficiency.

The Honeycomb engineering team experimented with using sc1 or st1 EBS hard disks to store the majority of longer-term archives and keep only the latest hours of data on NVMe instance storage. However, this approach to cost reduction was not ideal, and they ended up needing to keep data older than 24 hours on SSD. The team then explored and adopted Zstandard compression to decrease bandwidth and disk size (see the one-line producer setting after the episode links); however, the clusters were still struggling to keep up.

When Confluent Platform 6.0 rolled out Tiered Storage, the team saw it as a feature that could help them break away from being storage bound. Before bringing the feature into production, the team did a proof of concept, which helped them gain confidence as they watched Kafka tolerate broker death and reduce latencies in fetching historical data. Tiered Storage now shrinks their clusters significantly, so they can hold on to local NVMe SSD, and the tiered data is stored only once on Amazon S3 rather than consuming SSD on all replicas. In combination with AWS Im4gn instances, Tiered Storage allows the team to scale for long-term growth. Honeycomb also saved 87% on the cost per megabyte of Kafka throughput by optimizing their Kafka clusters.

EPISODE LINKS
Tiered Storage
Introducing Confluent Platform 6.0
Scaling Kafka at Honeycomb
Watch the video version of this podcast
Kris Jenkins' Twitter
Streaming Audio Playlist
Join the Confluent Community
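Zstandard compression, mentioned above, is a one-line setting on the standard Java producer. A hedged sketch follows; the broker address, topic, and payload are placeholders, not Honeycomb's configuration.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class ZstdProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Zstandard (supported since Kafka 2.1) trades a little CPU for
        // noticeably smaller network and disk footprints
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "zstd");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("telemetry", "{\"span\":\"example\"}"));
            producer.flush();
        }
    }
}
```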
