Confluent Developer ft. Tim Berglund, Adi Polak & Viktor Gamov

Confluent
Nov 9, 2021 • 12min

Confluent Platform 7.0: New Features + Updates

Confluent Platform 7.0 has launched and includes Apache Kafka® 3.0, plus new features introduced by KIP-630: Kafka Raft Snapshot, KIP-745: Connect API to restart connector and task, and KIP-695: Further improve Kafka Streams timestamp synchronization. Reporting from Dubai, Tim Berglund (Senior Director, Developer Advocacy, Confluent) provides a summary of new features, updates, and improvements in the 7.0 release, including the ability to create a real-time bridge from on-premises environments to the cloud with Cluster Linking.

Cluster Linking allows you to create a single cluster link between environments, from Confluent Platform to Confluent Cloud, which is available on public clouds like AWS, Google Cloud, and Microsoft Azure, removing the need for numerous point-to-point connections. Consumers reading from a topic in one environment can read from the same topic in a different environment without the risk of reprocessing or missing critical messages. This gives operators the flexibility to make changes to topic replication smoothly and byte for byte, without data loss. Additionally, Cluster Linking eliminates the need to deploy MirrorMaker2 for replication management while ensuring that offsets are preserved.

Furthermore, the release of Confluent for Kubernetes 2.2 allows you to build your own private cloud with Kafka. It completes the declarative API by adding cloud-native management of connectors, schemas, and cluster links to reduce the operational burden and manual processes so that you can instead focus on high-level declarations. Confluent for Kubernetes 2.2 also enhances elastic scaling through the Shrink API.

As Apache Kafka 3.0 moves toward ZooKeeper's removal, Confluent Platform 7.0 introduces KRaft in preview to make it easier to monitor and scale Kafka clusters to millions of partitions.
There are also several ksqlDB enhancements in this release, including foreign-key table joins and support for two new data types, DATE and TIME, to account for time values that aren't TIMESTAMPs. This allows consistent data ingestion from the source without having to convert data types.

EPISODE LINKS
- Download Confluent Platform 7.0
- Check out the release notes
- Read the Confluent Platform 7.0 blog post
- Watch the video version of this podcast
- Join the Confluent Community
- Learn more with Kafka tutorials, resources, and guides at Confluent Developer
- Live demo: Intro to Event-Driven Microservices with Confluent
- Use PODCAST100 to get an additional $100 of free Confluent Cloud usage

SEASON 2
Hosted by Tim Berglund, Adi Polak, and Viktor Gamov
Produced and edited by Noelle Gallagher, Peter Furia, and Nurie Mohamed
Music by Coastal Kites
Artwork by Phil Vo

🎧 Subscribe to Confluent Developer wherever you listen to podcasts.
▶️ Subscribe on YouTube, and hit the 🔔 to catch new episodes.
👍 If you enjoyed this, please leave us a rating.
🎧 Confluent also has a podcast for tech leaders: "Life Is But A Stream," hosted by our friend Joseph Morais.
Nov 4, 2021 • 36min

Real-Time Stream Processing with Kafka Streams ft. Bill Bejeck

Kafka Streams is a native streaming library for Apache Kafka® that consumes messages from Kafka to perform operations like filtering a topic's messages and producing output back into Kafka. After working as a developer in stream processing, Bill Bejeck (Apache Kafka Committer and Integration Architect, Confluent) has found his calling in sharing knowledge and authoring his book, "Kafka Streams in Action." As a Kafka Streams expert, Bill is also the author of the Kafka Streams 101 course on Confluent Developer, where he delves into what Kafka Streams is, how to use it, and how it works.

Kafka Streams provides an abstraction over plain Kafka consumers and producers, hiding administrative details like the framework code you would otherwise have to write and manage to process streams. Kafka Streams is declarative: you state what you want to do rather than how to do it. Because Kafka Streams leverages the consumer protocol internally, it inherits the consumer's dynamic scaling properties and uses the consumer group protocol to dynamically redistribute the workload. When Kafka Streams applications are deployed separately but share the same application.id, they are logically still one application.

Kafka Streams has two processing APIs. The declarative API, or domain-specific language (DSL), is a high-level language that enables you to build anything expressible as a processor topology, whereas the Processor API lets you specify a processor topology node by node, providing the ultimate flexibility. To underline the differences between the two APIs, Bill says it's almost like using an object-relational mapping framework (ORM) versus SQL.
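To make the declarative style concrete, here is a minimal sketch of a Kafka Streams application using the DSL. The topic names and filtering logic are invented for illustration, and the snippet assumes the kafka-streams library is on the classpath; it is not taken from the episode.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class FilterExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // application.id doubles as the group identity: instances sharing it
        // form one logical application and split the work between them.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-filter");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // Declarative DSL: state *what* to do (filter and forward),
        // not how to poll, commit, or manage consumer/producer lifecycles.
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> orders = builder.stream("orders");
        orders.filter((key, value) -> value.contains("\"priority\":\"high\""))
              .to("high-priority-orders");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

The same logic in the Processor API would be spelled out node by node; the DSL compiles this fluent description into that topology for you.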
The Kafka Streams 101 course is designed to get you started with Kafka Streams and to help you learn the fundamentals of:
- How streams and tables work
- How stateless and stateful operations work
- How to handle time windows and out-of-order data
- How to deploy Kafka Streams

EPISODE LINKS
- Kafka Streams 101 course
- A Guide to Kafka Streams and Its Uses
- Your First Kafka Streams Application
- Kafka Streams 101 meetup
- Watch the video version of this podcast
- Join the Confluent Community
- Learn more with Kafka tutorials, resources, and guides at Confluent Developer
- Live demo: Intro to Event-Driven Microservices with Confluent
Oct 26, 2021 • 30min

Automating Infrastructure as Code with Apache Kafka and Confluent ft. Rosemary Wang

Managing infrastructure as code (IaC) instead of using manual processes makes it easy to scale systems and minimize errors. Rosemary Wang (Developer Advocate, HashiCorp, and author of "Essential Infrastructure as Code: Patterns and Practices") is an infrastructure engineer at heart and an aspiring software developer who is passionate about teaching patterns of infrastructure as code that simplify processes for system admins and software engineers familiar with Python, provisioning tools like Terraform, and cloud service providers.

The definition of infrastructure has expanded to include anything that delivers or deploys applications. Infrastructure as software and infrastructure as configuration, according to Rosemary, are ideas grouped under infrastructure as code: the practice of automating infrastructure changes in a codified manner, which also draws on DevOps practices, including version control, continuous integration, continuous delivery, and continuous deployment. Whether you're using a domain-specific language or a programming language, the practices used to collaborate between you, your team, and your organization are the same: create one application and scale systems.

The ultimate result and benefit of infrastructure as code is automation. Many developers take advantage of managed offerings like Confluent Cloud (fully managed Kafka as a service) to remove the operational burden and configuration layer. Still, as long as complex topologies exist, such as connecting a server on one cloud provider to external databases, there is great value in standardizing infrastructure practices. Rosemary shares four characteristics that every infrastructure system should have:
- Resilience
- Self-service
- Security
- Cost reduction

In addition, Rosemary and Tim discuss updating infrastructure with blue-green deployment techniques, immutable infrastructure, and developer advocacy.
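To make the idea of codified infrastructure concrete, a declarative description of a Kafka topic in Terraform-style HCL might look like the sketch below. The resource type and attribute names follow a community Terraform provider for Kafka as best recalled, and the topic settings are invented; treat this as a hypothetical illustration, not verified configuration.

```hcl
# Hypothetical sketch: resource and attribute names follow a community
# Kafka provider from memory; verify against your provider's docs before use.
resource "kafka_topic" "orders" {
  name               = "orders"
  partitions         = 6
  replication_factor = 3

  config = {
    "cleanup.policy" = "delete"    # delete old segments rather than compact
    "retention.ms"   = "604800000" # keep data for 7 days
  }
}
```

Because the desired state lives in version control, a change to partitions or retention goes through review and an automated plan/apply cycle instead of a manual, error-prone CLI session.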
EPISODE LINKS
- Use PODCAST100 to get $100 of free Confluent Cloud usage (details)
- Use podcon19 to get 40% off "Essential Infrastructure as Code: Patterns and Practices"
- Watch the video version of this podcast
- Join the Confluent Community
- Learn more with Kafka tutorials, resources, and guides at Confluent Developer
- Live demo: Intro to Event-Driven Microservices with Confluent
Oct 19, 2021 • 33min

Getting Started with Spring for Apache Kafka ft. Viktor Gamov

What's the distinction between the Spring Framework and Spring Boot? If you are building a car, the Spring Framework is the engine, while Spring Boot gives you the vehicle that you ride in. With experience teaching and answering questions on how to use Spring and Apache Kafka® together, Viktor Gamov (Principal Developer Advocate, Kong) designed a free course on Confluent Developer and previews it in this episode. He also explains why the opinionated Spring Framework would make a good Marvel hero.

Spring is an ever-evolving framework that embraces modern, cloud-native technologies with cross-language options, such as Kotlin integration. Unlike its predecessors, the Spring Framework supports a modern version of Java and the requirements of the Twelve-Factor App manifesto, letting you move an application between environments without changing the code. With that engine in place, Spring Boot supports a microservices architecture: it provides integrations with databases and messaging systems, reducing development time and increasing overall productivity.

Spring for Apache Kafka applies best practices of the Spring community to the Kafka ecosystem, including features that abstract away infrastructure code so that you can focus on the programming logic that is important for your application. Spring for Apache Kafka provides a wrapper around the producer and consumer to ease Kafka configuration, with APIs including KafkaTemplate, MessageListenerContainer, @KafkaListener, and TopicBuilder.

The Spring Framework and Apache Kafka course will equip you with the knowledge you need in order to build event-driven microservices using Spring and Kafka on Confluent Cloud. Tim and Viktor also discuss Spring Cloud Stream as well as Spring Boot integration with Kafka Streams and more.
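As a rough sketch of what those abstractions buy you: the class and topic names below are invented, and the code assumes a Spring Boot application with the spring-kafka dependency and bootstrap servers configured in application.properties.

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

// Hypothetical component: names are illustrative, not from the episode.
@Component
public class GreetingsBridge {

    private final KafkaTemplate<String, String> template;

    public GreetingsBridge(KafkaTemplate<String, String> template) {
        // KafkaTemplate is auto-configured by Spring Boot from application properties
        this.template = template;
    }

    public void send(String message) {
        // The template wraps a KafkaProducer: no manual serializer or lifecycle code
        template.send("greetings", message);
    }

    // @KafkaListener sets up a MessageListenerContainer around a KafkaConsumer:
    // no explicit poll loop, offset commits, or thread management.
    @KafkaListener(topics = "greetings", groupId = "greetings-logger")
    public void listen(String message) {
        System.out.println("Received: " + message);
    }
}
```

The infrastructure code that plain clients require (poll loops, deserializer setup, container lifecycle) is handled by the framework, leaving only the business logic.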
EPISODE LINKS
- Spring Framework and Apache Kafka course
- Spring for Apache Kafka 101
- Bootiful Stream Processing with Spring and Kafka
- LiveStreams with Viktor Gamov
- Use kafkaa35 to get 30% off "Kafka in Action"
- Watch the video version of this podcast
- Join the Confluent Community
- Learn more with Kafka tutorials, resources, and guides at Confluent Developer
- Live demo: Intro to Event-Driven Microservices with Confluent
Oct 14, 2021 • 39min

Powering Event-Driven Architectures on Microsoft Azure with Confluent

When you order a pizza, what if you knew every step of the process, from the moment it goes in the oven to the moment it's delivered to your doorstep? Event-driven architecture is a modern, data-driven approach built around "events" (i.e., something that just happened), and a real-time data infrastructure enables you to deliver such event-driven insights as they occur. Israel Ekpo (Principal Cloud Solutions Architect, Microsoft Global Partner Solutions, Microsoft) and Alicia Moniz (Cloud Partner Solutions Architect, Confluent) discuss use cases for leveraging Confluent Cloud and Microsoft Azure to power real-time, event-driven architectures.

As an Apache Kafka® community stalwart, Israel focuses on helping customers and independent software vendor (ISV) partners build solutions for the cloud using open source databases and architecture solutions like Kafka, Kubernetes, Apache Flink, MySQL, and PostgreSQL on Microsoft Azure. He has worked with retailers and those in the IoT space to help them adopt processes for inventory management with Confluent. A cloud-native, real-time architecture that keeps an accurate record of supply and demand is important for staying on top of inventory and customer satisfaction. Israel has also worked with customers that use Confluent to integrate with Cosmos DB, Microsoft SQL Server, Azure Cognitive Search, and other services within the Azure ecosystem.

Another important use case is enabling real-time data accessibility in the public sector and healthcare while ensuring data security and regulatory compliance, such as HIPAA. Alicia has a background in AI, and she stresses the importance of moving away from the monolithic, centralized data warehouse to a more flexible and scalable architecture like Kafka. Building a data pipeline on Kafka helps ensure data security and consistency with minimized risk.

The Confluent and Azure integration enables quick Kafka deployment with out-of-the-box solutions within the Kafka ecosystem.
Confluent Schema Registry captures event streams with a consistent data structure, ksqlDB enables the development of real-time ETL pipelines, and Kafka Connect enables the streaming of data to multiple Azure services.

EPISODE LINKS
- Confluent on Azure: Why You Should Add Confluent to Your Azure Toolkit
- IzzyAcademy Kafka on Azure Learning Series by Alicia Moniz
- Watch the video version of this podcast
- Join the Confluent Community
- Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Oct 7, 2021 • 26min

Automating DevOps for Apache Kafka and Confluent ft. Pere Urbón-Bayes

Autonomy is key to building a sustainable and motivated team, and this core principle also applies to DevOps. Building self-serve Apache Kafka® and Confluent Platform deployments requires a streamlined process with unrestricted tools: a centralized processing tool that allows teams in large or mid-sized organizations to automate infrastructure changes while ensuring shared standards are met. With more than 15 years of engineering and technology consulting experience, Pere Urbón-Bayes (Senior Solution Architect, Professional Services, Confluent) built an open source solution, JulieOps, to enable a self-serve Kafka platform as a service with data governance. JulieOps is one of the first solutions available to realize self-service for Kafka and Confluent with automation.

Development, operations, and security teams often face hurdles when deploying Kafka. How can a user request the topics that they need for their applications? How can the operations team ensure compliance and role-based access controls? How can schemas be standardized and structured across environments? Manual processes can be cumbersome, with long cycle times. Automation reduces unnecessary interactions and shortens processing time, enabling teams to be more agile and autonomous in solving problems at the local team level.

Similar to Terraform, JulieOps is declarative. It's a centralized agent that follows the GitOps philosophy, focusing on a developer-centric experience with tools that developers are already familiar with, to provide abstractions to each product persona. All changes are documented and approved within the change management process to streamline deployments with timely and effective audits, as well as to ensure security and compliance across environments. The implementation of a central software agent such as JulieOps helps you automate the management of topics, configuration, access controls, Confluent Schema Registry, and more within Kafka.
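As an illustration of the declarative, GitOps-friendly approach, a JulieOps topology descriptor is roughly shaped like the YAML below. The field names are recalled from the JulieOps documentation and the values are invented, so treat this as a hypothetical sketch rather than a verified schema.

```yaml
# Hypothetical sketch of a JulieOps-style topology descriptor;
# verify field names against the JulieOps documentation.
context: "acme"
projects:
  - name: "payments"
    topics:
      - name: "transactions"
        config:
          replication.factor: "3"
          num.partitions: "6"
```

Stored in Git, a file like this becomes the reviewable, auditable source of truth that the agent reconciles against the cluster, so a developer requests a topic with a pull request rather than a ticket.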
JulieOps is multi-tenant out of the box and supports on-premises clusters and the cloud with CI/CD practices. Tim and Pere also discuss the steps necessary to build a self-service Kafka platform with an automated Jenkins process that will empower development teams to be autonomous.

EPISODE LINKS
- JulieOps on GitHub
- JulieOps documentation
- Building a Self-Service Kafka Platform as a Service with GitOps with Pere Urbón-Bayes
- Open Service Broker API
- Drive | Daniel H. Pink
- Watch the video version of this podcast
- Join the Confluent Community
- Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Sep 28, 2021 • 31min

Intro to Kafka Connect: Core Components and Architecture ft. Robin Moffatt

Kafka Connect is a streaming integration framework between Apache Kafka® and external systems, such as databases and cloud services. With expertise in ksqlDB and Kafka Connect, Robin Moffatt (Staff Developer Advocate, Confluent) helps and supports the developer community in understanding Kafka and its ecosystem. Recently, Robin authored a Kafka Connect 101 course that will help you understand the basic concepts of Kafka Connect, its key features, and how it works.

What is Kafka Connect, and how does it relate to Kafka and the brokers? Robin explains that Kafka Connect is a Kafka API that runs separately from the Kafka brokers, in its own Java virtual machine (JVM) process known as the Kafka Connect worker. Kafka Connect is essential for streaming data from different sources into Kafka and from Kafka to various targets. With Connect, you don't have to write programs in Java; instead, you specify your pipeline using configuration.

As a pluggable framework, Kafka Connect has a broad set of more than 200 different connectors available on Confluent Hub, including but not limited to:
- NoSQL and document stores (Elasticsearch, MongoDB, and Cassandra)
- RDBMS (Oracle, SQL Server, DB2, PostgreSQL, and MySQL)
- Cloud object stores (Amazon S3, Azure Blob Storage, and Google Cloud Storage)
- Message queues (ActiveMQ, IBM MQ, and RabbitMQ)

Robin and Tim also discuss single message transforms (SMTs), as well as the distributed and standalone deployment modes of Kafka Connect. Tune in to learn more about Kafka Connect, and get a preview of the Kafka Connect 101 course.

EPISODE LINKS
- Kafka Connect 101 course
- Kafka Connect Fundamentals: What is Kafka Connect?
- Meetup: From Zero to Hero with Kafka Connect
- Confluent Hub: Discover Kafka connectors and more
- 12 Days of SMTs
- Why Kafka Connect? ft. Robin Moffatt
- Watch the video version of this podcast
- Join the Confluent Community
- Learn more with Kafka tutorials, resources, and guides at Confluent Developer
- Live demo: Intro to Event-Driven Microservices with Confluent
- Use PODCAST100 to get an additional $100 of free Confluent Cloud usage
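The configuration-driven pipeline Robin describes typically takes the form of a JSON document submitted to the Connect worker's REST API. The example below is a hedged sketch based on the widely used JDBC source connector; the database details are invented, and the exact options should be checked against the connector's documentation.

```json
{
  "name": "postgres-orders-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "tasks.max": "1",
    "connection.url": "jdbc:postgresql://localhost:5432/shop",
    "connection.user": "connect",
    "connection.password": "secret",
    "mode": "incrementing",
    "incrementing.column.name": "id",
    "table.whitelist": "orders",
    "topic.prefix": "pg-"
  }
}
```

A connector like this is created by POSTing the document to the worker (for example, to http://localhost:8083/connectors), with no Java code involved; Connect then polls the orders table and streams new rows into the pg-orders topic.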
Sep 23, 2021 • 30min

Designing a Cluster Rollout Management System for Apache Kafka ft. Twesha Modi

As one of the top coders in her high school Java class, Twesha Modi is continuing to follow her passion for computer science as a senior at Cornell University, where she has proven to be one of the top programmers. During Twesha's summer internship at Confluent, she contributed to designing a new service to automate Apache Kafka® cluster rollout management, the process that releases the latest Kafka versions to customers' clusters in Confluent Cloud.

During her internship, Twesha was part of the Platform team, which designed a cluster rollout management service capable of automating cluster rollouts and generating rollout plans that streamline Kafka operations in the cloud. The pre-existing manual process worked well in scenarios involving just a couple hundred clusters, but with growth and the need to upgrade a significantly larger cluster fleet to target versions in the cloud, the process needed to be automated in order to accelerate feature releases while ensuring security. Under the mentorship of Pablo Berton (Staff Software Engineer I, Product Infrastructure, Confluent), Nikhil Bhatia (Principal Engineer I, Product Infrastructure, Confluent), and Vaibhav Desai (Staff Software Engineer I, Confluent), Twesha supported the design of the rollout management process from scratch.

Twesha's 12-week internship helped her learn more about software architecture and the design process that goes into software as a service and beyond. Not only did she acquire new skills and knowledge, but she also met mentors who are willing to teach, share their experiences, and help her succeed along the way. Tim and Twesha also talk about the importance of asking questions during internships for the best learning experience, in addition to discussing Vert.x, Java, Spring, and the Kubernetes APIs.

EPISODE LINKS
- Multi-Cluster Apache Kafka with Cluster Linking ft. Nikhil Bhatia
- Watch the video version of this podcast
- Join the Confluent Community
- Learn more with Kafka tutorials, resources, and guides at Confluent Developer
- Live demo: Intro to Event-Driven Microservices with Confluent
- Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
Sep 21, 2021 • 15min

Apache Kafka 3.0 - Improving KRaft and an Overview of New Features

Apache Kafka® 3.0 is out! To spotlight major enhancements in this release, Tim Berglund (Apache Kafka Developer Advocate) provides a summary of what's new in the Kafka 3.0 release from Krakow, Poland, including API changes and improvements to the early-access Kafka Raft (KRaft). KRaft is a built-in Kafka consensus mechanism that is replacing Apache ZooKeeper going forward. It is recommended to try out new KRaft features in a development environment, as KRaft is not yet advised for production.

One of the major features in Kafka 3.0 is improved efficiency: KRaft controllers and brokers can now store, load, and replicate snapshots of the metadata topic partition in a Kafka cluster. The Kafka controller is now responsible for generating producer IDs in both ZooKeeper and KRaft modes, easing the transition from ZooKeeper to KRaft on the Kafka 3.x version line. This update also moves us closer to the ZooKeeper-to-KRaft bridge release. Additionally, this release includes metadata improvements, exactly-once semantics, and partition reassignments under KRaft.

To enable a stronger record delivery guarantee, Kafka producers now turn on idempotence by default, together with acknowledgment of delivery by all the replicas. This release also comprises enhancements to Kafka Connect task restarts, Kafka Streams timestamp-based synchronization, and more flexible configuration options for MirrorMaker2 (MM2). The first version of MirrorMaker has been deprecated, and MirrorMaker2 will be the focus for future development. Besides that, this release drops support for the older message formats V0 and V1, and begins the deprecation of Java 8 and Scala 2.12 across all components in Apache Kafka.
Java 8 and Scala 2.12 support is expected to be removed entirely in the future Apache Kafka 4.0 release. Apache Kafka 3.0 is a major release and a big step forward for the Apache Kafka project!

EPISODE LINKS
- Apache Kafka 3.0 release notes
- Read the blog to learn more
- Download Apache Kafka 3.0
- Watch the video version of this podcast
- Join the Confluent Community Slack
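The stronger delivery guarantee described above corresponds to two producer settings that become the defaults in Kafka 3.0; on earlier clients you would set them explicitly in the producer configuration:

```properties
# Producer settings that Apache Kafka 3.0 enables by default.
# Deduplicates retried sends so a retry cannot create a duplicate record.
enable.idempotence=true
# Wait for acknowledgment from all in-sync replicas before a send is considered complete.
acks=all
```

Together these ensure that a record is written exactly once to the partition leader and is durably replicated before the producer moves on.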
Sep 14, 2021 • 35min

How to Build a Strong Developer Community with Global Engagement ft. Robin Moffatt and Ale Murray

A developer community brings people with shared interests and purpose together. The fundamental elements of a community are to gather, learn, support, and create opportunities for collaboration. A developer community is also an effective and efficient instrument for exploring and solving problems together, with advantages ranging from knowledge sharing to support, interesting discussions, and much more. Tim Berglund invites Ale Murray (Global Community Manager, Confluent) and Robin Moffatt (Staff Developer Advocate, Confluent) onto the show to discuss the art of Q&A in a global community, share tips for building a vibrant developer community, and highlight the five strategic pillars for running a successful global community:
- Meetups
- Conferences
- MVP program (e.g., Confluent Community Catalysts)
- Community hackathons
- Digital platforms

Digital platforms, such as a community Slack and forum, often consist of members who are well versed in topics of interest. As a leader in the Apache Kafka® and Confluent communities, Robin stresses the importance of being respectful when asking questions and of providing details about the problem at hand. A well-formulated and focused question is more likely to lead to a helpful answer. Oftentimes, the cognitive process of composing the question actually helps iron out the problem and draw out a solution, a phenomenon also known as rubber duck debugging.

In a global community with diverse cultures and languages, being kind and having empathy is crucial. The tone and meaning of words can sometimes get lost in translation. Using emojis can help transcend language barriers by adding another layer of tone to plain text. Ale and Robin also discuss the pros and cons of a community forum vs. a Slack group.
Tune in to find out more tips and best practices for building and engaging a developer community.

EPISODE LINKS
- Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
- How to Ask Good Questions
- Why We Launched a Forum
- Growing the Event Streaming Community During COVID-19 ft. Ale Murray
- Meetup Hub
- Announcing the Confluent Community Forum
- Watch the video version of this podcast
- Join the Confluent Community
- Learn more with Kafka tutorials, resources, and guides at Confluent Developer
