Confluent Developer ft. Tim Berglund, Adi Polak & Viktor Gamov

Confluent
Sep 9, 2021 • 35min

What Is Data Mesh, and How Does it Work? ft. Zhamak Dehghani

The data mesh architectural paradigm shift is all about moving analytical data away from a monolithic data warehouse or data lake into a distributed architecture—allowing data to be shared for analytical purposes in real time, right at the point of origin. The idea of data mesh was introduced by Zhamak Dehghani (Director of Emerging Technologies, Thoughtworks) in 2019. Here, she provides an introduction to data mesh and the fundamental problems it's trying to solve.

Zhamak describes how both the complexity of today's organizations and their ambitions for data have grown. But what is data mesh? For over half a century, we've been trying to democratize data to deliver value and provide better analytical insights. With the ever-growing number of distributed domain data sets, diverse information arrives in increasing volumes and with high velocity. To remove friction and meet the requirement for data to be consumed by operational needs in various use cases, the best way is to mesh the data: connecting data in a peer-to-peer fashion and liberating it for analytics, machine learning, serving data-intensive applications across the organization, and more. Data mesh tackles the deficiencies of the traditional, centralized data lake and data warehouse platform architecture.

The data mesh paradigm is founded on four principles:
- Domain-oriented ownership
- Data as a product
- Data available everywhere in a self-serve data infrastructure
- Data standardization governance

A decentralized, agnostic data architecture enables you to synthesize data and innovate. The starting point is embracing the ideology that data can be anywhere. Source-aligned data should serve as a product available for people across the organization to combine, explore, and drive actionable insights. Zhamak and Tim also discuss the next steps we need to take in order to bring data mesh to life at the industry level.

To learn more about the topic, you can visit the all-new Confluent Developer course: Data Mesh 101. Confluent Developer is a single destination with resources to begin your Kafka journey.

EPISODE LINKS
- Zhamak Dehghani: How to Build the Data Mesh Foundation
- Data Mesh 101
- Practical Data Mesh: Building Decentralized Data Architectures with Event Streams
- Saxo Bank's Best Practices for a Distributed Domain-Driven Architecture Founded on the Data Mesh
Aug 31, 2021 • 31min

Multi-Cluster Apache Kafka with Cluster Linking ft. Nikhil Bhatia

Note: This episode was recorded when Cluster Linking was in preview mode. It's now generally available as part of the Confluent Q3 '21 release on August 17, 2021.

Infrastructure needs to react in real time to support globally distributed events, such as cloud migration, IoT, edge data collection, and disaster recovery. To provide a seamless yet cloud-native, cross-cluster topic replication experience, Nikhil Bhatia (Principal Engineer I, Product Infrastructure, Confluent) and the team engineered a solution called Cluster Linking. Available on Confluent Cloud, Cluster Linking is an API that enables Apache Kafka® to work across multiple datacenters, making it possible to design globally available distributed systems.

As industries adopt multi-cloud usage and depart from on-premises, single-cluster operations, we need to rethink how clusters operate across regions in the cloud. Cluster Linking is built into Confluent Server as an inter-cluster replication layer, allowing you to connect clusters together and replicate topics asynchronously without the need for Connect. Cluster Linking requires zero external components when moving messages from one cluster to another. It replicates data to its destination partition by partition and byte for byte, preserving offsets from the source cluster. Unlike Confluent Replicator and MirrorMaker 2, Cluster Linking simplifies failover in high availability and disaster recovery scenarios and improves overall efficiency by avoiding recompression. Because mirror topics keep source offsets, a consumer can fail over to the destination cluster without offset translation (a minimal sketch follows the episode links below).

As a cost-effective alternative to Multi-Region Clusters, Cluster Linking reduces traffic between datacenters and enables inter-cluster replication without the need to deploy and manage a separate Connect cluster. With a low recovery point objective (RPO) and recovery time objective (RTO), Cluster Linking enables scenarios such as:
- Migration to cloud: Remove the complexity of self-run datacenters with fully managed cloud services.
- Global reads: Let users connect to Kafka from around the globe and consume data locally, for better performance and cost effectiveness.
- Disaster recovery: Prepare your system for fault tolerance against datacenter-, regional-, or cloud-level disasters, ensuring zero data loss and high availability.

Find out more about Cluster Linking architecture and set your data in motion with global Kafka.

EPISODE LINKS
- Announcing the Confluent Q3 '21 Release
- Introducing Cluster Linking in Confluent Platform 6.0
- What is Cluster Linking?
- Resurrecting In-Sync Replicas with Automatic Observer Promotion ft. Anna McDonald
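The sketch below is illustrative, not a Cluster Linking API call: the bootstrap address, mirror topic name ("orders"), and offset are hypothetical, and the mirror topic is assumed to already exist over a cluster link. Because Cluster Linking replicates byte for byte and preserves source offsets, an offset recorded against the source cluster can be reused directly on the destination:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class MirrorTopicFailover {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "destination-cluster:9092"); // hypothetical destination cluster
        props.put("group.id", "failover-demo");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // "orders" is a hypothetical topic mirrored from the source cluster.
            TopicPartition partition = new TopicPartition("orders", 0);
            consumer.assign(List.of(partition));

            // An offset the application last committed against the SOURCE cluster.
            // No offset translation step is needed: mirror topics keep source offsets.
            long lastOffsetOnSource = 42_000L;
            consumer.seek(partition, lastOffsetOnSource);

            consumer.poll(Duration.ofSeconds(5)).forEach(record ->
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value()));
        }
    }
}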
Aug 26, 2021 • 29min

Using Apache Kafka and ksqlDB for Data Replication at Bolt

What does a ride-hailing app that offers micromobility and food delivery services have to do with data in motion? In this episode, Ruslan Gibaiev (Data Architect, Bolt) shares Bolt's road to adopting Apache Kafka® and ksqlDB for stream processing to replicate data from transactional databases to analytical warehouses.

Rome wasn't built in a day, nor was the adoption of Kafka and ksqlDB at Bolt. Initially, Bolt saw the need to standardize its systems and replace an unreliable query-based change data capture (CDC) process. As an experienced Kafka developer, Ruslan believed that Kafka was the right foundation for adopting change data capture as a company-wide event streaming solution. Persuading the team at Bolt to buy in was hard at first, but Ruslan made it happen. Eventually, the team replaced query-based CDC with log-based CDC from Debezium, built on top of Kafka. Shortly after the implementation, developers at Bolt began to see precise, correct, real-time data. (A hypothetical sketch of registering a Debezium connector follows the episode links below.)

As Bolt continues to grow, they see the need for a data lake or data warehouse for OLTP system data replication and stream processing. After carefully considering several solutions and frameworks, such as ksqlDB, Apache Flink®, Apache Spark™, and Kafka Streams, ksqlDB shone brightest for their business requirements. Bolt adopted ksqlDB because it is native to the Kafka ecosystem and a perfect fit for their use case. They found ksqlDB a particularly good fit for replicating all their data to a data warehouse for a number of reasons, including:
- Easy to deploy and manage
- Linearly scalable
- Natively integrates with Confluent Schema Registry

Tune in to find out more about Bolt's adoption journey with Kafka and ksqlDB.

EPISODE LINKS
- Inside ksqlDB Course
- ksqlDB 101 Course
- How Bolt Has Adopted Change Data Capture with Confluent Platform
- Analysing Changes with Debezium and Kafka Streams
- No More Silos: How to Integrate Your Databases with Apache Kafka and CDC
- Change Data Capture with Debezium ft. Gunnar Morling
- Announcing ksqlDB 0.17.0
- Real-Time Data Replication with ksqlDB
- Watch the video version of this podcast
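As a rough illustration of the log-based CDC setup described above (not Bolt's actual configuration), here is a hedged Java sketch that registers a Debezium MySQL connector through the Kafka Connect REST API. The hostnames, credentials, and table list are hypothetical, and the exact config keys vary by Debezium version (for example, older releases use database.server.name instead of topic.prefix):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterDebeziumConnector {
    public static void main(String[] args) throws Exception {
        // Hypothetical connector definition: Debezium reads the MySQL binlog (log-based CDC)
        // and publishes each table change as an event to Kafka.
        String connector = """
            {
              "name": "rides-cdc",
              "config": {
                "connector.class": "io.debezium.connector.mysql.MySqlConnector",
                "database.hostname": "mysql.internal",
                "database.port": "3306",
                "database.user": "cdc_user",
                "database.password": "cdc_password",
                "database.server.id": "5400",
                "topic.prefix": "rides_db",
                "table.include.list": "rides.orders"
              }
            }""";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://connect.internal:8083/connectors")) // Kafka Connect REST endpoint
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(connector))
            .build();

        HttpResponse<String> response =
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}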
Aug 19, 2021 • 29min

Placing Apache Kafka at the Heart of a Data Revolution at Saxo Bank

Monolithic applications present challenges for organizations like Saxo Bank, including difficulty transitioning to the cloud, data inefficiency, and managing data in a regulated environment. Graham Stirling, the head of data platforms at Saxo Bank and a self-proclaimed recovering architect on the pathway to delivery, shares his experience over the last two and a half years as Saxo Bank placed Apache Kafka® at the heart of the company—something they call a data revolution.

Before adopting Kafka, Saxo Bank encountered scalability problems. They relied on a centralized data engineering team, used the database as an integration point, and looked to their data warehouse as the center of the analytical universe. However, this needed to evolve. For a better data strategy, Graham turned his attention to a data mesh architecture:
- Create a self-serve platform that enables domain teams to publish and consume data assets
- Federate ownership of domain data models and centralize oversight to allow a standard language to emerge while ensuring information efficiency
- Believe in the principle of data as a product to improve business decisions and processes

Data mesh was first defined by Zhamak Dehghani in 2019 as a data platform architecture paradigm and has since become an integral part of Saxo Bank's approach to data in motion. Using a combination of Kafka GitOps, pipelines, and metadata, Graham intended to free domain teams from having to think about the mechanics, such as connector deployment, language bindings, style guide adherence, and the handling of personally identifiable information (PII).

To reduce operational complexity, Graham recognized the importance of using Confluent Schema Registry as a serving layer for metadata. Saxo Bank authored schemas with Avro IDL for composability and standardization, and later switched to Protobuf tooling from Buf for strongly typed metadata. A further layer of metadata allows Saxo Bank to define FpML-like coding schemes to specify information classification, reference external standards, and link semantically related concepts. (A hypothetical sketch of registering a schema follows the episode links below.)

By embracing the data mesh operating model, Saxo Bank scales data processing in a way that was previously unimaginable, allowing them to generate value sustainably and use data more efficiently.

Tune in to this episode to learn more about the following:
- Data mesh
- Topic/schema as an API
- Data as a product
- Kafka as a fundamental building block of data strategy

EPISODE LINKS
- Zhamak Dehghani Kafka Summit 2021 Keynote
- Data Mesh 101 Course
- Data Mesh Principles and Logical Architecture
- Saxo Bank's Best Practices for a Distributed Domain-Driven Architecture
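To make "Schema Registry as a serving layer for metadata" concrete, here is a minimal, hypothetical sketch that registers an Avro schema under a subject using Schema Registry's REST API. The URL, subject name, and event type are invented for illustration:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterTradeSchema {
    public static void main(String[] args) throws Exception {
        // A hypothetical domain event schema; real schemas would carry the extra
        // metadata layers (classification, external standards) discussed above.
        String avroSchema = "{\"type\":\"record\",\"name\":\"TradeExecuted\","
            + "\"namespace\":\"com.example.trading\","
            + "\"fields\":[{\"name\":\"tradeId\",\"type\":\"string\"},"
            + "{\"name\":\"notional\",\"type\":\"double\"}]}";

        // Schema Registry expects the schema as a JSON-escaped string field.
        String payload = "{\"schema\": \"" + avroSchema.replace("\"", "\\\"") + "\"}";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://schema-registry.internal:8081/subjects/trades.executed-value/versions"))
            .header("Content-Type", "application/vnd.schemaregistry.v1+json")
            .POST(HttpRequest.BodyPublishers.ofString(payload))
            .build();

        HttpResponse<String> response =
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // e.g. {"id":1}
    }
}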
Aug 11, 2021 • 28min

Advanced Stream Processing with ksqlDB ft. Michael Drogalis

ksqlDB makes it easy to read, write, process, and transform data on Apache Kafka®, the de facto event streaming platform. With simple SQL syntax, pre-built connectors, and materialized views, ksqlDB's powerful stream processing capabilities enable you to quickly start processing real-time data at scale. But how does ksqlDB work? In this episode, Michael Drogalis (Principal Product Manager, Product Management, Confluent) previews an all-new Confluent Developer course, Inside ksqlDB, where he provides a full overview of ksqlDB's internal architecture and delves into advanced ksqlDB features.

When it comes to ksqlDB or Kafka Streams, there's one principle to keep in mind: ksqlDB and Kafka Streams share a runtime. ksqlDB runs its SQL queries by dynamically writing Kafka Streams topologies. (A rough sketch of that correspondence follows the episode links below.) Leveraging Confluent Cloud makes it even easier to use ksqlDB.

Once you are familiar with ksqlDB's basic design, you'll be able to troubleshoot problems and build real-time applications more effectively. The Inside ksqlDB course is designed to help you advance in ksqlDB and Kafka. Paired with hands-on exercises and ready-to-use code, the course covers topics including:
- ksqlDB architecture
- How stateless and stateful operations work
- Streaming joins
- Table-table joins
- Elastic scaling
- High availability

Michael also sheds light on ksqlDB's roadmap:
- Building out the query layer so that it is highly scalable, able to execute thousands of concurrent subscriptions
- Making Confluent Cloud the best place to run ksqlDB and process streams

Tune in to this episode to find out more about the Inside ksqlDB course on Confluent Developer. The all-new website provides diverse and comprehensive resources for developers looking to learn about Kafka and Confluent. You'll find free courses, tutorials, getting started guides, quick starts for 60+ event streaming patterns, and more—all in a single destination.

EPISODE LINKS
- Inside ksqlDB Course
- ksqlDB 101 Course
- How ksqlDB Works: Internal Architecture and Advanced Features
- How Real-Time Stream Processing Safely Scales with ksqlDB, Animated
- How Real-Time Materialized Views Work with ksqlDB, Animated
- How Real-Time Stream Processing Works with ksqlDB, Animated
- Watch the video version of this podcast
- Join the Confluent Community
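To illustrate the shared runtime, here's a rough sketch (stream name, key assumptions, and types are hypothetical) of a ksqlDB-style aggregation alongside the kind of Kafka Streams topology ksqlDB generates for it. This is a hand-written approximation, not ksqlDB's actual generated code:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

public class ClicksPerUserTopology {
    // ksqlDB statement (illustrative):
    //   CREATE TABLE clicks_per_user AS
    //     SELECT user_id, COUNT(*) AS clicks
    //     FROM clickstream GROUP BY user_id EMIT CHANGES;
    //
    // A Kafka Streams equivalent of the topology ksqlDB would compile it into:
    public static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("clickstream", Consumed.with(Serdes.String(), Serdes.String()))
            .groupByKey() // assumes records are keyed by user_id
            .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("clicks_per_user"));
        return builder.build(); // the materialized store backs pull queries, like a ksqlDB table
    }
}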
Aug 5, 2021 • 32min

Minimizing Software Speciation with ksqlDB and Kafka Streams ft. Mitch Seymour

Building a large, stateful Kafka Streams application that tracks the state of each outgoing email is crucial to marketing automation tools like Mailchimp. Joining us in this episode, Mitch Seymour, staff engineer at Mailchimp, shares how ksqlDB and Kafka Streams handle the company's largest source of streaming data.

Almost like a post office, except instead of sending physical parcels, Mailchimp sends billions of emails per day. Monitoring the state of each email provides visibility into the core business function, and it also returns information about the health of both internal and remote message transfer agents (MTAs). Finding a way to track those MTA systems in real time is pivotal to the success of the business.

Mailchimp is an early Apache Kafka® adopter that started using the technology in 2014, before ksqlDB, Kafka Connect, and Kafka Streams came into the picture. The stream processing applications they were building faced many complexities and rough edges. As their use case evolved and scaled over time, a large number of applications deviated from the initial implementation and design, leaving Mailchimp with a family of divergent applications to maintain. To reduce cost and complexity and to standardize their stream processing applications, adopting ksqlDB and Kafka Streams became the solution to their problems. This is what Mitch calls "minimizing software speciation": the idea that applications evolve into multiple distinct systems in response to failure-handling strategies, increased load, and the like. Using different scaling strategies and communication protocols creates system silos that are challenging to maintain.

Replacing the existing architecture that supported point-to-point communication, the new Mailchimp architecture uses Kafka as its foundation with scalable custom functions, such as reusable, highly functional user-defined functions (UDFs). (A hypothetical UDF sketch follows the episode links below.) The reporting capabilities have also evolved from Kafka Streams' interactive queries into enhanced queries with Elasticsearch.

Turning experience into books, Mitch is also the author of O'Reilly's Mastering Kafka Streams and ksqlDB and the author and illustrator of Gently Down the Stream: A Gentle Introduction to Apache Kafka.

EPISODE LINKS
- The Exciting Frontier of Custom KSQL Functions
- Kafka Streams 101 Course
- Mastering Kafka Streams and ksqlDB Ebook
- ksqlDB UDFs and UDAFs Made Easy
- Using Apache Kafka as a Scalable, Event-Driven Backbone for Service Architectures
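For a flavor of the custom-function approach, here is a minimal ksqlDB UDF sketch using the standard ksqlDB annotations. The function itself is invented for illustration, not one of Mailchimp's:

import io.confluent.ksql.function.udf.Udf;
import io.confluent.ksql.function.udf.UdfDescription;

@UdfDescription(name = "normalize_email", description = "Canonicalizes an email address.")
public class NormalizeEmailUdf {

    // Callable from ksqlDB as NORMALIZE_EMAIL(col) once the packaged jar is
    // placed in the directory configured by ksql.extension.dir.
    @Udf(description = "Returns the trimmed, lower-cased form of an email address.")
    public String normalizeEmail(final String email) {
        return email == null ? null : email.trim().toLowerCase();
    }
}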
Jul 27, 2021 • 25min

Collecting Data with a Custom SIEM System Built on Apache Kafka and Kafka Connect ft. Vitalii Rudenskyi

The best-informed business insights that support better decision-making begin with data collection, ahead of data processing and analytics. Enterprises nowadays are engulfed by data floods, with data sources ranging from cloud services and applications to thousands of internal servers. The massive volume of data that organizations must process presents data ingestion challenges for many large companies. In this episode, data security engineer Vitalii Rudenskyi discusses the decision to replace a vendor security information and event management (SIEM) system with a custom solution built on Apache Kafka® and Kafka Connect for a better data collection strategy.

Having a data collection infrastructure layer is mission critical for Vitalii and the team in helping enterprises protect data and detect security events. Built on Kafka, their custom SIEM infrastructure is configurable and designed to ingest and analyze huge amounts of data, including personally identifiable information (PII) and healthcare data.

When it comes to collecting data, there are two fundamental choices: push or pull. But how about both? Vitalii shares that Kafka Connect API extensions are integral to data ingestion in Kafka. Three key components allow their SIEM system to collect and record data daily, by pushing and by pulling (a minimal source task sketch follows the episode links below):
- NettySource connector: A connector developed to receive data from different network devices into Apache Kafka. It receives data over both the TCP and UDP transport protocols and can be adapted to receive anything from Syslog to SNMP and NetFlow.
- PollableAPI connector: A connector made to receive data from remote systems, pulling data from different remote APIs and services.
- Transformations library: Useful extensions to the existing out-of-the-box transformations, taking a "tag and apply" approach that routes collected data to the right place in the right format.

Listen to learn more as Vitalii shares the importance of data collection and the building of a custom solution to address multi-source data management requirements.

EPISODE LINKS
- Feed Your SIEM Smart with Kafka Connect
- To Pull or to Push Your Data with Kafka Connect? That Is the Question.
- Free Kafka Connect 101 Course
- Syslog Source Connector for Confluent Platform
- Join the Confluent Community
- Learn more with Kafka tutorials, resources, and guides at Confluent Developer
- Live demo: Kafka streaming in 10 minutes on Confluent Cloud
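As a hedged sketch of the pull side (modeled loosely on the PollableAPI idea above, not Vitalii's actual code), here is a minimal Kafka Connect source task; the config key, topic name, and fetch stub are hypothetical:

import java.util.List;
import java.util.Map;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public class PollableApiSourceTask extends SourceTask {
    private String endpoint;

    @Override
    public void start(Map<String, String> props) {
        endpoint = props.get("api.endpoint"); // hypothetical connector config key
    }

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        String event = fetchFromRemoteApi(); // stand-in for the actual HTTP pull
        if (event == null) {
            Thread.sleep(1_000); // back off when the remote API has nothing new
            return List.of();
        }
        // The partition/offset maps let Connect resume where this task left off.
        Map<String, String> sourcePartition = Map.of("endpoint", endpoint);
        Map<String, Long> sourceOffset = Map.of("position", System.currentTimeMillis());
        return List.of(new SourceRecord(
            sourcePartition, sourceOffset, "siem-events", Schema.STRING_SCHEMA, event));
    }

    @Override
    public void stop() {}

    @Override
    public String version() {
        return "0.1.0";
    }

    private String fetchFromRemoteApi() {
        return null; // stub: a real task would call the remote service here
    }
}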
Jul 22, 2021 • 29min

Consistent, Complete Distributed Stream Processing ft. Guozhang Wang

Stream processing has become an important part of the big data landscape as a new programming paradigm for implementing real-time, data-driven applications. One of the biggest challenges for streaming systems is providing correctness guarantees for data processing in a distributed environment. Guozhang Wang (Distributed Systems Engineer, Confluent) co-authored a paper with other leaders in the Apache Kafka® community on consistency and completeness in stream processing with Apache Kafka, shedding light on what a reimagined, modern infrastructure looks like.

In his white paper, Consistency and Completeness: Rethinking Distributed Stream Processing in Apache Kafka, Guozhang covers the following topics:
- Streaming correctness challenges
- Stream processing with Kafka
- Exactly-once in Kafka Streams

For context, accurate, real-time stream processing suits modern organizations composed of vertically separated engineering teams. In the past, stream processing was considered an auxiliary system to batch-oriented processing, often bearing issues around consistency and completeness. Modern streaming engines, such as ksqlDB and Kafka Streams, are instead designed to be authoritative, serving as the source of truth rather than an approximation, by providing strong correctness guarantees. Those guarantees fall under two major umbrellas:
- Consistency: Ensuring unique and extant records (no duplicated or lost results), also referred to as exactly-once semantics; a minimal configuration sketch follows the episode links below
- Completeness: Ensuring the correct handling of the order of records

Guozhang also answers the question of why he wrote this academic paper, as he believes in the importance of knowledge sharing across the community and bringing industry experience back to academia (the paper is also published in SIGMOD 2021, one of the most important conference proceedings in the data management research area). This will help foster the next generation of industry innovation and push the data streaming and data management industry one step forward. In Guozhang's own words, "Academic papers provide you this proof of concept design, which gets groomed into a big system."

EPISODE LINKS
- White Paper: Rethinking Distributed Stream Processing in Apache Kafka
- Blog: Rethinking Distributed Stream Processing in Apache Kafka
- Enabling Exactly-Once in Kafka Streams
- Why Kafka Streams Does Not Use Watermarks ft. Matthias Sax
- Streams and Tables: Two Sides of the Same Coin
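As a minimal sketch of the exactly-once setting discussed above (the application ID and broker address are hypothetical), Kafka Streams enables its transactional, exactly-once processing mode with a single configuration value: exactly_once_v2 in recent Kafka versions, exactly_once in older releases:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class ExactlyOnceConfig {
    public static Properties streamsProps() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "payments-processor"); // hypothetical
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");     // hypothetical
        // Consume, update state, and produce atomically via Kafka transactions,
        // so records are neither lost nor duplicated on failure.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
        return props;
    }
}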
Jul 15, 2021 • 26min

Powering Real-Time Analytics with Apache Kafka and Rockset

Using large amounts of streaming data increasingly requires interactive, real-time analytics and dashboards—and this applies to any industry, including tech. Dhruba Borthakur, CTO and co-founder of Rockset, shares how his company uses Apache Kafka® to perform complex joins, search, and aggregations on streaming data with low latencies. The Kafka database integrations allow his team to make a cloud-native analytics database that is a fundamental piece of enterprise infrastructure.

Apps in e-commerce, logistics, and manufacturing typically receive over 20 million events a day. As those events roll in, it becomes ever more critical that real-time indexes can be queried with low latency. This way, you can build high-performing and scalable dashboards that let your organization use clickstream and behavioral data to inform decisions and responses to consumer behavior. Typically, the data follows these steps (a minimal ingest sketch follows the episode links below):
1. Events come in from mobile or web apps, such as clickstream or IoT data
2. The app data is sent to the cloud
3. Data is fed into the database in real time
4. This information is shared live on a dashboard or via SaaS application embeds

Real-time analytics and the real-time database underneath need to stay continuously in sync for optimal performance. If the latency is too high, you can miss the opportunity to interact with customers on your platform. You may want to write queries that join streaming data against transactional data or historical data lakes, even for complex analytics. You always want to make sure that the database performs at a speed and scale appropriate for customers to have a seamless experience.

Using Rockset, you can write ANSI SQL on semi-structured and schemaless data, achieving those complex joins with low latencies. When streaming data needs to be supplemented with other sources, supported integrations make that straightforward. With a database solution that integrates easily and provides the correct data, you can make better decisions and maximize results.

EPISODE LINKS
- Real-Time Analytics and Monitoring Dashboards with Apache Kafka and Rockset
- Watch the video version of this podcast
- Join the Confluent Community
- Learn more with Kafka tutorials, resources, and guides at Confluent Developer
- Live demo: Kafka streaming in 10 minutes on Confluent Cloud
- Use 60PDCAST to get an additional $60 of free Confluent Cloud usage (details)
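Here is a minimal sketch of steps 1 and 2 of that pipeline (the topic name, key, and event payload are invented): a web app producing a clickstream event to Kafka, keyed by user so each user's clicks stay ordered on one partition:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ClickstreamProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9092"); // hypothetical cluster
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // One clickstream event; a downstream analytics database (e.g., Rockset)
            // would index this in real time for low-latency dashboard queries.
            String event = "{\"userId\":\"u-123\",\"page\":\"/checkout\",\"ts\":1626300000000}";
            producer.send(new ProducerRecord<>("clickstream", "u-123", event));
        }
    }
}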
Jul 8, 2021 • 30min

Automated Event-Driven Architectures and Microservices with Apache Kafka and SmartBear

Is it possible to automate the adoption of your event-driven architectures and microservices? The answer is yes! Alianna Inzana, product leader for API testing and virtualization at SmartBear, uses an evolutionary model to make event services reusable, functional, and strategic for both in-house needs and clients. SmartBear relies on Apache Kafka® to drive its automated microservices solutions forward through scaled architecture and adaptive workflows.

Although the path to adoption may differ across use cases and client requirements, it is all about maturity and API lifecycle management. As your services mature and grow, so should your event streaming architecture. The data your organization collects is no longer in a silo—it has to be accessible across several events. The best architecture can handle these fluctuations.

Alianna explains that although the result of these requirements is an architectural pattern, it doesn't start that way. Instead, these automation processes begin as coding patterns on isolated platforms. You cannot rush development at the coding stage, because you never truly know how the code will behave in the end system. Testing must be done at each step of the implementation to ensure that the event-driven architecture works for each step and variation of the service, and the code is altered as needed throughout the trial phase.

Next, the product and development teams compare the architecture you currently have to where you'd like it to be. It is all about the product and how you'd like to scale it. The tricky part comes in the trial and error of bringing on each product and service one by one. However, once your offerings and architecture are synced, you save the time and effort of building something new for every microservice.

With event-driven architectures, you can minimize duplicate effort and adapt your business offerings as the need arises—a strategic step for any organization looking to have a cohesive product offering. Architecture automation allows for flexible features that scale with your event services. Once these are in place, a company can use and grow them as needed. With innovative and adaptable event-driven architectures, organizations can grow with clients and scale the backend system as required.

EPISODE LINKS
- Exploring Event-Driven Architectures: Why Quality Matters
- Apache Kafka + Event-Driven Architecture Support in ReadyAPI
- Watch the video version of this podcast
