The InfoQ Podcast

InfoQ
Mar 13, 2020 • 31min

Gareth Rushgrove on Kubernetes as a Platform, Applications, and Security

In this podcast, Daniel Bryant sat down with Gareth Rushgrove, Director of Product Management at Snyk. Topics covered included Kubernetes as a platform, application abstractions, continuous delivery, and implementing good security practices in the cloud native space.

Why listen to this podcast:

- The value provided by Kubernetes depends on an organisation’s context. Kubernetes acts both as a series of lower-level building blocks for a platform, and as a very powerful API for deploying and operating container-based applications.
- Kubernetes provides several useful abstractions for engineers, for example Pods, Deployments, and Services. However, Kubernetes doesn’t have an “application”-focused abstraction. Tools such as Helm and specifications like the Cloud Native Application Bundle (CNAB) are driving innovation in this space.
- There is a large amount of open source Kubernetes tooling, created by a range of vendors, groups, and individuals. Encouraging this diverse mix of participation is beneficial for the long-term health of the ecosystem.
- The Cloud Native Computing Foundation (CNCF) provides a space for people to collaborate regardless of their current organisational affiliations.
- Defining appropriate standards within the cloud native space is useful for enabling interoperability and providing common foundations for others to innovate on top of.
- Security challenges within IT are socio-technical. Security teams working with cloud native technologies will benefit from continual learning, developing new skills, and researching new tools. For example, the defaults of Kubernetes aren’t necessarily secure, but this can be readily addressed with appropriate configuration.

More on this: Quick scan our curated show notes on InfoQ: https://bit.ly/38PLPFb
You can also subscribe to the InfoQ newsletter to receive weekly updates on the hottest topics from professional software development: bit.ly/24x3IVq
Subscribe: www.youtube.com/infoq
Like InfoQ on Facebook: bit.ly/2jmlyG8
Follow on Twitter: twitter.com/InfoQ
Follow on LinkedIn: www.linkedin.com/company/infoq
Check the landing page on InfoQ: https://bit.ly/38PLPFb
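The layering of abstractions discussed above, and the missing “application” abstraction, can be sketched as plain manifest-building functions. This is a toy illustration in Python (all function names and values here are illustrative, not any real client library):

```python
# Toy builders for two core Kubernetes abstractions: a Deployment manages
# replicated Pods, and a Service exposes them on the network.

def deployment(name, image, replicas=1):
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "template": {"spec": {"containers": [{"name": name, "image": image}]}},
        },
    }

def service(name, port):
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {"ports": [{"port": port}], "selector": {"app": name}},
    }

# Kubernetes itself has no "application" kind; tools like Helm and specs like
# CNAB group related manifests into one installable unit, roughly like this:
app = {"name": "shop", "manifests": [deployment("shop", "shop:1.0", 3), service("shop", 80)]}
```

The point of the sketch: each builder returns a valid-looking object of one Kubernetes kind, but the grouping at the bottom is an ad-hoc convention, which is exactly the gap Helm and CNAB address.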
Mar 9, 2020 • 35min

Luca Mezzalira on Micro Frontends at DAZN

Why listen to this podcast:

- Micro frontends are an approach to developing frontends that attempts to bring some of the benefits of microservices to frontend development.
- Micro frontends can be developed with different technologies, with ownership of individual components on a single view. However, DAZN took a vertical approach to building them: each micro frontend is loaded into an app shell that offers an API for cross-cutting concerns, and only one micro frontend is loaded at a time into the app shell.
- The “Inverse Conway Maneuver” recommends evolving your team and organizational structure to create the architecture you want.
- DAZN derisks deployments by using canaries implemented with Lambda@Edge on CloudFront. For code deployments, each of the micro frontends can be introduced with a limited scope and then expanded once proven stable.

More on this: Quick scan our curated show notes on InfoQ: https://bit.ly/38BQAC0
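The canary approach described above boils down to deterministically bucketing each user so they consistently see one version of a micro frontend. A minimal sketch of that idea (illustrative only, not DAZN’s actual Lambda@Edge code; bundle names are hypothetical):

```python
import hashlib

def choose_bundle(user_id: str, canary_percent: int,
                  stable: str = "catalog-v1.js", canary: str = "catalog-v2.js") -> str:
    """Pick which micro frontend bundle to serve for this user.

    Hashing the user id into a stable 0-99 bucket means each user keeps
    seeing the same version across requests, while the canary audience can
    be widened by raising canary_percent once the new bundle proves stable.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return canary if bucket < canary_percent else stable
```

Starting with a small `canary_percent` gives the “limited scope” rollout mentioned above; raising it towards 100 expands the new micro frontend to everyone.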
Mar 2, 2020 • 34min

Zhamak Dehghani on Data Mesh, Domain-Oriented Data, and Building Data Platforms

In this podcast, Daniel Bryant sat down with Zhamak Dehghani, principal consultant, member of technical advisory board, and portfolio director at ThoughtWorks. Topics discussed included: the motivations for becoming a data-driven organization; the challenges of adapting legacy data platforms and ETL jobs; and how to design and build the next generation of data platforms using ideas from domain-driven design and product thinking, and modern platform principles such as self-service workflows.

Why listen to this podcast:

- Becoming a data-driven organization remains one of the top strategic goals of many organizations. Being able to rapidly run experiments and efficiently analyse the resulting data can provide a competitive advantage.
- There are several “architecture failure modes” within existing enterprise data platforms. They are centralized and monolithic. The composition of data pipelines is often highly coupled, meaning that a change to the data format will require a cascade of changes throughout the pipeline. And finally, the ownership of data platforms is often siloed and hyper-specialized.
- The next generation of enterprise data platform architecture requires a paradigm shift towards ubiquitous data with a distributed data mesh. Instead of flowing the data from domains into a centrally owned data lake or platform, domains need to host and serve their domain datasets in an easily consumable way.
- Domain data teams must apply product thinking to the datasets that they provide, considering their data assets as their products, and the rest of the organization’s data scientists, ML engineers, and data engineers as their customers. The key to building the data infrastructure as a platform is (a) to not include any domain-specific concepts or business logic, keeping it domain agnostic, and (b) to make sure the platform hides all the underlying complexity and provides the data infrastructure components in a self-service manner.

More on this: Quick scan our curated show notes on InfoQ: https://bit.ly/39exTWl
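The “data as a product” idea above can be made concrete with a toy catalog in which each domain team publishes and owns its datasets, and consumers discover them self-service. All class, domain, and dataset names here are hypothetical:

```python
class DataProductCatalog:
    """Toy registry: domains publish datasets as products; consumers discover them."""

    def __init__(self):
        self._products = {}

    def publish(self, domain, dataset, owner, schema):
        # The owning domain team, not a central data team, registers and
        # documents its dataset, treating downstream consumers as customers.
        self._products[(domain, dataset)] = {"owner": owner, "schema": schema}

    def discover(self, domain=None):
        # Self-service discovery: list datasets, optionally for one domain.
        return sorted(d for (dom, d) in self._products if domain in (None, dom))

catalog = DataProductCatalog()
catalog.publish("orders", "daily_order_totals", owner="orders-team",
                schema={"date": "DATE", "total": "DECIMAL"})
```

The sketch captures the two platform principles from the bullet above: the catalog itself is domain agnostic (it holds no business logic), and publishing/discovery are self-service operations for the domain teams.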
Feb 21, 2020 • 23min

Brittany Postnikoff on Security, Privacy, and Social Engineering with Robots

In this podcast, Daniel Bryant sat down with Brittany Postnikoff, a computer systems analyst specialising in robotics, embedded systems, and human-robot interaction. Topics discussed included: the rise of robotics and human-robot interaction within modern life, the security and privacy risks of robots used within this context, and the potential for robots to be used to socially engineer people.

Why listen to this podcast:

- Physical robots are becoming increasingly common in everyday life, for example, offering directions in airports, cleaning the floor in peoples’ homes, and acting as toys for children.
- People often imbue these robots with human qualities, and they trust the authority granted to a robot.
- Social engineering can involve the psychological manipulation of people into performing actions or divulging confidential information. This can be stereotyped by the traditional “con”.
- As people are interacting with robots in a more human-like way, this means that robots can be used for social engineering.
- A key takeaway for creators of robots and the associated software is the need to develop a deeper awareness of security and privacy issues.
- Software included within robots should be patched to the latest version, and any data that is being stored or transmitted should be encrypted.
- Creators should also take care when thinking about the human-robot UX, and explore the potential for unintended consequences if the robot is co-opted into doing bad things.

More on this: Quick scan our curated show notes on InfoQ: https://bit.ly/2v5QTav
Feb 7, 2020 • 27min

Anurag Goel on Cloud Native Platforms, Developer Experience, and Scaling Kubernetes

Anurag Goel, Founder and CEO of Render, shares insights into revolutionizing cloud platforms. He discusses the evolution of developer experiences and how Render simplifies Kubernetes deployments with user-friendly configurations. The conversation highlights the need for reduced complexity in cloud management, moving away from traditional DevOps demands. Anurag also touches on zero-downtime deployment practices and the significance of observability in cloud-native tech, all while navigating competition against established providers.
Jan 31, 2020 • 31min

Greg Law on Debugging, Record & Replay of Data, and Hyper-Observability

In this podcast, Daniel Bryant sat down with Greg Law, CTO at Undo. Topics discussed included: the challenges with debugging modern software systems; the need for “hyper-observability” and the benefit of being able to record and replay exact application execution; and the challenges with implementing the capture of nondeterministic system data in Undo’s LiveRecorder product for JVM-based languages that are Just-In-Time (JIT) compiled.

Why listen to this podcast:

- Understanding modern software systems can be very challenging, especially when the system is not doing what is expected. When debugging an issue, being able to observe a system and look at logging output is valuable, but it doesn’t always provide all of the information a developer needs. Instead, we may need “hyper-observability”: the ability to “zoom into” bugs and replay an exact execution.
- Being able to record all nondeterministic stimuli to an application -- such as user input, network traffic, interprocess signals, and threading operations -- allows for the replay of an exact execution of an application for debugging purposes. Execution can be paused, rewound, and replayed, and additional logging data can be added ad hoc.
- Undo’s LiveRecorder allows for the capture of this nondeterministic data, which can be exported and shared among development teams. The UndoDB debugger, which is based on the GNU Project Debugger, supports the loading of this data and the debugging of the application in both forwards and reverse execution. There is also support for other debuggers, such as that included within IntelliJ IDEA.
- Advanced techniques like multi-process correlation reveal the order in which processes and threads alter data structures in shared memory, and thread fuzzing randomizes thread execution to reveal race conditions and other multi-threading defects.
- The challenge of using this type of technology when debugging (micro)service-based applications lies within the user experience, i.e. how should the multiple-process debugging experience be presented to a developer? LiveRecorder currently supports C/C++, Go, Rust, and Ada applications on Linux x86 and x86_64, with Java support available in alpha. Supporting the capture and replay of data associated with JVM language execution, which adds extra abstractions and is often Just-In-Time (JIT) compiled, presented extra challenges.

More on this: Quick scan our curated show notes on InfoQ: https://bit.ly/37XLUa0
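The record-and-replay idea is easiest to see in miniature: capture every nondeterministic input at the boundary on the first run, then feed the recorded values back to reproduce the exact same execution. This is a toy sketch of the technique only, not how LiveRecorder is implemented:

```python
import random

class Recorder:
    """Record nondeterministic inputs on a live run; replay them exactly later."""

    def __init__(self, log=None):
        self.replaying = log is not None
        self.log = list(log) if log is not None else []
        self._pos = 0

    def capture(self, produce):
        if self.replaying:
            # Replay: return the previously recorded value instead of
            # consulting the nondeterministic source again.
            value = self.log[self._pos]
            self._pos += 1
        else:
            # Live run: obtain the value and append it to the log.
            value = produce()
            self.log.append(value)
        return value

def buggy_workload(rec):
    # random stands in for any nondeterministic stimulus: user input,
    # network traffic, interprocess signals, or thread scheduling.
    values = [rec.capture(lambda: random.randint(0, 10**9)) for _ in range(5)]
    return sum(values)

live = Recorder()
first = buggy_workload(live)                    # real run, inputs captured
replayed = buggy_workload(Recorder(live.log))   # exact same execution again
```

Because the replayed run sees identical inputs, a developer can step through it repeatedly, add logging ad hoc, and the behaviour never changes between attempts, which is the core value proposition described above.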
Jan 24, 2020 • 39min

Idit Levine Discussing Gloo, Service Mesh Interface, and Web Assembly Hub

Today on The InfoQ Podcast, Wes Reisz speaks with Idit Levine, CEO and founder of Solo.io. The two discuss the three pillars of Solo.io’s work: Gloo, their API gateway; interoperability of service meshes (including the work on the Service Mesh Interface); and extending Envoy with WebAssembly (and the recently announced WebAssembly Hub).

Why listen to this podcast:

- Gloo is a Kubernetes-native ingress controller and API gateway. It’s built on top of Envoy and at its core is open source.
- The Service Mesh Interface (SMI) is a specification for service meshes that run on Kubernetes. It defines a common standard that can be implemented by a variety of providers. The idea of SMI is that it’s an abstraction on top of service meshes, so that you can use one language to configure them all.
- Autopilot is an open-source Kubernetes operator that allows developers to extend a service mesh control plane.
- Lua has commonly been used to extend the service mesh data plane. Led by Google and the Envoy community, WebAssembly is becoming the preferred way of extending the data plane. WebAssembly allows you to write Envoy extensions in any language while still being sandboxed and performant.
- WebAssembly Hub is a service for building, deploying, sharing, and discovering Wasm extensions for Envoy.
- Wasme is an open-source, Docker-like command-line tool from Solo.io that simplifies building, pushing, pulling, and deploying Envoy WebAssembly filters.

More on this: Quick scan our curated show notes on InfoQ: https://bit.ly/37sYIoE
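SMI’s value as “one language to configure them all” is easiest to see with weighted traffic splitting: one provider-neutral description that each conforming mesh translates into its own routing configuration. A sketch of the idea, loosely modelled on SMI’s TrafficSplit resource (field and service names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class TrafficSplit:
    """Provider-neutral description of weighted routing between backends."""
    service: str
    backends: dict  # backend service name -> relative weight

def normalised_weights(split: TrafficSplit) -> dict:
    # Any conforming mesh implementation would translate these relative
    # weights into its own native routing rules; the user-facing
    # description stays the same across providers.
    total = sum(split.backends.values())
    return {name: weight / total for name, weight in split.backends.items()}

split = TrafficSplit("checkout", {"checkout-v1": 90, "checkout-v2": 10})
```

The abstraction is the point: the `TrafficSplit` says *what* traffic distribution is wanted, and each provider decides *how* to realise it in its data plane.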
Jan 17, 2020 • 29min

Gunnar Morling on Change Data Capture and Debezium

Today, on The InfoQ Podcast, Wes Reisz talks with Gunnar Morling. Gunnar is a software engineer at Red Hat and leads the Debezium project. Debezium is an open-source distributed platform for change data capture (CDC). On the show, the two discuss the project and many of its use cases. Additionally, topics covered on the podcast include bootstrapping, configuration, challenges, debugging, and operational modes. The show wraps with long-term strategic goals for the project.

Why listen to this podcast:

- CDC is a set of software design patterns used to react to changing data in a data store. Used for things like internal changelogs, integrations, replication, and event streaming, CDC can be implemented by leveraging queries or the DB transaction log. Debezium leverages the transaction log to implement CDC and is extremely performant.
- Debezium has mature source and sink connectors for MySQL, SQL Server, and MongoDB. In addition, there are incubating connectors for Cassandra, Oracle, and DB2. Community sink connectors have been created for ElasticSearch.
- In a standard deployment, Debezium leverages a Kafka cluster by deploying connectors into Kafka Connect. The connectors establish a connection to the source database and then write changes to a Kafka topic.
- Debezium can be run in embedded mode. Embedded mode embeds Debezium as a Java library in your own project and leverages callbacks for change events. The library approach allows Debezium implementations against other tools like AWS Kinesis or Azure’s Event Hub. Going forward, there are plans to make a ready-made Debezium runtime.
- Out of the box, Debezium has a one-to-one mapping between tables and Kafka topics. This default approach exposes the internal table structure to the outside. One approach to address exposing DB internals is to leverage the Outbox Pattern. The Outbox Pattern uses a separate outbox table as a source. Inserts into your normal business logic tables also make writes to the outbox, and change events are then published to Kafka from the outbox source table.

More on this: Quick scan our curated show notes on InfoQ: https://bit.ly/3737GZB
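The outbox pattern described above can be sketched with SQLite: the business write and the event write share one transaction, and a CDC tool such as Debezium would then pick the events up from the outbox table’s change log. This is a toy sketch of the pattern only; table and column names are illustrative:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("""CREATE TABLE outbox (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    aggregate TEXT, type TEXT, payload TEXT)""")

def place_order(order_id, total):
    # The business write and the event write share one transaction, so an
    # event is recorded if and only if the order is.
    with conn:
        conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, total))
        conn.execute(
            "INSERT INTO outbox (aggregate, type, payload) VALUES (?, ?, ?)",
            ("order", "OrderPlaced", json.dumps({"id": order_id, "total": total})),
        )

def pending_events():
    # In a real deployment, a log-based CDC connector would tail the outbox
    # table's transaction log and publish each row to Kafka; this sketch
    # simply reads the table to show what would be published.
    return [
        {"type": t, "payload": json.loads(p)}
        for t, p in conn.execute("SELECT type, payload FROM outbox ORDER BY id")
    ]
```

Consumers only ever see the outbox rows, never the internal `orders` schema, which is exactly how the pattern avoids exposing DB internals through the default table-to-topic mapping.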
Jan 10, 2020 • 26min

Kelsey Hightower on Extending Kubernetes, Event-Driven Architecture, and Learning

In this podcast, Daniel Bryant sat down with Kelsey Hightower, Staff Developer Advocate at Google. Topics covered included: the extensibility of Kubernetes, and why it has become the platform that other platforms are being built on top of; creating event-driven architectures and deploying these onto Function-as-a-Service (FaaS) platforms like the Kubernetes-based Knative and Google Cloud Run; and the benefits of learning, sharing knowledge, and building communities.

Why listen to this podcast:

- Kubernetes is a platform for building platforms. It may not be as opinionated as traditional Platform-as-a-Service (PaaS) offerings, but it has become popular due to its extensibility. There are PaaS-like solutions built on top of Kubernetes, such as OpenShift, Knative, and Cloud Run.
- The creation of common interfaces within Kubernetes -- such as Custom Resource Definitions (CRDs), the Container Networking Interface (CNI), and the Container Runtime Interface (CRI) -- enabled the adoption of the platform by vendors and the open source community without everyone needing to agree on exactly how to implement extensions.
- Although not every workload can be effectively implemented using an event-driven architecture, for those that can, Kubernetes-based Function-as-a-Service (FaaS) platforms like Knative and Cloud Run can handle a lot of the operational management tasks for developers.
- Engineers may be able to get ~90% of the “service mesh” traffic management functionality they need from using a simple proxy. However, the separation of the control and data planes within modern service meshes, in combination with the rise in popularity of the sidecar deployment model, has provided many benefits within Kubernetes.
- A lot of learning within software development and information technology is transferable. If you spend time going deep in a technology when you begin your career, much of what you learn will be useful when you come to learn the next technology.

More on this: Quick scan our curated show notes on InfoQ: https://bit.ly/30alHC1
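The “common interfaces” point is the heart of Kubernetes extensibility: the core stays generic, and behaviour for new resource kinds is plugged in by extensions. A toy sketch of that registration-and-dispatch pattern (purely illustrative, not the real controller machinery; all names are hypothetical):

```python
handlers = {}

def register(kind):
    """Register a reconcile function for a custom resource kind."""
    def wrap(fn):
        handlers[kind] = fn
        return fn
    return wrap

def reconcile(resource):
    # The generic core dispatches on "kind" without knowing what any
    # particular extension does -- the essence of the CRD model.
    return handlers[resource["kind"]](resource)

@register("Function")
def deploy_function(resource):
    # e.g. a Knative-style extension that turns a Function resource
    # into a running, scale-to-zero workload.
    return f"deployed {resource['metadata']['name']}"
```

Because vendors and the community only have to agree on the dispatch contract, not on each other’s implementations, many independent platforms can be built on the same core, which is why Kubernetes became “the platform for building platforms”.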
Jan 3, 2020 • 31min

Katie Gamanji on Condé Nast’s Kubernetes Platform, Self-Service, and the Federation and Cluster APIs

In this podcast, Daniel Bryant sat down with Katie Gamanji, Cloud Platform Engineer at Condé Nast International. Topics covered included: exploring the architecture of the Condé Nast Kubernetes-based platform; the importance of enabling self-service deployment for developers; and how the Kubernetes Federation API and Cluster API may enable more opportunities for platform automation.

Why listen to this podcast:

- Founded in the early 1900s, Condé Nast is a global media company that has recently migrated its application deployment platforms from individually curated, geographically based platforms to a standardised distributed platform based on Kubernetes and AWS.
- The Condé Nast engineering team creates and manages its own Kubernetes clusters, currently using CoreOS’s/Red Hat’s Tectonic tool. Self-service deployment of applications is managed via Helm charts.
- The platform team works closely with its “customer” developer teams in order to ensure their requirements are being met.
- The Kubernetes Federation API makes it easy to orchestrate the deployment of applications to multiple clusters. This works well for cookie-cutter style deployments that only require small configuration differences, such as scaling the number of running applications based on geographic traffic patterns.
- The Cluster API is a Kubernetes project to bring declarative APIs to cluster creation, configuration, and management. This enables more effective automation for cluster lifecycle management, and may provide more opportunities for multi-cloud Kubernetes use.
- Kubernetes Ingress on the Condé Nast platform is handled by Traefik, due to the good Helm support and cloud integration (for example, AWS Route 53 and IAM rule synchronization). The platform team is exploring the use of a service mesh for 2020.
- Abstractions, interfaces, and security will be interesting focal points for improvement in the Kubernetes ecosystem in 2020.

More on this: Quick scan our curated show notes on InfoQ: https://bit.ly/2FeYPrE
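The “cookie-cutter” federation use case described above amounts to stamping out one base manifest per cluster with small overrides. A minimal sketch of the idea (illustrative only, not the actual Federation API; cluster and field names are hypothetical):

```python
def render_for_clusters(base, overrides_by_cluster):
    """Produce one manifest per cluster from a shared template.

    Only small per-cluster differences are applied -- e.g. replica counts
    scaled to regional traffic -- which is the sweet spot that
    cookie-cutter multi-cluster deployments target.
    """
    rendered = {}
    for cluster, overrides in overrides_by_cluster.items():
        rendered[cluster] = {**base, **overrides, "cluster": cluster}
    return rendered

manifests = render_for_clusters(
    {"app": "frontend", "image": "frontend:1.4", "replicas": 2},
    {"eu-west": {"replicas": 6}, "us-east": {}},
)
```

Each cluster gets the shared defaults unless it overrides them, so a busy region can run more replicas while everything else stays identical across the fleet.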
