
Cloud Engineering Archives - Software Engineering Daily

Latest episodes

Sep 19, 2018 • 56min

Kubernetes Distributions with Brian Gracely and Michael Hausenblas

Kubernetes is an open source container management system. Kubernetes is sometimes described as “the Linux of distributed systems,” and the description makes sense: the large number of users and contributors in the Kubernetes community is comparable to the volume of Linux adopters in its early days. There are many different distributions of Linux: Ubuntu, Red Hat, Chromium OS. These operating system distributions were created to fulfill different needs. Linux is used for Raspberry Pis, Android phones, and enterprise workstations, and these different use cases require different configurations of an operating system. Similarly, there are different distributions of Kubernetes because there are different types of distributed systems. The internal infrastructure of a cloud provider might use one type of Kubernetes to serve users running application containers. A fleet of smart security cameras might be connected with a different distribution of Kubernetes. Brian Gracely and Michael Hausenblas join the show today to discuss Kubernetes distributions. Brian and Michael work at Red Hat, which helps maintain OKD (formerly “Origin”), the Red Hat Community Distribution of Kubernetes, upon which Red Hat OpenShift is based. OpenShift is a platform-as-a-service that enterprises use to deploy and manage their applications. Full disclosure: Red Hat is a sponsor of Software Engineering Daily.

Show Notes
Brian Gracely, Author at OpenShift Blog
Michael Hausenblas
12 Kubernetes distributions leading the container revolution | InfoWorld
Find the Perfect Kubernetes Distribution – The New Stack
Kubernetes distributions vs. the open source option
Cloud Native Computing Foundation Launches Certified Kubernetes Program with 32 Conformant Distributions and Platforms – Cloud Native Computing Foundation
Enterprise Kubernetes – Red Hat OpenShift
An Introduction to Enterprise Kubernetes
The coming of the Kubernetes distributions · More than seven
CNCF Cloud Native Interactive Landscape
Kubernetes Distributions and ‘Kernels’ – Tim Hockin & Michael Rubin, Google – YouTube

The post Kubernetes Distributions with Brian Gracely and Michael Hausenblas appeared first on Software Engineering Daily.
Sep 18, 2018 • 44min

Continuous Delivery Pipelines with Abel Wang

Continuous integration and delivery lets teams move faster by enabling developers to ship code independently of one another. A multi-stage CD pipeline might consist of development, staging, testing, and production. At each of these stages, a new piece of code undergoes additional tests, so that by the time it reaches production, the developers can be confident it won’t break the rest of the project. In a company, the engineers working on a software project are given permission to ship code through a continuous delivery pipeline, and employees have a strong incentive not to push buggy code to production. But what about open source contributors? What does the ideal continuous delivery workflow look like for an open source project? Abel Wang works on Azure Pipelines, a continuous integration and delivery tool from Microsoft. Azure Pipelines is designed to work with open source projects as well as companies. Abel joins the show to talk about using continuous integration and delivery within open source, and the process of designing a CI/CD tool that can work in any language and environment. Full disclosure: Microsoft is a sponsor of SE Daily.

Show Notes
Create a CI/CD pipeline for your app with the Azure DevOps Project | Microsoft Docs
Define a multi-stage CD release process | Microsoft Docs
Continuous Integration and Continuous Delivery | Visual Studio Team Services
Build and deploy your app – examples | Microsoft Docs
Extensions for Visual Studio family of products | Visual Studio Marketplace
What is Continuous Integration? – Azure DevOps | Microsoft Docs
What is Continuous Delivery? – Azure DevOps | Microsoft Docs
Deploy a Docker container app to an Azure web app | Microsoft Docs
Getting started with Azure DevOps Projects to setup CI/CD pipeline for ASP.NET Core & Containers | Microsoft Build 2018 | Channel 9

The post Continuous Delivery Pipelines with Abel Wang appeared first on Software Engineering Daily.
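The multi-stage pipeline idea described above can be sketched in a few lines. This is an illustrative model only, not Azure Pipelines syntax; the stage names and gate functions are hypothetical:

```python
# Illustrative sketch of a multi-stage CD pipeline: a build is promoted
# through environments only if every gate (test suite) for a stage passes.
# Stage names and checks are hypothetical, not Azure Pipelines syntax.

STAGES = ["dev", "staging", "testing", "production"]

def promote(build_id, gates):
    """Run each stage's gates in order; stop at the first failure.

    gates maps a stage name to a list of check functions returning bool.
    Returns the list of stages the build successfully passed.
    """
    passed = []
    for stage in STAGES:
        if not all(check(build_id) for check in gates.get(stage, [])):
            break  # a failing gate halts promotion before production
        passed.append(stage)
    return passed

# Example: unit and smoke tests pass, but an integration check fails in testing.
gates = {
    "dev": [lambda b: True],        # lint / unit tests
    "staging": [lambda b: True],    # smoke tests
    "testing": [lambda b: False],   # integration tests (failing here)
    "production": [lambda b: True],
}
print(promote("build-42", gates))  # ['dev', 'staging']
```

Because each stage gates the next, buggy code is stopped as early as possible rather than discovered in production.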
Sep 13, 2018 • 52min

Orchestrating Kubernetes with Chris Gaun

A company runs a variety of distributed systems applications such as Hadoop for batch processing jobs, Spark for data science, and Kubernetes for container management. These distributed systems tools can run on-prem, in a cloud provider, or in a hybrid system that uses on-prem and cloud infrastructure. Some enterprises use VMs, some use bare metal, some use both. Mesosphere is a company that was started to abstract the complexity of resource management away from the application developer. Instead of a developer managing virtual machines, provisioning cloud infrastructure, or wiring all that infrastructure together to run distributed applications, the developer spins up distributed applications like Kubernetes, Spark, or Jenkins on top of Mesosphere, and Mesosphere provisions the machines on the underlying infrastructure. Using Kubernetes on top of Mesos allows you to separate resource provisioning from the actual container orchestration. In a previous episode, we explored how Netflix uses Mesos with a container orchestrator on top to simplify the resource management of microservice application containers as well as data science workloads. Chris Gaun is a product manager at Mesosphere who helped build Kubernetes-as-a-service. In today’s show, he describes why it is useful to have separate layers for resource provisioning and container orchestration. He also talks about the difficulties of manually installing Kubernetes, and why Mesosphere built a Kubernetes-as-a-service product. Full disclosure: Mesosphere is a sponsor of Software Engineering Daily. The post Orchestrating Kubernetes with Chris Gaun appeared first on Software Engineering Daily.
Sep 10, 2018 • 49min

Kubernetes Continuous Deployment with Sheroy Marker

Engineering organizations can operate more efficiently by working with a continuous integration and continuous deployment workflow. Continuous integration is the process of automatically building and deploying code that gets pushed to a remote repository. Continuous deployment is the process of moving that code through a pipeline of environments, from dev to test to production. At each stage, the engineers feel increasingly safe that the code will not break the user experience. When a company adopts Kubernetes, the workflow for deploying software within that company might need to be refactored. If the company starts deploying containers in production and managing those containers with Kubernetes, it will also want a testing pipeline that emulates the production environment using containers and Kubernetes. Sheroy Marker is the head of technology at ThoughtWorks products, where he works on GoCD, a continuous delivery tool. Sheroy joins the show to talk about how Kubernetes affects continuous delivery workflows, and the process of building out Kubernetes integrations for GoCD. We also discussed the landscape of continuous delivery tools: why there are so many of them, and how to choose a continuous delivery product if you are implementing CD. Continuous delivery tooling is in some ways like the space of monitoring, logging, and analytics: there are lots of successful products in the market. Full disclosure: ThoughtWorks and GoCD are sponsors of Software Engineering Daily. The post Kubernetes Continuous Deployment with Sheroy Marker appeared first on Software Engineering Daily.
Aug 30, 2018 • 1h 5min

Kubernetes Impact with Clayton Coleman

Kubernetes is in production clusters around the world with hundreds of thousands of containers. Kubernetes provides a distributed systems management environment for small startups and giant enterprises with applications ranging from microservices to machine learning pipelines. Because the use cases are already so wide-ranging, and the project has had so much adoption, the focus of many of the Kubernetes core contributors is stability. Clayton Coleman joins the show to talk about the impact that Kubernetes is having on software engineering and the efforts of the community to improve stability. Clayton is the lead engineer for OpenShift, a platform-as-a-service from Red Hat. Autoscaling, monitoring, and etcd are a few of the topics we discuss. Improvements to each of these areas are making Kubernetes easier to work with. There is a possibility that the Prometheus monitoring system will get pulled into Kubernetes itself, and we explore the pros and cons of this architectural decision. From his experience working on OpenShift, Clayton also has a lot to share around the idea of a platform-as-a-service. Platform-as-a-service tooling can make enterprises significantly more productive, serving as a layer between a cloud provider and a developer shipping application code. Cloud providers can be complex to work with. As enterprises adopt cloud more aggressively, they are using platform-as-a-service tools as an interface for developers to work with those clouds in a more opinionated way. Kubernetes is used as a foundation for platforms like OpenShift, because Kubernetes can orchestrate resources on a cloud in a way that makes it easier for a deployment to be multicloud, or portable between clouds. In our previous episode with Clayton two years ago, we covered the basics of OpenShift and the developments occurring around Kubernetes at the time.
In today’s show we go deeper into how the Kubernetes ecosystem is evolving, and his personal experience working on OpenShift. Full disclosure: Red Hat (where Clayton works) is a sponsor of Software Engineering Daily. The post Kubernetes Impact with Clayton Coleman appeared first on Software Engineering Daily.
Aug 28, 2018 • 50min

Android Slices with Jason Monk

The main user interfaces today are the smartphone, the laptop, and the desktop computer. Some people also interact with voice interfaces, augmented reality, virtual reality, and automotive screens like Tesla's. In the future, these other interfaces will become more common, and developers will want to expose their applications to them. For example, say I am a developer who builds a podcast-playing app. I have a website and a mobile app, but what if I want to expose that app to a voice interface? Or what if I want to expose a specific piece of functionality from that app, to make shortcuts easier? Android Slices are user interface components that expose pieces of application functionality to Google Search, Google Assistant, and other applications. Jason Monk is a software engineer who works on Android Slices at Google. Jason joins the show to discuss how mobile user interfaces are changing, the motivation behind Android Slices, and the engineering behind this new building block for Android developers. The post Android Slices with Jason Monk appeared first on Software Engineering Daily.
Aug 27, 2018 • 50min

Helm with Michelle Noorali

Back in 2014, platform-as-a-service was becoming an increasingly popular idea. The idea of PaaS was to sit on top of infrastructure-as-a-service providers like Azure, AWS, or Google Cloud, and simplify some of the complexity of these infrastructure providers. Heroku had built a successful business from the idea of platform-as-a-service, and there was a widely held desire in the developer community to have an “open source Heroku.” One project that was working towards the idea of an open source platform-as-a-service was Deis. Deis made it easier for people to deploy and manage their applications, and it simplified some of the hard parts of container management. When Kubernetes came out, Deis got refactored to use Kubernetes under the hood for container orchestration. Deis was one of the first projects to use Kubernetes as a tool to build a platform-as-a-service, and the team that was working on Deis got very early exposure to the process of building a platform on top of Kubernetes. Michelle Noorali was one of the engineers on the Deis team. When Deis got acquired by Microsoft, Michelle was working on Helm, a package manager for distributed systems. Helm allows developers to deploy distributed applications on top of Kubernetes more easily. A few examples of distributed applications that can be deployed using Helm are Kafka, Prometheus, and IPFS. One reason Helm is so useful is that distributed systems are notoriously hard to configure and run. Since joining Microsoft, Michelle has continued to work on Helm. She is also a member of the Kubernetes Steering Committee and the board of the CNCF. Michelle joins the show to talk about her early experiences building PaaS and her perspective on the Kubernetes ecosystem. Full disclosure: Microsoft is a sponsor of Software Engineering Daily. The post Helm with Michelle Noorali appeared first on Software Engineering Daily.
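The core idea behind a package manager like Helm, rendering a manifest template from user-supplied values and handing the result to the orchestrator, can be illustrated with a toy sketch. Real Helm uses Go templates packaged into charts; everything below is a simplified Python stand-in, not Helm's actual implementation:

```python
# Toy analogue of what a chart-based package manager like Helm does:
# fill a manifest template with values, then apply the result to the
# cluster. Helm really uses Go templates and charts; this sketch only
# illustrates the template-plus-values idea.
from string import Template

MANIFEST_TEMPLATE = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $name
spec:
  replicas: $replicas
""")

def render(values):
    """Render the manifest from a values dict (akin to `helm install -f values.yaml`)."""
    return MANIFEST_TEMPLATE.substitute(values)

manifest = render({"name": "kafka", "replicas": 3})
print(manifest)
```

The value of the real tool is that a chart bundles many such templates, their default values, and their dependencies, so a complex distributed system installs with one command.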
Aug 24, 2018 • 1h 1min

Build Faster with Nader Dabit

Building software today is much faster than it was just a few years ago. The tools are higher level, and they abstract away tasks that would have required months of development. Much of a developer’s time used to be spent optimizing databases, load balancers, and queueing systems to handle the load created by thousands of users. Today, scalability is built into much of our infrastructure by default. After several years of infrastructure with automatic scalability, some of the more recent advances in developer tooling are about convenience and faster development time. Developers are spending less time dealing with the ambiguous idea of a “server” and more time interacting with well-defined APIs and data sources. A few examples are AppSync from Amazon Web Services and Firebase from Google. These tools are like databases with rich interactive functionality. Instead of having to create a server that listens to a database for changes and pushes notifications to users in response, AppSync and Firebase can be programmed with this kind of functionality built in. There are many other examples of high-level APIs, rich backends, and developer productivity tools that lead to shorter development time. What does this mean for developers? It means we can build much faster. We can prototype quickly, for little money, without sacrificing quality. We can spend more time on design, user experience, and business models, and less time on keeping the application up and running. Nader Dabit is a developer advocate at Amazon Web Services, and he returns to the show to discuss modern tooling and how it changes the potential for high output and fast iteration among developers. It is a strategic, philosophical discussion of how to build modern software.

Show Notes
State of React Native 2018

The post Build Faster with Nader Dabit appeared first on Software Engineering Daily.
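The push-on-change behavior described above can be sketched as a tiny observer pattern: clients subscribe to a key, and writes to that key are delivered to them automatically, with no hand-rolled notification server. The `ReactiveStore` class and its methods are hypothetical illustrations, not the Firebase or AppSync SDKs:

```python
# Minimal sketch of the reactive-backend idea behind tools like Firebase
# and AppSync: subscribers to a key receive every write to that key.
# This class and its API are illustrative, not either product's real SDK.

class ReactiveStore:
    def __init__(self):
        self._data = {}
        self._subscribers = {}  # key -> list of callbacks

    def subscribe(self, key, callback):
        """Register a callback to be invoked on every write to `key`."""
        self._subscribers.setdefault(key, []).append(callback)

    def set(self, key, value):
        self._data[key] = value
        for cb in self._subscribers.get(key, []):
            cb(value)  # push the change to every listener

store = ReactiveStore()
received = []
store.subscribe("episodes/latest", received.append)
store.set("episodes/latest", "Build Faster with Nader Dabit")
print(received)  # ['Build Faster with Nader Dabit']
```

The hosted services add persistence, authentication, and delivery over the network, but the programming model the developer sees is roughly this.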
Aug 21, 2018 • 40min

OLIO: Food Sharing with Lloyd Watkin

Food gets thrown away from restaurants, homes, catering companies, and any other place with a kitchen. Most of this food gets thrown away while it is still edible, and could provide nutrition to someone who is hungry. Just like Airbnb makes use of excess living capacity, OLIO was started to connect excess food with people who want to eat that food. There are numerous challenges with this idea. How do you control quality and ensure the food is safe? How do you make money as a business? How do you solve the chicken-and-egg problem, and make sure you get hungry users and people with food to give away at the same time? Lloyd Watkin is a software engineer at OLIO, and he joins today’s episode to describe how the platform works, how it is built, and how the company plans to scale its large base of volunteers. It’s a fascinating set of operational and engineering issues. The post OLIO: Food Sharing with Lloyd Watkin appeared first on Software Engineering Daily.
Aug 14, 2018 • 49min

Infrastructure Monitoring with Mark Carter

At Google, the job of a site reliability engineer involves building tools to automate infrastructure operations. If a server crashes, there is automation in place to create a new server. If a service starts to receive a high load of traffic, there is automation in place to scale up the instances of that service. In order to create an automated response to an infrastructure problem, a site reliability engineer needs insights into that infrastructure. Every service needs tools around monitoring, alerting, debugging, and distributed tracing. One benefit of working at a large company like Google is that an engineer building a new product gets this kind of tooling by default. If I am hacking on a project at home, I have to set up all kinds of tools to help me diagnose and resolve problems. Setting up this tooling takes time, and requires expertise. Stackdriver is a set of tools and instrumentation that allows developers to monitor, debug, and inspect infrastructure. Stackdriver is based on the internal observability tools built for Google. Mark Carter is a group product manager at Google, and he joins the show to discuss site reliability engineering and the creation of Stackdriver. The post Infrastructure Monitoring with Mark Carter appeared first on Software Engineering Daily.
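At their core, the alerting rules a monitoring system evaluates boil down to checks like the one below: fire when a metric stays above a threshold for some number of consecutive samples. The function name and thresholds here are illustrative assumptions, not Stackdriver's actual API:

```python
# Toy alerting rule of the kind a monitoring system evaluates: alert
# when a metric exceeds a threshold for N consecutive samples, so a
# single transient spike does not page anyone. Names and thresholds
# are illustrative, not Stackdriver's real API.

def should_alert(samples, threshold, for_n):
    """Return True if the last `for_n` samples all exceed `threshold`."""
    if len(samples) < for_n:
        return False
    return all(s > threshold for s in samples[-for_n:])

cpu = [0.42, 0.55, 0.91, 0.95, 0.97]
print(should_alert(cpu, threshold=0.9, for_n=3))  # True
print(should_alert(cpu, threshold=0.9, for_n=5))  # False
```

Requiring the condition to hold "for N samples" is the same duration-based debouncing that production alerting policies use to cut false positives.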
