

The New Stack Podcast
The New Stack
The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software.
For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack
Episodes

Dec 14, 2022 • 41min
Redis Looks Beyond Cache Toward Everything Data
Redis, best known as a data cache or real-time data platform, is evolving into much more, Tim Hall, chief of product at the company, told The New Stack in a recent TNS Makers podcast. “Redis is an in-memory database, or memory-first database, which means the data lands there and people are using us for both caching and persistence,” he said. These days the company offers a number of flexible data models, and one of the brand promises of Redis is that developers can store data in the form they are already working with. “As opposed to a SQL database, where you might have to turn your data structures into columns and tables, you can actually store the data structures that you're working with directly into Redis,” Hall said. (A brief sketch of what that looks like appears below.)
Primary Database?
“About 40% of our customers today are using us as a primary database technology,” he said. “That may surprise some people. If you're sort of a classic Redis user and you knew us from in-memory caching, you probably didn't realize we added a variety of mechanisms for persistence over the years.” To persist the data, Redis stores it on disk behind the scenes while keeping a copy in memory. If there's any sort of failure, Redis can recover the data off disk, replay it into memory and get you back up and running; that mechanism has been around for about half a decade now. Yet Redis is playing what Hall called the “long game,” particularly in terms of continuing to reach out to developers and showing them what the latest capabilities are. “If you look at the top 10 databases on the planet, they've all moved into the multi-model category. And Redis is no different from that perspective,” Hall said. “So if you look at Oracle, it was traditionally a relational database, Mongo is traditionally a JSON document store only, and obviously Redis is a key-value store. We've all moved down the field now. Now, why would we do that? We're all looking to simplify the developer's world, right?” Yet each vendor is really trying to leverage its core differentiation and expand out from there, and the good news for Redis is that speed is its core differentiation. “Why would you want a slow data platform? You don't,” Hall said. “So the more that we can offer those extended capabilities for working with things like JSON, or t-digest, a data structure we just launched, or Bloom filters, a probabilistic data structure we've supported for a while, the more we expand our footprint. We're saying if you need speed, and reducing latency and having high interactivity is your goal, Redis should be your starting point. If you want some esoteric edge-case functionality where you need to manipulate JSON in some very strange way, you probably should go with Mongo; we probably won't support that for a long time. But if you're just working with the basic data structures, you need to be able to query, you need to be able to update your JSON document. Those straightforward use cases we support very, very well, and we support them at speed and scale.”
Customer View
As a Redis customer, Alain Russell, CEO at Blackpepper, a digital e-commerce agency in Auckland, New Zealand, said his firm has undergone the same transition. “We started off with Redis as a cache; that helped us speed up traditional data that was slower than we wanted,” he said. “And then we went down a cloud path a couple of years ago.
Part of that migration included us becoming, you know, what's deemed as ‘cloud native.’ And we started using all of these different data stores and data structures, and dealing with all of them is actually complicated. You know, and from a developer perspective, it can be a bit painful.” So Blackpepper started looking for ways to make things simpler while keeping its platform very fast, and it looked at Redis Stack. “And honestly, it filled all of our needs in one platform. And we're kind of in this path at the moment; we were using the basics of it. And we're very early on in our journey, right? We're still learning how things work and how to use it properly. But we also have a big list of things that we're using other data stores for, traditional data, and working out, okay, this will be something that we will migrate to, you know, because we use persistence heavily now in Redis.” Twenty-year-old Blackpepper works predominantly with traditional retailers and helps them on their omnichannel journey.
Commercial vs. Open Source
Hall said there are three modes of access to the Redis technology: the Redis open source project; Redis Stack, which the company recommends developers start with today; and Redis Enterprise Edition, which is available as software or in the cloud. “It's the most popular NoSQL database on the planet six years running,” Hall said. “And people love it because of its simplicity.” Meanwhile, it takes effort to maintain both the commercial product and the open source project. Hall, who has also worked at Hortonworks and InfluxData, said, “Not every open source company is the same in terms of how you make decisions about what lands in your commercial offering and what lands in open source, and where the contributions come from and who's involved.” For instance, “if there was something that somebody wanted to contribute that was going to go against our commercial interest, we probably would not merge that,” Hall said. Redis was run by project founder Salvatore Sanfilippo for many, many years, and he was the sole arbiter of what landed and what did not land in Redis itself. Then, over the last couple of years, Redis created a core steering committee. It's made up of one individual from AWS, one from Alibaba, and three Redis employees, who look after the contributions coming in from Redis open source community members. “And then we reconcile what we want from a commercial interest perspective, either upstream, or things that, frankly, may have been commoditized and that we want to push downstream into the open source offering,” Hall said. “And so the thing that you're asking about is sort of my core existential challenge all the time: figuring out where we're going from a commercial perspective. What do we want to land there first? And how can we create a conveyor belt of commercial opportunity that keeps us in business as a software company, creating differentiation as potential competitors show up? And then over time, making sure that those things that do become commoditized, or maybe are not as differentiating anymore, I want to release those to the open source community. But this upstream/downstream kind of challenge is something that we're constantly working through.” Blackpepper was an open source user initially: it first used Memcached to speed up data, then migrated to Redis when it moved to the AWS cloud, Russell said.
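Hall's point about storing the data structures you are already working with, rather than flattening them into tables, is easy to see in code. Below is a minimal sketch, assuming a local Redis Stack instance and the redis-py client; the key names and the cart structure are hypothetical.

```python
# A minimal sketch, assuming a local Redis Stack instance and the redis-py client
# (pip install redis). The key names and the cart structure are hypothetical.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Store the structure you are already working with, here a hash, instead of
# mapping it onto rows and columns first.
r.hset("session:1234", mapping={"user": "alice", "items_in_cart": "3"})
print(r.hgetall("session:1234"))

# With the JSON capability Hall mentions (part of Redis Stack), a nested document
# can be stored and read back as-is rather than being flattened into tables.
r.json().set("cart:1234", "$", {"user": "alice", "items": [{"sku": "A1", "qty": 2}]})
print(r.json().get("cart:1234"))
```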
Listen to the Podcast
The Redis TNS Makers podcast goes on to look at the use of AI/ML in the platform, the acquisition of RESP.app, the importance of JSON and RediSearch, and where Redis is headed in the future. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Dec 7, 2022 • 26min
Couchbase’s Managed Database Services: Computing at the Edge
Let’s say you’re a passenger on a cruise ship. Floating in the middle of the ocean, far from reliable Wi-Fi, you wear a device that lets you into your room, that discreetly tracks your move from the bar to the dinner table to the pool and delivers your drink order wherever you are. You can buy sunscreen or toothpaste or souvenirs in the ship’s stores without touching anything. If you’re a Carnival Cruise Lines passenger, this is reality right now, in part because of the company’s partnership with Couchbase, according to Mark Gamble, product and solutions marketing director at Couchbase. Couchbase provides a cloud native, NoSQL database technology that's used to power applications for customers including Carnival, but also Amadeus, Comcast, LinkedIn and Tesco. In Carnival’s case, Gamble said, “they run an edge data center on their ships to power their Ocean Medallion application, which they are super proud of. They use it a lot in their ads, because it provides a personalized service, which is a differentiator for them to their customers.” In this episode of The New Stack Makers, Gamble spoke to Heather Joslyn, features editor of TNS, about edge computing, 5G, and Couchbase Capella, its Database as a Service (DBaaS) offering for enterprises. This episode of Makers was sponsored by Couchbase.
5G and Offline-First Apps
The goal of edge computing, Gamble told our podcast audience, is to bring data and compute closer to the applications that consume them. This speeds up data processing, he said, “because data doesn't have to travel all the way to the cloud and back.” But it also has other benefits. “This serves to make applications more reliable, because local data processing sort of removes internet slowness and outages from the equation,” he said. The innovation of 5G networks has also had a big impact on reducing latency and increasing uptime, Gamble said. “To compare with 4G, things like the average round trip data travel time between the device and the cell tower is like 15 milliseconds. And with 5G, that latency drops to like two milliseconds. And 5G can support, they say, a million devices within a third of a mile radius, way more than what's possible with 4G.” But 5G, Gamble said, “really requires edge computing to realize its full potential.” Increasingly, he said, Couchbase hears interest from its customers in building “offline-first” applications, which can run even in Wi-Fi dead zones. The use cases, he said, are everywhere: “When I pass a fast food restaurant, it's starting to become more common, where you'll see that, instead of just a box you're talking to, there's a person holding a tablet, and they walk down the line, and they're taking orders. And as they come closer to the restaurant, it syncs up with the kitchen. They find that just a better, more efficient way to serve customers. And so it becomes a competitive differentiator for them.” As part of its Capella product, Couchbase recently announced Capella App Services, a new capability for mobile developers: a fully managed backend designed for mobile, Internet of Things (IoT) and edge applications. “Developers use it to access and sync data between the Database as a Service and their edge devices, and it handles authenticating and managing mobile and edge app users,” he said. Used in conjunction with Couchbase Lite, a lightweight, embedded NoSQL database for mobile and IoT devices, Capella App Services synchronizes the data between backend and edge devices.
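The offline-first pattern Gamble describes boils down to writing locally on the device and syncing upstream whenever connectivity allows. Here is a conceptual sketch of that pattern; it deliberately does not use the Couchbase Lite or Capella App Services APIs, and the LocalStore and push_to_backend names are hypothetical stand-ins.

```python
# A conceptual sketch of an offline-first app: writes always land locally, and a
# sync step pushes them upstream when the network is back. Not the Couchbase API.
import json
import sqlite3

class LocalStore:
    """Embedded store on the edge device; writes always succeed locally."""
    def __init__(self, path="edge.db"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS pending (id INTEGER PRIMARY KEY, doc TEXT)")

    def save(self, doc: dict):
        self.db.execute("INSERT INTO pending (doc) VALUES (?)", (json.dumps(doc),))
        self.db.commit()

    def drain(self):
        for row_id, doc in self.db.execute("SELECT id, doc FROM pending").fetchall():
            yield row_id, json.loads(doc)

    def ack(self, row_id: int):
        self.db.execute("DELETE FROM pending WHERE id = ?", (row_id,))
        self.db.commit()

def sync(store: LocalStore, push_to_backend):
    """Called whenever connectivity returns; pushes queued writes upstream."""
    for row_id, doc in store.drain():
        try:
            push_to_backend(doc)   # e.g. an HTTPS call to the managed backend
            store.ack(row_id)
        except OSError:
            break                  # still offline; keep the write queued
```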
Even for workers in remote areas, “eventually, you have to make sure that data updates are shared with the rest of the ecosystem,” Gamble said. “And that's what App Services is meant to do, as connectivity allows — so during network disruptions in areas with no internet, apps will still continue to operate.” Check out the rest of the conversation to learn more about edge computing and the challenges Gamble thinks still need to be addressed in that space. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Dec 1, 2022 • 16min
Open Source Underpins A Home Furnishings Provider’s Global Ambitions
Wayfair describes itself as “the destination for all things home: helping everyone, anywhere create their feeling of home.” It provides an online platform for buying home furniture, outdoor decor and other furnishings, and it supports its suppliers so they can use the platform to sell their home goods, explained Natali Vlatko, global lead of the open source program office (OSPO) and senior software engineering manager at Wayfair, the featured guest in Detroit during KubeCon + CloudNativeCon North America 2022. “It takes a lot of technical work behind the scenes to kind of get that going,” Vlatko said. This is especially true as Wayfair scales its operations worldwide. The infrastructure must be highly distributed, relying on containerization, microservices, Kubernetes and, especially, open source to get the job done. “We have technologists throughout the world, in North America and throughout Europe as well,” Vlatko said. “And we want to make sure that we are utilizing cloud native and open source, not just as technologies that fuel our business, but also as the ways that are great for us to work in now.” Open source has served as a “great avenue” for creating and offering technical services, and to accomplish that, Vlatko amassed the requisite talent, she said: a small team of engineers focused on platform work, advocacy, community management and, internally, compliance with licenses. About five years ago, when Vlatko joined Wayfair, the company had yet to go “full tilt into going all cloud native,” she said. Wayfair had a hybrid mix of on-premises and cloud infrastructure. After decoupling from a monolith into a microservices architecture, “that journey really began where we understood the really great benefits of microservices and got to a point where we thought, ‘okay, this hybrid model for us actually would benefit our microservices being fully in the cloud,’” Vlatko said. In late 2020, Wayfair made the decision to “get out of the data centers” and shift operations to the cloud, a move that was completed in October, Vlatko said. The company culture is such that engineers have room to experiment without major fear of failure, by doing a lot of development work in a sandbox environment. “We've been able to create sandbox environments that are close to our production environments so that experimentation can occur. Folks can learn as they go without actually fearing failure or fearing a mistake,” Vlatko said. “So, I think experimentation is a really important aspect of our own learning and growth for cloud native. Also, coming to great events like KubeCon + CloudNativeCon and other events [has been helpful]. We're hearing from other companies who've done the same journey and process and are learning from the use cases.” Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Nov 30, 2022 • 16min
ML Can Prevent Getting Burned For Kubernetes Provisioning
In the rush to create, provision and manage Kubernetes, proper resource provisioning is often left out. According to StormForge, a company spending, for example, a million dollars a month on cloud computing resources is likely wasting $6 million a year on Kubernetes resources that are left unused. The reasons for this are manifold and can vary: DevOps teams tend to estimate too conservatively or too aggressively, or simply overspend on resource provisioning. In this podcast with StormForge’s Yasmin Rajabi, vice president of product management, and Patrick Bergstrom, its CTO, we look at how to properly provision Kubernetes resources and the associated challenges. The podcast was recorded live in Detroit during KubeCon + CloudNativeCon North America 2022. Almost ironically, the most commonly used Kubernetes resources can even complicate the ability to optimize resources for applications. The processes typically involve Kubernetes resource requests and limits, and predicting how the resources might impact quality of service for pods. Developers deploying an application on Kubernetes often need to set CPU requests, memory requests and other resource limits (a minimal example appears below). “They are usually like ‘I don't know — whatever was there before or whatever the default is,’” Rajabi said. “They are in the dark.” Sometimes, developers might use their favorite observability tool and say, “we look where the max is, and then take a guess,” Rajabi said. “The challenge is, if you start from there when you start to scale that out — especially for organizations that are using horizontal scaling with Kubernetes — then you're taking that problem and you're just amplifying it everywhere,” Rajabi said. “And so, when you've hit that complexity at scale, taking a second to look back and say, ‘how do we fix this?’ you don't want to just arbitrarily go reduce resources, because you have to look at the trade-off of how that impacts your reliability.” The process then becomes very hit or miss. “That's where it becomes really complex, when there are so many settings across all those environments, all those namespaces,” Rajabi said. “It's almost a problem that can only be solved by machine learning, which makes it very interesting.” But before organizations learn the hard way that they need to automate the optimization of Kubernetes deployments and management, many resources — and costs — go to waste. “It's one of those things that becomes a bigger and bigger challenge, the more you grow as an organization,” Bergstrom said. Many StormForge customers are deploying into thousands of namespaces and thousands of workloads. “You are suddenly trying to manage each workload individually to make sure it has the resources and the memory that it needs,” Bergstrom said. “It becomes a bigger and bigger challenge.” The process should actually be pain-free when ML is properly implemented. Through StormForge’s partnership with Datadog, it is possible to collect historical data and apply ML to it, Bergstrom explained. “Then, within just hours of us deploying our algorithm into your environment, we have machine learning that's used two to three weeks' worth of data to train that can then automatically set the correct resources for your application. This is because we know what the application is actually using,” Bergstrom said. “We can predict the patterns and we know what it needs in order to be successful.” Hosted by Simplecast, an AdsWizz company. 
See pcm.adswizz.com for information about our collection and use of personal data for advertising.
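For readers unfamiliar with the settings Rajabi and Bergstrom are talking about, here is a minimal sketch of CPU and memory requests and limits using the official Kubernetes Python client. The container name, image and numbers are hypothetical; picking those numbers well is exactly the guesswork StormForge aims to replace with machine learning.

```python
# A minimal sketch of the CPU/memory requests and limits discussed above, using the
# official Kubernetes Python client (pip install kubernetes). The deployment name,
# image and values are hypothetical.
from kubernetes import client

container = client.V1Container(
    name="checkout-api",
    image="registry.example.com/checkout-api:1.4.2",
    resources=client.V1ResourceRequirements(
        # What the scheduler reserves for the pod ...
        requests={"cpu": "250m", "memory": "256Mi"},
        # ... and the ceiling the kubelet enforces.
        limits={"cpu": "500m", "memory": "512Mi"},
    ),
)

pod_spec = client.V1PodSpec(containers=[container])
print(container.resources.requests, container.resources.limits)
```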

Nov 29, 2022 • 27min
What’s the Future of Feature Management?
Feature management isn’t a new idea, but lately it’s a trend that has picked up speed. Analysts like Forrester and Gartner have cited adoption of the practice as being, respectively, “hot” and “the dominant approach to experimentation in software engineering.” A study released in November found that 60% of the 1,000 software and IT professionals surveyed started using feature flags only in the past year; the report was sponsored by LaunchDarkly, the feature management platform, and conducted by Wakefield Research. At the heart of feature management are feature flags, which give organizations the ability to turn features on and off without having to redeploy an entire app (a simple illustration of the mechanics appears below). Feature flags allow organizations to test new features and control things like access to premium versions of a customer-facing service. An overall feature management practice that includes feature flags allows organizations “to release progressively any new feature to any segment of users, any environment, any cohort of customers in a controlled manner that really reduces the risk of each release,” said Ravi Tharisayi, senior director of product marketing at LaunchDarkly, in this episode of The New Stack Makers podcast. Tharisayi talked to The New Stack’s features editor, Heather Joslyn, about the future of feature management, on the eve of the company’s latest Trajectory user conference. This episode of Makers was sponsored by LaunchDarkly.
Streamlining Management, Saving Money
The participants in the new survey worked at companies of at least 200 employees, and nearly all of those that use feature flags — 98% — said they believe the flags save their organizations money and demonstrate a return on investment. Furthermore, 70% said that their company views feature management as either a mission-critical or a high-priority investment. Fielding the annual survey, Tharisayi said, has offered a window into how organizations are using feature flags. Fifty-five percent of customers in the 2022 survey said they use feature flags as long-term operational controls — for API rate limiting, for instance, to prioritize certain API calls in high-traffic situations. The second most common use, the survey found — cited by 47% of users — was for entitlements, “managing access to different types of plans, premium plans versus other plans, for example,” Tharisayi said. “This is really a powerful capability because of this ability to allow product managers or other personas to manage who has access to certain features, to certain plans, without having to have developers be involved,” he said. “Previously, that required a lot of developer involvement.”
Experimentation, Metrics, Cultural Shifts
LaunchDarkly, Tharisayi said, has been investing in and improving its platform’s experimentation and measurement capabilities: “At the core of that is this notion that experimentation can be a lot more successful when it's tightly integrated to the developer workflow.” As an example, he pointed to CCP Games, maker of the game EVE Online, which serves millions of players. “They were recently thinking through how to evolve their recommendation engine, because they wanted this engine to recommend actions for their gamers that will hopefully increase their ultimate North Star metric,” its tracking of how much time gamers spend with their games. By using LaunchDarkly’s platform, CCP was able to run A/B tests and increase gamers’ session lengths and engagement. 
“So that's the kind of capability that we think is going to be an increasing priority,” Tharisayi said. As feature management matures and standardizes, he pointed to the adoption of DevOps as both a model and a cautionary tale. “When it comes to cultural shifts, like DevOps or feature management, that require teams to work in a different way, oftentimes there can be early success with a small team,” Tharisayi said. “But then there can be some cultural and process barriers as you're trying to standardize to the team level and multi-team level, before figuring out the kinks in deploying it at an organization-wide level.” He added, “That's one of the trends that we observed a little bit in this survey: there are some cultural elements to getting success at scale with something like feature management, and the opportunity as an industry to support organizations as they're making that quest to standardize a practice like this, like any other cultural practice.” Check out the full episode for more on the survey and on what’s next for feature management. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
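As a companion to Tharisayi's description, here is a bare-bones sketch of what a feature-flag gate looks like: a flag controls whether a code path runs, and a percentage rollout releases it to a slice of users without a redeploy. This is a hypothetical stand-in written for illustration, not the LaunchDarkly SDK; the flag names and values are invented.

```python
# A conceptual feature-flag gate with a percentage rollout. Hypothetical flag store;
# a real platform would evaluate flags served by a managed service, not a local dict.
import hashlib

FLAGS = {
    "new-checkout-flow": {"enabled": True, "rollout_percent": 20},
    "premium-reports":   {"enabled": True, "rollout_percent": 100},  # entitlement-style flag
}

def flag_enabled(flag_key: str, user_key: str) -> bool:
    flag = FLAGS.get(flag_key)
    if not flag or not flag["enabled"]:
        return False
    # Hash the user into a stable bucket so the same user always gets the same answer.
    bucket = int(hashlib.sha256(f"{flag_key}:{user_key}".encode()).hexdigest(), 16) % 100
    return bucket < flag["rollout_percent"]

if flag_enabled("new-checkout-flow", user_key="user-42"):
    print("serve the new experience")
else:
    print("fall back to the existing code path")
```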

Nov 23, 2022 • 16min
Chronosphere Nudges Observability Standards Toward Maturity
DETROIT — Rob Skillington’s grandfather was a civil engineer, working in an industry that, over more than a century, developed processes and know-how that enabled the creation of buildings, bridges and roads. “A lot of those processes matured to a point where they could reliably build these things,” said Skillington, co-founder and chief technology officer at Chronosphere, an observability platform. “And I think about observability as that same maturity of engineering practice. When it comes to building software that actually is useful in the world, it is this process that helps you actually achieve the deployment and operation of these large scale systems that we use every day.” Skillington spoke about the evolution of observability, and his company’s recent donation of an open source project to Prometheus, in this episode of The New Stack Makers podcast. Heather Joslyn, features editor of TNS, hosted the conversation. This On the Road edition of The New Stack Makers was recorded at KubeCon + CloudNativeCon North America, in the Motor City. The episode was sponsored by Chronosphere.
A Donation to the Prometheus Project
Helping observability practices grow as mature and reliable as the civil engineering rules that help build sturdy skyscrapers is a tough task, Skillington suggested. In the cloud era, he said, “you have to really prepare the software for a whole set of runtime environments. And so the challenge around that is really about making it consistent, well understood and robust.” At KubeCon in late October, Chronosphere and PromLabs (founded by Julius Volz, creator of Prometheus) announced that they had donated their open source project PromLens to the Prometheus project, the open source monitoring and alerting system. The donation is a way of placing a bet on a tool that integrates well with Kubernetes. “There's this real yearning for essentially a standard that can be built upon by everyone in the industry, when it comes to these core primitives, essentially,” Skillington said. “And Prometheus is one of those primitives. We want to continue to solidify that as a primitive that stands the test of time.” “We can't build a self-driving car if we're always building a different car,” he added. PromLens builds Prometheus queries in a sort of integrated development environment (IDE), Skillington said. It also makes it easier for more people in an organization to create queries and understand the meaning and seriousness of alerts. The PromLens tool breaks queries into a visual format and allows users to edit them through a UI. “Basically, it's kind of like a What You See Is What You Get editor, or WYSIWYG editor, for Prometheus queries,” Skillington said. “Some of our customers have tens of thousands of these alerts defined in PromQL, which is the query language for Prometheus,” he noted. “Having a tool like an integrated development environment — where you can really understand these complex queries and iterate faster on setting these up and getting back to your day job — is incredibly important.” Check out the full episode for more on PromLens and the current state of observability. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
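To make the PromQL discussion concrete, here is a minimal sketch of the kind of nested query PromLens is designed to break apart visually, run against Prometheus's standard HTTP query API. It assumes a Prometheus server at localhost:9090, and the metric and label names are hypothetical.

```python
# A minimal sketch of running a PromQL query against Prometheus's HTTP API
# (assumes a server at localhost:9090; metric and label names are hypothetical).
import requests

# Error ratio per service over the last 5 minutes: the sort of nested expression
# that is easier to reason about when an IDE breaks it into its sub-queries.
query = (
    'sum by (service) (rate(http_requests_total{status=~"5.."}[5m]))'
    ' / sum by (service) (rate(http_requests_total[5m]))'
)

resp = requests.get("http://localhost:9090/api/v1/query", params={"query": query})
resp.raise_for_status()
for result in resp.json()["data"]["result"]:
    print(result["metric"].get("service", "unknown"), result["value"][1])
```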

Nov 23, 2022 • 12min
How Boeing Uses Cloud Native
In this latest podcast from The New Stack, we spoke with Ricardo Torres, who is the chief engineer of open source and cloud native for aerospace giant Boeing. Torres also joined the Cloud Native Computing Foundation in May to serve as a board member. In this interview, recorded at KubeCon+CloudNativeCon last month, Torres speaks about Boeing's use of open source software, as well as its adoption of cloud native technologies. While we may think of Boeing as an airplane manufacturer, it would be more accurate to think of the company as a large-scale system integrator, one that uses a lot of software. So, like other large-scale companies, Boeing sees a distinct advantage in maintaining good relations with the open source community. "Being able to leverage the best technologists out there in the rest of the world is of great value to us strategically," Torres said. This strategy allows Boeing to "differentiate on what we do as our core business rather than having to reinvent the wheel all the time on all of the technology." Like many other large companies, Boeing has created an open source office to better work with the open source community. Although Boeing is primarily a consumer of open source software, it still wants to work with the community. "We want to make sure that we have a strategy around how we contribute back to the open source community, and then leverage those learnings for inner sourcing," he said. Boeing also manages how it uses open source internally, keeping tight controls on the supply chain of open source software it uses. "As part of the software engineering organization, we partner with our internal IT organization to look at our internet traffic and assure nobody's going out and downloading directly from an untrusted repository or registry. And then we host [it] instead; we have approved sources internally." It's not surprising that Boeing, which deals with a lot of government agencies, embraces the practice of using software bills of materials (SBOMs), which provide a full listing of what components are being used in a software system. In fact, the company has been working to extend the comprehensiveness of SBOMs, according to Torres. "I think one of the interesting things now is the automation," he said of SBOMs. "And so we're always looking to beef up the heuristics, because a lot of the tools are relatively naïve, in that they trust that the dependencies that are specified are actually representative of everything that's delivered. And that's not good enough for a company like Boeing. We have to be absolutely certain that what's there is exactly what we expected to be there."
Cloud Native Computing
While Boeing builds many systems that reside in private data centers, the company is also increasingly relying on the cloud. Earlier this year, Boeing signed agreements with the three largest cloud service providers (CSPs): Amazon Web Services, Microsoft Azure and the Google Cloud Platform. "A lot of our cloud presence is about our development environments. And so, you know, we have cloud-based software factories that are using a number of CNCF and CNCF-adjacent technologies to enable our developers to move fast," Torres said. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
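Torres's point is that many SBOM tools simply trust the declared dependency list. A hedged sketch of the kind of cross-check he is describing might compare a CycloneDX-style JSON SBOM against what is actually installed in the environment; the sbom.json path is hypothetical, and a real pipeline would verify far more than package names and versions.

```python
# A hedged sketch: do not just trust the declared dependency list; compare the SBOM
# against what is actually present. Assumes a CycloneDX-style JSON SBOM at ./sbom.json
# (hypothetical path) and checks it against the Python packages actually installed.
import json
from importlib.metadata import distributions

with open("sbom.json") as f:
    sbom = json.load(f)

declared = {(c["name"].lower(), c.get("version", "")) for c in sbom.get("components", [])}
installed = {(d.metadata["Name"].lower(), d.version or "") for d in distributions()}

print("Installed but missing from the SBOM:", sorted(installed - declared))
print("Declared but not actually present:", sorted(declared - installed))
```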

Nov 22, 2022 • 14min
Case Study: How Dell Technologies Is Building a DevRel Team
DETROIT — Developer relations, or DevRel to its friends, is not only a coveted career path but also essential to helping developers learn and adopt new technologies. That guidance is a matter of survival for many organizations. The cloud native era demands new skills and new ways of thinking about developers’ and engineers’ day-to-day jobs. At Dell Technologies, it meant responding to the challenges faced by its existing customer base, which is “very Ops centric — server admins, system admins,” according to Brad Maltz of Dell. With the rise of the DevOps movement, “what we realized is our end users have been trying to figure out how to become infrastructure developers,” said Maltz, the company’s senior director of DevOps portfolio and DevRel. “They've been trying to figure out how to use infrastructure as code, Kubernetes, cloud, all those things.” “And what that means is we need to be able to speak to them where they want to go, when they want to become those developers. That’s led us to build out a developer relations program ... and in doing that, we need to grow out the community, and really help our end users get to where they want to.” In this episode of The New Stack’s Makers podcast, Maltz spoke to Heather Joslyn, TNS features editor, about how Dell has, since August, been busy creating a DevRel team to aid its enterprise customers seeking to adopt DevOps as a way of doing business. This On the Road edition of Makers, recorded at KubeCon + CloudNativeCon North America in the Motor City, was sponsored by Dell Technologies.
Recruiting Influencers
Maltz, an eight-year veteran of Dell, has moved quickly in assembling his team, with three hires made by late October and a fourth planned before year’s end. That’s lightning fast, especially for a large, established company like Dell, which was founded in 1984. “There's two ways of building a DevRel team,” he said. “One way is to actually kind of go and try to homegrow people on the inside and get them more presence in the community. That's the slower road. “But we decided we have to go and find industry influencers that believe in our cause, that believe in the problem space that we live in. And that's really how we started this: we went out to find some very, very strong top talent in the industry and bring them on board.” In addition to spreading the DevOps solutions gospel at conferences like KubeCon, Maltz’s vision for the team is currently focused on social media and building out a website, developer.dell.com, which will serve as the landing page for the company’s DevRel knowledge, including links to community, training, how-to videos and an API marketplace. In building the team, the company made an unorthodox choice. “We decided to put DevRel into product management on the product side, not marketing,” Maltz said. “The reason we did that was we want the DevRel folks to really focus on community contributions, education, all that stuff. “But while they're doing that, their job is to bring the data back from those discussions they're having in the field back to product management, to enable our tooling to be able to satisfy some of those problems that they're bringing back so we can start going full circle.”
Facing the Limits of ‘Shift Left’
The roles that Dell’s DevRel team is focusing on in the DevOps culture are site reliability engineers (SREs) and platform engineers. These not only align with its traditional audience of Ops engineers, but reflect a reality Dell is seeing in the wider tech world. 
“The reality is, application developers don't want to shift left; they don't want to operate. They want somebody else to take it, and they want to keep developing,” Maltz said. “Where DevOps has transitioned for us is, how do we help those people that are kind of that operator turning into infrastructure developer fit into that DevOps culture?” The rise of platform engineering, he suggested, is a reaction to the endless choices of tools available to developers these days. “The notion is developers in the wild are able to use any tool on any cloud with any language, and they can do whatever they want. That's hard to support,” he said. “That's where DevOps got introduced, and was to basically say, ‘Hey, we're gonna put you into a little bit of a box, just enough of a box that we can start to gain control and get ahead of the game.’ The platform engineering team, in this case, they're the ones in charge of that box.” But all of that, Maltz said, doesn’t mean that “shift left” — giving devs greater responsibility for their applications — is dead. It simply means most organizations aren’t ready for it yet: “That will take a few more years of maturity within these DevOps operating models, and other things that are coming down the road.” Check out the full episode for more from Maltz, including new solutions from Dell aimed at platform engineers and SREs and collaborations with Red Hat OpenShift. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Nov 17, 2022 • 31min
Kubernetes and Amazon Web Services
Cloud giant Amazon Web Services manages the largest number of Kubernetes clusters in the world, according to the company. In this podcast recording, AWS Senior Engineer Jay Pipes discusses AWS' use of Kubernetes, as well as the company's contribution to the Kubernetes code base. The interview was recorded at KubeCon North America last month.
The Difference Between Kubernetes and AWS
Kubernetes is an open source container orchestration platform. AWS is one of the largest providers of cloud services; in 2021, the company generated $61.1 billion in revenue worldwide. AWS provides a commercial Kubernetes service, called the Amazon Elastic Kubernetes Service (EKS). It simplifies the Kubernetes experience by providing a managed control plane and worker nodes. In addition to providing a commercial Kubernetes service, AWS supports the development of Kubernetes by dedicating engineers to work on the open source project. "It's a responsibility of all of the engineers in the service team to be aware of what's going on in the upstream community, to be contributing to that upstream community, and making it succeed," Pipes said. "If the upstream open source projects upon which we depend are suffering or not doing well, then our service is not going to do well. And by the same token, if we can help that upstream project or projects to be successful, that means our service is going to be more successful."
What is Kubernetes in AWS?
In addition to EKS, AWS also has a number of other tools to help Kubernetes users. One is Karpenter, an open source, flexible, high-performance Kubernetes cluster autoscaler built with AWS. Karpenter provides more fine-grained scaling capabilities compared with Kubernetes' built-in Cluster Autoscaler, Pipes said. Instead of using the Cluster Autoscaler, Karpenter uses AWS' own Fleet API, which offers superior scheduling capabilities. Another tool for Kubernetes users is cdk8s, an open source software development framework for defining Kubernetes applications and reusable abstractions using familiar programming languages and rich object-oriented APIs. It is similar to the AWS Cloud Development Kit (CDK), which helps users deploy applications using AWS CloudFormation, but instead of the output being a CloudFormation template, the output is a YAML manifest that can be understood by Kubernetes (a stripped-down illustration of that idea appears below).
AWS and Kubernetes
In addition to providing open source development help to Kubernetes, AWS has offered to help defray the considerable expense of hosting the Kubernetes development and deployment process. Currently, the Kubernetes upstream build process is hosted on the Google Cloud Platform, and the artifact registry is hosted in Google's container registry, totaling about 1.5TB of storage. AWS alone was paying $90,000 to $100,000 a month in egress costs just to have the Kubernetes code on AWS-hosted infrastructure, Pipes said. AWS has been working on a mirror of the Kubernetes assets that would reside on the company's own cloud servers, thereby eliminating the Google egress costs typically borne by the Cloud Native Computing Foundation. "By doing that we completely eliminate the egress costs out of Google data centers and into AWS data centers," Pipes said. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
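Here is a stripped-down illustration of the idea Pipes describes behind cdk8s: define a Kubernetes object in a general-purpose language and synthesize a YAML manifest the cluster understands. This sketch uses plain Python and PyYAML rather than the actual cdk8s API, and the deployment name and image are hypothetical.

```python
# Not the cdk8s API -- just the underlying idea: describe a Kubernetes object in code,
# then emit a YAML manifest that kubectl (or a GitOps pipeline) can apply.
import yaml  # pip install pyyaml

def deployment(name: str, image: str, replicas: int = 2) -> dict:
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# The "synthesized" output: a manifest you could hand to `kubectl apply -f -`.
print(yaml.safe_dump(deployment("web", "nginx:1.25"), sort_keys=False))
```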

Nov 16, 2022 • 13min
Case Study: How SeatGeek Adopted HashiCorp’s Nomad
LOS ANGELES — Kubernetes, the open source container orchestrator, may have a big footprint in the cloud native world, but some organizations are doing just fine without it. Take, for example, SeatGeek, which runs a mobile application that serves as a primary and secondary market for event tickets. For cloud infrastructure, the 12-year-old company’s workloads — which include non-containerized applications — have largely run on Amazon Web Services. A few years ago, it turned to HashiCorp’s Nomad, a scheduler built for running apps whether they’re containerized or not. “In the beginning, we had a platform that an engineer would deploy something to, but it was very constrained. We could only give them a certain number of options that they could use, a very static experience,” said Jose Diaz-Gonzalez, a staff engineer at SeatGeek, in this episode of The New Stack Makers podcast. “If they wanted to scale an application, it required manual toil on the platform team side before they could do some work. And so for us, we wanted to expose more of the platform to engineers and allow them to have more control over what it is that they were shipping, how that runtime environment was executed, and how they scale their applications.” This On the Road episode of Makers, recorded here during HashiConf, HashiCorp’s annual user conference, featured a case study of SeatGeek’s adoption of Nomad and the HashiCorp Cloud Platform. The conversation was hosted by Heather Joslyn, features editor of TNS. This episode was sponsored by HashiCorp.
Nomad vs. Kubernetes: Trade-Offs
SeatGeek essentially runs the back office for ticket sales for its partners, including Broadway productions and NFL teams like the Dallas Cowboys, providing them with “something like a software as a service,” said Diaz-Gonzalez. “All of those installations, they're single tenant, but they run roughly the same way for every single customer. And then on the consumer side we run a ton of different services and microservices and that sort of thing.” Though the workloads run in different languages or on different frameworks, he said, they are essentially homogeneous in their deployment patterns; SeatGeek deploys to Windows and Linux containers on the enterprise side and to Linux on the consumer side, in both U.S. and European Union regions. It began using Nomad to give developers more control over their applications; previously, the deployment experience had been very constrained, Diaz-Gonzalez said, requiring manual toil from the platform team whenever an application needed to scale. Now, he said, SeatGeek uses Nomad “to provide basically the entire orchestration layer for our deployments.” Forgoing Kubernetes (K8s) does have its drawbacks. The cloud native ecosystem is largely built around products meant to run with K8s, rather than Nomad. The ecosystem built around HashiCorp’s product is “a much smaller community. If we need support, we lean heavily on HashiCorp Enterprise. And we're willing, on the support team, to answer questions. 
But if we need support on making some particular change, or using some certain feature, we might be one of the few people starting to use that feature.” “That said, it's much easier for us to manage and support Nomad and its integration with the rest of our platform, because it's so simple to run.” To learn more about SeatGeek’s cloud journey and the challenges it faced — such as dealing with security and policy — check out the full episode. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
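For readers who have not used Nomad, here is a hedged sketch of what handing a job to that orchestration layer can look like, submitted through the scheduler's HTTP API rather than the usual HCL job file. It assumes a Nomad agent at localhost:4646, and the job name and container image are hypothetical.

```python
# A hedged sketch of registering a job with a local Nomad agent over its HTTP API
# (assumes Nomad at localhost:4646; job name and image are hypothetical). The JSON
# payload mirrors what a `job` block in HCL would declare.
import requests

job = {
    "Job": {
        "ID": "checkout-api",
        "Name": "checkout-api",
        "Datacenters": ["dc1"],
        "Type": "service",
        "TaskGroups": [
            {
                "Name": "web",
                "Count": 2,
                "Tasks": [
                    {
                        "Name": "server",
                        "Driver": "docker",  # Nomad also runs non-container workloads via other drivers
                        "Config": {"image": "registry.example.com/checkout-api:1.4.2"},
                    }
                ],
            }
        ],
    }
}

resp = requests.put("http://localhost:4646/v1/jobs", json=job)
resp.raise_for_status()
print(resp.json())  # Nomad responds with the evaluation it created for the registration
```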


