

The New Stack Podcast
The New Stack
The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software.
For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack
Episodes

Nov 30, 2022 • 16min
ML Can Prevent Getting Burned For Kubernetes Provisioning
In the rush to create, provision and manage Kubernetes, proper resource provisioning is often left out. According to StormForge, a company spending, for example, $1 million a month on cloud computing resources is likely wasting $6 million a year on provisioned Kubernetes resources that sit unused. The reasons vary, but they include DevOps teams estimating too conservatively or too aggressively, and simply overspending on resource provisioning. In this podcast with StormForge’s Yasmin Rajabi, vice president of product management, and Patrick Bergstrom, chief technology officer, we look at how to properly provision Kubernetes resources and the challenges involved. The podcast was recorded live in Detroit during KubeCon + CloudNativeCon North America 2022.

Almost ironically, the most commonly used Kubernetes resources can even complicate the ability to optimize resources for applications. The process typically involves setting Kubernetes resource requests and limits, and predicting how those choices might affect quality of service for pods. Developers deploying an application on Kubernetes often need to set CPU requests, memory requests and other resource limits. “They are usually like, ‘I don't know — whatever was there before or whatever the default is,’” Rajabi said. “They are in the dark.”

Sometimes developers consult their favorite observability tool and “look where the max is, and then take a guess,” Rajabi said. “The challenge is, if you start from there when you start to scale that out — especially for organizations that are using horizontal scaling with Kubernetes — is that then you're taking that problem and you're just amplifying it everywhere. And so, when you've hit that complexity at scale, taking a second to look back and say, ‘how do we fix this?’ — you don't want to just arbitrarily go reduce resources, because you have to look at the trade-off of how that impacts your reliability.”

The process then becomes very hit or miss. “That's where it becomes really complex, when there are so many settings across all those environments, all those namespaces,” Rajabi said. “It's almost a problem that can only be solved by machine learning, which makes it very interesting.”

But until organizations automate the optimization of their Kubernetes deployments, many learn the lesson the hard way, letting resources and money go to waste. “It's one of those things that becomes a bigger and bigger challenge, the more you grow as an organization,” Bergstrom said. Many StormForge customers are deploying into thousands of namespaces and thousands of workloads. “You are suddenly trying to manage each workload individually to make sure it has the resources and the memory that it needs,” Bergstrom said. “It becomes a bigger and bigger challenge.”

The process should actually be pain-free when ML is properly implemented. Through StormForge’s partnership with Datadog, it is possible to apply ML to historical data, Bergstrom explained. “Then, within just hours of us deploying our algorithm into your environment, we have machine learning that's used two to three weeks’ worth of data to train, that can then automatically set the correct resources for your application. This is because we know what the application is actually using,” Bergstrom said. “We can predict the patterns and we know what it needs in order to be successful.”
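To make the guesswork concrete, here is a minimal sketch in Go of the per-container requests and limits Rajabi describes developers setting in the dark. It uses the standard Kubernetes API types; the numbers are illustrative assumptions, not recommendations, and right-sizing tools like StormForge’s aim to replace exactly these guesses with learned values.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// The request/limit pairs developers typically guess at when deploying
	// to Kubernetes. Requests drive scheduling decisions; limits cap usage
	// and influence the pod's quality-of-service class.
	guess := corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("500m"),  // illustrative guess
			corev1.ResourceMemory: resource.MustParse("256Mi"), // illustrative guess
		},
		Limits: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("1"),
			corev1.ResourceMemory: resource.MustParse("512Mi"),
		},
	}
	fmt.Printf("requests: cpu=%s mem=%s\n", guess.Requests.Cpu(), guess.Requests.Memory())
	fmt.Printf("limits:   cpu=%s mem=%s\n", guess.Limits.Cpu(), guess.Limits.Memory())
}
```

Horizontal scaling multiplies whatever numbers sit in that struct across every replica, which is how a small per-pod overestimate becomes the cluster-wide waste described above.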

Nov 29, 2022 • 27min
What’s the Future of Feature Management?
Feature management isn’t a new idea, but lately the trend has picked up speed. Analysts like Forrester and Gartner have cited adoption of the practice as being, respectively, “hot” and “the dominant approach to experimentation in software engineering.” A study released in November, sponsored by LaunchDarkly, the feature management platform, and conducted by Wakefield Research, found that 60% of the 1,000 software and IT professionals surveyed started using feature flags only in the past year.

At the heart of feature management are feature flags, which give organizations the ability to turn features on and off without having to redeploy an entire app. Feature flags allow organizations to test new features and control things like access to premium versions of a customer-facing service. An overall feature management practice that includes feature flags allows organizations “to release progressively any new feature to any segment of users, any environment, any cohort of customers in a controlled manner that really reduces the risk of each release,” said Ravi Tharisayi, senior director of product marketing at LaunchDarkly, in this episode of The New Stack Makers podcast. Tharisayi talked to The New Stack’s features editor, Heather Joslyn, about the future of feature management on the eve of the company’s latest Trajectory user conference. This episode of Makers was sponsored by LaunchDarkly.

Streamlining Management, Saving Money

The participants in the new survey worked at companies of at least 200 employees, and nearly all of those who use feature flags (98%) said they believe the flags save their organizations money and demonstrate a return on investment. Furthermore, 70% said that their company views feature management as either a mission-critical or a high-priority investment.

Fielding the annual survey, Tharisayi said, has offered a window into how organizations are using feature flags. Fifty-five percent of customers in the 2022 survey said they use feature flags as long-term operational controls: for API rate limiting, for instance, to prioritize certain API calls in high-traffic situations. The second most common use, cited by 47% of users, was entitlements: “managing access to different types of plans, premium plans versus other plans, for example,” Tharisayi said. “This is really a powerful capability because of this ability to allow product managers or other personas to manage who has access to certain features, to certain plans, without having to have developers be involved,” he said. “Previously, that required a lot of developer involvement.”

Experimentation, Metrics, Cultural Shifts

LaunchDarkly, Tharisayi said, has been investing in and improving its platform’s experimentation and measurement capabilities: “At the core of that is this notion that experimentation can be a lot more successful when it's tightly integrated to the developer workflow.” As an example, he pointed to CCP Games, maker of the gaming platform EVE Online, which serves millions of players. “They were recently thinking through how to evolve their recommendation engine, because they wanted this engine to recommend actions for their gamers that will hopefully increase their ultimate North Star metric,” the amount of time gamers spend with their games. By using LaunchDarkly’s platform, CCP was able to run A/B tests and increase gamers’ session lengths and engagement.

“So that's the kind of capability that we think is going to be an increasing priority,” Tharisayi said. As feature management matures and standardizes, he pointed to the adoption of DevOps as a model and a cautionary tale. “When it comes to cultural shifts, like DevOps or feature management, that require teams to work in a different way, oftentimes there can be early success with a small team,” Tharisayi said. “But then there can be some cultural and process barriers as you're trying to standardize to the team level and multi-team level, before figuring out the kinks in deploying it at an organization-wide level.”

He added: “That's one of the trends that we observed a little bit in this survey, is that there are some cultural elements to getting success at scale, with something like feature management, and the opportunity as an industry to support organizations as they're making that quest to standardize a practice like this, like any other cultural practice.”

Check out the full episode for more on the survey and on what’s next for feature management.
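To illustrate the core mechanism, here is a small, self-contained Go sketch of a boolean flag gating a code path without a redeploy. The FlagStore type is a hypothetical stand-in for a feature management SDK client such as LaunchDarkly’s, not the vendor’s actual API.

```go
package main

import "fmt"

// FlagStore is a hypothetical stand-in for a feature management SDK client.
type FlagStore map[string]bool

// BoolVariation returns the flag's value, falling back to a default when
// the flag is unknown, mirroring how real feature-flag SDKs behave.
func (f FlagStore) BoolVariation(key string, def bool) bool {
	if v, ok := f[key]; ok {
		return v
	}
	return def
}

func main() {
	// In production these values come from the management platform,
	// so flipping a flag changes behavior without a redeploy.
	flags := FlagStore{"premium-checkout": true}

	if flags.BoolVariation("premium-checkout", false) {
		fmt.Println("serving the new premium checkout flow")
	} else {
		fmt.Println("serving the existing checkout flow")
	}
}
```

The entitlements use case described above is the same pattern: the flag is evaluated per user or per plan, so a product manager can change who sees what without developer involvement.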

Nov 23, 2022 • 16min
Chronosphere Nudges Observability Standards Toward Maturity
DETROIT — Rob Skillington’s grandfather was a civil engineer, working in an industry that, over more than a century, developed processes and know-how that enabled the creation of buildings, bridges and roads. “A lot of those processes matured to a point where they could reliably build these things,” said Skillington, co-founder and chief technology officer at Chronosphere, an observability platform. “And I think about observability as that same maturity of engineering practice. When it comes to building software that actually is useful in the world, it is this process that helps you actually achieve the deployment and operation of these large-scale systems that we use every day.” Skillington spoke about the evolution of observability, and his company’s recent donation of an open source project to Prometheus, in this episode of The New Stack Makers podcast. Heather Joslyn, features editor of TNS, hosted the conversation. This On the Road edition of The New Stack Makers was recorded at KubeCon + CloudNativeCon North America, in the Motor City. The episode was sponsored by Chronosphere.

A Donation to the Prometheus Project

Helping observability practices grow as mature and reliable as the civil engineering rules that help build sturdy skyscrapers is a tough task, Skillington suggested. In the cloud era, he said, “you have to really prepare the software for a whole set of runtime environments. And so the challenge around that is really about making it consistent, well understood and robust.” At KubeCon in late October, Chronosphere and PromLabs (founded by Julius Volz, creator of Prometheus) announced that they had donated their open source project PromLens to the Prometheus project, the open source monitoring and alerting system. The donation is a way of placing a bet on a tool that integrates well with Kubernetes. “There's this real yearning for essentially a standard that can be built upon by everyone in the industry, when it comes to these core primitives,” Skillington said. “And Prometheus is one of those primitives. We want to continue to solidify that as a primitive that stands the test of time.” “We can't build a self-driving car if we're always building a different car,” he added. PromLens builds Prometheus queries in a sort of integrated development environment (IDE), Skillington said. It also makes it easier for more people in an organization to create queries and understand the meaning and seriousness of alerts. The PromLens tool breaks queries into a visual format and allows users to edit them through a UI. “Basically, it's kind of like a What You See Is What You Get editor, or WYSIWYG editor, for Prometheus queries,” Skillington said. “Some of our customers have tens of thousands of these alerts defined in PromQL, which is the query language for Prometheus,” he noted. “Having a tool like an integrated development environment — where you can really understand these complex queries and iterate faster on setting these up and getting back to your day job — is incredibly important.” Check out the full episode for more on PromLens and the current state of observability.
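For a sense of what PromLens helps untangle, here is a sketch using the official Prometheus Go client to run the kind of nested PromQL expression the tool renders as a visual tree. The endpoint, metric name and labels are assumptions for illustration.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	promv1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	// Hypothetical Prometheus endpoint; adjust for your environment.
	client, err := api.NewClient(api.Config{Address: "http://localhost:9090"})
	if err != nil {
		panic(err)
	}
	promAPI := promv1.NewAPI(client)

	// A nested expression of the kind PromLens decomposes node by node:
	// the per-service error ratio over the last five minutes.
	query := `sum by (service) (rate(http_requests_total{code=~"5.."}[5m]))
	          / sum by (service) (rate(http_requests_total[5m]))`

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	result, warnings, err := promAPI.Query(ctx, query, time.Now())
	if err != nil {
		panic(err)
	}
	if len(warnings) > 0 {
		fmt.Println("warnings:", warnings)
	}
	fmt.Println(result)
}
```

A WYSIWYG editor like PromLens shows what each subexpression (each rate, each sum, the division) returns on its own, which is what makes queries like this reviewable by people who did not write them.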

Nov 23, 2022 • 12min
How Boeing Uses Cloud Native
In this latest podcast from The New Stack, we spoke with Ricardo Torres, chief engineer of open source and cloud native for aerospace giant Boeing. Torres also joined the board of the Cloud Native Computing Foundation in May. In this interview, recorded at KubeCon+CloudNativeCon last month, Torres speaks about Boeing's use of open source software, as well as its adoption of cloud native technologies. While we may think of Boeing as an airplane manufacturer, it would be more accurate to think of the company as a large-scale systems integrator, one that uses a lot of software. So, like other large-scale companies, Boeing sees a distinct advantage in maintaining good relations with the open source community. "Being able to leverage the best technologists out there in the rest of the world is of great value to us strategically," Torres said. This strategy allows Boeing to "differentiate on what we do as our core business rather than having to reinvent the wheel all the time on all of the technology." Like many other large companies, Boeing has created an open source office to better work with the open source community. Although Boeing is primarily a consumer of open source software, it still wants to work with the community. "We want to make sure that we have a strategy around how we contribute back to the open source community, and then leverage those learnings for inner sourcing," he said. Boeing also manages how it uses open source internally, keeping tight controls on its open source software supply chain. "As part of the software engineering organization, we partner with our internal IT organization to look at our internet traffic and assure nobody's going out and downloading directly from an untrusted repository or registry. Instead, we host approved sources internally." It's not surprising that Boeing, which deals with a lot of government agencies, embraces the practice of using software bills of materials (SBOMs), which provide a full listing of the components used in a software system. In fact, the company has been working to extend the comprehensiveness of SBOMs, according to Torres. "I think one of the interesting things now is the automation," he said of SBOMs. "And so we're always looking to beef up the heuristics, because a lot of the tools are relatively naïve, in that they trust that the dependencies that are specified are actually representative of everything that's delivered. And that's not good enough for a company like Boeing. We have to be absolutely certain that what's there is exactly what we expected to be there."

Cloud Native Computing

While Boeing builds many systems that reside in private data centers, the company is also increasingly relying on the cloud. Earlier this year, Boeing signed agreements with the three largest cloud service providers (CSPs): Amazon Web Services, Microsoft Azure and Google Cloud Platform. "A lot of our cloud presence is about our development environments. And so, you know, we have cloud-based software factories that are using a number of CNCF and CNCF-adjacent technologies to enable our developers to move fast," Torres said.
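The heuristic gap Torres describes, trusting a declared dependency list instead of verifying what was actually delivered, can be sketched in a few lines of Go. This is an illustrative toy, not Boeing's tooling: the component name and hash are made up, and real SBOM checkers parse SPDX or CycloneDX documents rather than a hard-coded map.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// declared stands in for hashes parsed from an SBOM
// (component file -> expected SHA-256); both entries are made up.
var declared = map[string]string{
	"libfoo.so": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

// hashFile returns the hex SHA-256 of a file on disk.
func hashFile(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	// Naïve tools stop at the declared list; verifying the delivered
	// artifacts against it is the extra step Torres calls for.
	for name, want := range declared {
		got, err := hashFile(name)
		switch {
		case err != nil:
			fmt.Printf("%s: missing from delivery (%v)\n", name, err)
		case got != want:
			fmt.Printf("%s: hash mismatch\n", name)
		default:
			fmt.Printf("%s: verified\n", name)
		}
	}
}
```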

Nov 22, 2022 • 14min
Case Study: How Dell Technologies Is Building a DevRel Team
DETROIT — Developer relations, or DevRel to its friends, is not only a coveted career path but also essential to helping developers learn and adopt new technologies. That guidance is a matter of survival for many organizations: the cloud native era demands new skills and new ways of thinking about developers’ and engineers’ day-to-day jobs. At Dell Technologies, it meant responding to the challenges faced by its existing customer base, which is “very Ops centric — server admins, system admins,” according to Brad Maltz of Dell. With the rise of the DevOps movement, “what we realized is our end users have been trying to figure out how to become infrastructure developers,” said Maltz, the company’s senior director of DevOps portfolio and DevRel. “They've been trying to figure out how to use infrastructure as code, Kubernetes, cloud, all those things. And what that means is we need to be able to speak to them where they want to go, when they want to become those developers. That’s led us to build out a developer relations program ... and in doing that, we need to grow out the community, and really help our end users get to where they want to.” In this episode of The New Stack’s Makers podcast, Maltz spoke to Heather Joslyn, TNS features editor, about how Dell has, since August, been busy creating a DevRel team to aid its enterprise customers seeking to adopt DevOps as a way of doing business. This On the Road edition of Makers, recorded at KubeCon + CloudNativeCon North America in the Motor City, was sponsored by Dell Technologies.

Recruiting Influencers

Maltz, an eight-year veteran of Dell, has moved quickly in assembling his team, with three hires made by late October and a fourth planned before year’s end. That’s lightning fast, especially for a large, established company like Dell, which was founded in 1984. “There's two ways of building a DevRel team,” he said. “One way is to actually kind of go and try to homegrow people on the inside and get them more presence in the community. That's the slower road. But we decided we have to go and find industry influencers that believe in our cause, that believe in the problem space that we live in. And that's really how we started this: we went out to find some very, very strong top talent in the industry and bring them on board.” In addition to spreading the DevOps solutions gospel at conferences like KubeCon, Maltz’s vision for the team currently focuses on social media and building out a website, developer.dell.com, which will serve as the landing page for the company’s DevRel knowledge, including links to community, training, how-to videos and an API marketplace. In building the team, the company made an unorthodox choice. “We decided to put DevRel into product management on the product side, not marketing,” Maltz said. “The reason we did that was we want the DevRel folks to really focus on community contributions, education, all that stuff. But while they're doing that, their job is to bring the data back from those discussions they're having in the field back to product management, to enable our tooling to be able to satisfy some of those problems that they're bringing back, so we can start going full circle.”

Facing the Limits of ‘Shift Left’

The roles that Dell’s DevRel team is focusing on in the DevOps culture are site reliability engineers (SREs) and platform engineers. These not only align with its traditional audience of Ops engineers but reflect a reality Dell is seeing in the wider tech world. “The reality is, application developers don't want to shift left. They don't want to operate; they want somebody else to take it, and they want to keep developing,” Maltz said. “Where DevOps has transitioned for us is, how do we help those people that are kind of that operator turning into infrastructure developer fit into that DevOps culture?” The rise of platform engineering, he suggested, is a reaction to the endless choices of tools available to developers these days. “The notion is developers in the wild are able to use any tool on any cloud with any language, and they can do whatever they want. That's hard to support,” he said. “That's where DevOps got introduced, and was to basically say, ‘Hey, we're gonna put you into a little bit of a box, just enough of a box that we can start to gain control and get ahead of the game.’ The platform engineering team, in this case, they're the ones in charge of that box.” But all of that, Maltz said, doesn’t mean that “shift left” (giving devs greater responsibility for their applications) is dead. It simply means most organizations aren’t ready for it yet: “That will take a few more years of maturity within these DevOps operating models, and other things that are coming down the road.” Check out the full episode for more from Maltz, including new solutions from Dell aimed at platform engineers and SREs, and collaborations with Red Hat OpenShift.

Nov 17, 2022 • 31min
Kubernetes and Amazon Web Services
Cloud giant Amazon Web Services manages the largest number of Kubernetes clusters in the world, according to the company. In this podcast recording, AWS Senior Engineer Jay Pipes discusses AWS' use of Kubernetes, as well as the company's contributions to the Kubernetes code base. The interview was recorded at KubeCon North America last month.

The Difference Between Kubernetes and AWS

Kubernetes is an open source container orchestration platform. AWS is one of the largest providers of cloud services; in 2021, the company generated $61.1 billion in revenue worldwide. AWS provides a commercial Kubernetes service, called Amazon Elastic Kubernetes Service (EKS), which simplifies the Kubernetes experience by managing the control plane and worker nodes. In addition to providing a commercial Kubernetes service, AWS supports the development of Kubernetes by dedicating engineers to work on the open source project. "It's a responsibility of all of the engineers in the service team to be aware of what's going on in the upstream community, to be contributing to that upstream community, and making it succeed," Pipes said. "If the upstream open source projects upon which we depend are suffering or not doing well, then our service is not going to do well. And by the same token, if we can help that upstream project or projects to be successful, that means our service is going to be more successful."

What is Kubernetes in AWS?

In addition to EKS, AWS has a number of other tools to help Kubernetes users. One is Karpenter, an open source, flexible, high-performance Kubernetes cluster autoscaler built with AWS. Karpenter provides more fine-grained scaling capabilities than Kubernetes' built-in Cluster Autoscaler, Pipes said. Instead of using the Cluster Autoscaler, Karpenter uses AWS' own Fleet API, which offers superior scheduling capabilities. Another tool for Kubernetes users is cdk8s, an open source software development framework for defining Kubernetes applications and reusable abstractions using familiar programming languages and rich object-oriented APIs. It is similar to the AWS Cloud Development Kit (CDK), which helps users deploy applications using AWS CloudFormation, but instead of producing a CloudFormation template, the output is a YAML manifest that Kubernetes can understand.

AWS and Kubernetes

In addition to contributing development work to Kubernetes, AWS has offered to help defray the considerable expense of hosting the Kubernetes development and deployment process. Currently, the Kubernetes upstream build process is hosted on Google Cloud Platform, and the artifact registry is hosted in Google's container registry, totaling about 1.5TB of storage. AWS alone was paying $90,000 to $100,000 a month in egress costs just to bring Kubernetes code onto AWS-hosted infrastructure, Pipes said. AWS has been working on a mirror of the Kubernetes assets that would reside on the company's own cloud servers, thereby eliminating the Google egress costs typically borne by the Cloud Native Computing Foundation. "By doing that, we completely eliminate the egress costs out of Google data centers and into AWS data centers," Pipes said.
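To show the "code in, manifest out" idea behind cdk8s without leaning on its exact API, here is a sketch that defines a Deployment with the standard Kubernetes Go types and prints the resulting YAML manifest. The names and image are illustrative; cdk8s itself layers reusable abstractions on top of this same flow.

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	replicas := int32(2)
	// Define the application in ordinary Go...
	dep := appsv1.Deployment{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
		ObjectMeta: metav1.ObjectMeta{Name: "web"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "web"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "web"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "web", Image: "nginx:1.25"}},
				},
			},
		},
	}
	// ...then emit the YAML manifest Kubernetes understands.
	out, err := yaml.Marshal(dep)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```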

Nov 16, 2022 • 13min
Case Study: How SeatGeek Adopted HashiCorp’s Nomad
LOS ANGELES — Kubernetes, the open source container orchestrator, may have a big footprint in the cloud native world, but some organizations are doing just fine without it. Take, for example, SeatGeek, which runs a mobile application that serves as a primary and secondary market for event tickets. For cloud infrastructure, the 12-year-old company’s workloads — which include non-containerized applications — have largely run on Amazon Web Services. A few years ago, it turned to HashiCorp’s Nomad, a scheduler built for running apps whether they’re containerized or not. “In the beginning, we had a platform that an engineer would deploy something to, but it was very constrained. We could only give them a certain number of options that they could use. It was a very static experience,” said Jose Diaz-Gonzalez, a staff engineer at SeatGeek, in this episode of The New Stack Makers podcast. “If they wanted to scale an application, that required manual toil on the platform team side before they could do some work. And so for us, we wanted to expose more of the platform to engineers and allow them to have more control over what it is that they were shipping, how that runtime environment was executed, and how they scale their applications.” This On the Road episode of Makers, recorded here during HashiConf, HashiCorp’s annual user conference, featured a case study of SeatGeek’s adoption of Nomad and the HashiCorp Cloud Platform. The conversation was hosted by Heather Joslyn, features editor of TNS. This episode was sponsored by HashiCorp.

Nomad vs. Kubernetes: Trade-Offs

SeatGeek essentially runs the back office for ticket sales for its partners, including Broadway productions and NFL teams like the Dallas Cowboys, providing them with “something like a software as a service,” said Diaz-Gonzalez. “All of those installations, they're single tenant, but they run roughly the same way for every single customer. And then on the consumer side, we run a ton of different services and microservices and that sort of thing.” Though the workloads run in different languages or on different frameworks, he said, they are essentially homogeneous in their deployment patterns; SeatGeek deploys to Windows and Linux containers on the enterprise side and to Linux on the consumer side, in both the U.S. and European Union regions. SeatGeek began using Nomad to give developers more control over their applications; previously, the deployment experience had been constrained and static, with scaling requiring manual work from the platform team. Now, Diaz-Gonzalez said, SeatGeek uses Nomad “to provide basically the entire orchestration layer for our deployments.” Forgoing Kubernetes (K8s) does have its drawbacks. The cloud native ecosystem is largely built around products meant to run with K8s rather than Nomad. The ecosystem built around HashiCorp’s product is “a much smaller community. If we need support, we lean heavily on HashiCorp Enterprise, and they're willing, on the support team, to answer questions. But if we need support on making some particular change, or using some certain feature, we might be one of the few people starting to use that feature.” “That said, it's much easier for us to manage and support Nomad and its integration with the rest of our platform, because it's so simple to run.” To learn more about SeatGeek’s cloud journey and the challenges it faced — such as dealing with security and policy — check out the full episode.
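For a flavor of what driving Nomad programmatically looks like, here is a minimal sketch using HashiCorp's Nomad Go API to list registered jobs and their statuses. It assumes a reachable Nomad agent (via the usual NOMAD_ADDR settings) and is a sketch, not SeatGeek's platform code.

```go
package main

import (
	"fmt"
	"log"

	nomad "github.com/hashicorp/nomad/api"
)

func main() {
	// DefaultConfig honors NOMAD_ADDR and related environment variables.
	client, err := nomad.NewClient(nomad.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// List every registered job: containerized or not, Windows or Linux,
	// Nomad schedules them through the same API.
	jobs, _, err := client.Jobs().List(nil)
	if err != nil {
		log.Fatal(err)
	}
	for _, j := range jobs {
		fmt.Printf("%s\t%s\n", j.ID, j.Status)
	}
}
```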

Nov 15, 2022 • 18min
OpenTelemetry Properly Explained and Demoed
The OpenTelemetry project offers vendor-neutral integration points that help organizations obtain the raw materials — the "telemetry" — that fuel modern observability tools, with minimal effort at integration time. But what does OpenTelemetry mean for those who use their favorite observability tools but don’t exactly understand how it can help them? How might OpenTelemetry be relevant to folks who are new to Kubernetes (the majority of KubeCon attendees in recent years) and those who are just getting started with observability? Austin Parker, head of developer relations at Lightstep, and Morgan McLean, director of product management at Splunk, discuss in this podcast from KubeCon + CloudNativeCon 2022 how the OpenTelemetry project has created demo services to help cloud native community members better understand cloud native development practices and test out OpenTelemetry, as well as Kubernetes, observability software and more. At this juncture in DevOps history, there has been considerable hype around observability for developers and operations teams, and more recently much attention has been given to combining the different observability solutions in use through a single interface; to that end, OpenTelemetry has emerged as a key standard. DevOps teams need OpenTelemetry because they typically work with many different data sources for observability, Parker said. “If you want observability, you need to transform and send that data out to any number of open source or commercial solutions, and you need a lingua franca to be consistent. Every time I have a host, or an IP address, or any kind of metadata, consistency is key, and that's what OpenTelemetry provides.” Additionally, OpenTelemetry serves to instrument your system for observability, whether you are a developer or an operator, McLean said. “OpenTelemetry does that through the power of the community working together to define those standards and to provide the components needed to extract that data among hundreds of thousands of different combinations of software and hardware and infrastructure that people are using,” McLean said. Observability and OpenTelemetry, while conceptually straightforward, do involve a learning curve. To that end, the OpenTelemetry project has released a demo to help. It is intended both to help users better understand cloud native development practices and to let them test out OpenTelemetry, as well as Kubernetes and observability software, the project’s creators say. The OpenTelemetry Demo v1.0 general release is available on GitHub and on the OpenTelemetry site. The demo helps with learning how to add instrumentation to an application to gather metrics, logs and traces for observability. There is extensive instruction for open source projects like Prometheus for Kubernetes monitoring and Jaeger for distributed tracing, and the demo shows how to get acquainted with tools such as Grafana for creating dashboards. The demo also extends to scenarios in which failures are created and OpenTelemetry data is used for troubleshooting and remediation. Designed for beginner- or intermediate-level users, it can be set up to run on Docker or Kubernetes in about five minutes. “The demo is a great way for people to get started,” Parker said. “We've also seen a lot of great uptake from our commercial partners as well, who have said, ‘We'll use this to demo our platform.’”
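For a taste of what "adding instrumentation" looks like in practice, here is a minimal sketch using the OpenTelemetry Go SDK: it creates a tracer that exports spans to stdout, and swapping in an OTLP exporter is how the same code would feed Jaeger or a commercial backend. The service and span names are illustrative.

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	// Exporter that writes spans to stdout; an OTLP exporter would send
	// the same spans to a collector and on to your backend of choice.
	exporter, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
	if err != nil {
		log.Fatal(err)
	}
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
	defer func() { _ = tp.Shutdown(context.Background()) }()
	otel.SetTracerProvider(tp)

	// Instrument a unit of work: a parent span with one child span.
	tracer := otel.Tracer("demo-service")
	ctx, parent := tracer.Start(context.Background(), "checkout")
	_, child := tracer.Start(ctx, "charge-card")
	child.End()
	parent.End()
}
```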

Nov 10, 2022 • 16min
The Latest Milestones on WebAssembly's Road to Maturity
DETROIT — Even in the midst of hand-wringing at KubeCon + CloudNativeCon North America about how the global economy will make it tough for startups to gain support in the near future, the news about a couple of young WebAssembly-centric companies was bright. Cosmonic announced that it had raised $8.5 million in a seed round led by Vertex Ventures. And Fermyon Technologies unveiled both funding and product news: a $20 million Series A led by Insight Partners (which also owns The New Stack) and the launch of Fermyon Cloud, a hosted platform for running WebAssembly (Wasm) microservices. Both Cosmonic and Fermyon were founded in 2021. “A lot of people think that Wasm is this maybe up-and-coming thing, or it's just totally new thing that's out there in the future,” noted Bailey Hayes, a director at Cosmonic, in this episode of The New Stack Makers podcast. But the future is already here, she said: “It's one of technology's best-kept secrets, because you're using it today, all over. And many of the applications that we use day to day — Zoom, Google Meet, Prime Video — I mean, it really is everywhere. The thing that's going to change for developers is that this will be their compilation target in their build file.” In this On the Road episode of Makers, recorded at KubeCon here in the Motor City, Hayes and Kate Goldenring, a software engineer at Fermyon, spoke to Heather Joslyn, TNS’ features editor, about the state of WebAssembly. This episode was sponsored by the Cloud Native Computing Foundation (CNCF).

Wasm and Docker, Java, Python

WebAssembly, the roughly five-year-old binary instruction format for a stack-based virtual machine, is designed to execute binary code on the web and lets developers bring the performance of languages like C, C++ and Rust to web development. At Wasm Day, a co-located event that preceded KubeCon, support for a number of other languages — including Java, .NET, Python and PHP — was announced. At the same event, Docker revealed that it has added Wasm as a runtime that developers can target; that feature is now in beta. Such steps move WebAssembly closer to fulfilling its promise to devs that they can “build once, run anywhere.” “With Wasm, developers shouldn't need to know necessarily that it's their compilation target,” said Hayes. But, she added, “what you do know is that you're now able to move that Wasm module anywhere in any cloud. The same one that you built on your desktop that might be on Windows can go and run on an ARM Linux server.” Goldenring pointed to the findings of the CNCF’s “mini survey” of WebAssembly users, released at Wasm Day, as evidence that the technology’s use cases are proliferating quickly. “Even though WebAssembly was made for the web, the number one response — it was a little over 60% — said serverless,” she noted. “And then it said the edge, and then it said web development, and then it said IoT, and the use cases just keep going. And that's because it is this incredibly powerful, portable target that you can put in all these different use cases. It's secure, it has instant startup time.”

Worlds and Warg Craft

The podcast guests talked about recent efforts to make it easier to use Wasm, and to share and reuse code, including the development of the component model, which proponents hope will simplify how WebAssembly works outside the browser. Goldenring and Hayes discussed efforts now under construction, including “world” files and Warg, a package registry for WebAssembly. (Hayes co-presented at Wasm Day on the work being done on WebAssembly package management, including Warg.) A world file, Hayes said, is a way of defining your environment: “One way to think of it is like .profile, but for Wasm, for a component. And so it tells me what types of capabilities I need for my Wasm module to run successfully, and the runtime can read that and give me the right stuff.” And as for Warg, Hayes said: “It's really a protocol and a set of APIs, so that we can slot it into existing ecosystems. A lot of people think of it as us trying to pave over existing technologies. And that's really not the case. The purpose of Warg is to be able to slot right in, so that you continue working in your current developer environment and experience, using the packages that you're used to, but get all of the advantages of the component model, which is this new specification we've been working on” at the W3C’s WebAssembly Working Group. Goldenring added another finding from the CNCF survey: “Around 30% of people wanted better code reuse. That's a sign of a more mature ecosystem. So having something like Warg is going to help everyone who's involved in the server side of the WebAssembly space.” Listen to the full conversation to learn more about WebAssembly and how these two companies are tackling its challenges for developers.
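To make "compilation target" concrete: with Go 1.21 or newer, the same source file can be compiled either natively or to a WASI WebAssembly module, and the module runs under any WASI-capable runtime. A minimal sketch, with the assumed build and run commands as comments (wasmtime is one example runtime):

```go
package main

import "fmt"

// Build natively:        go build -o hello .
// Build for WebAssembly: GOOS=wasip1 GOARCH=wasm go build -o hello.wasm .
// Run the module:        wasmtime hello.wasm   (or another WASI runtime)
//
// The same .wasm module built on a Windows or macOS desktop runs
// unchanged on an ARM Linux server, which is the portability point
// Hayes makes above.
func main() {
	fmt.Println("hello from a portable Wasm module")
}
```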

Nov 9, 2022 • 14min
Zero Trust Security and the HashiCorp Cloud Platform
Organizations are now, almost by default, becoming multicloud operations. No single cloud service offers the full breadth of what an enterprise may need, and enterprises often find themselves using more than one service, sometimes inadvertently. HashiCorp is one company preparing enterprises for the challenges of managing more than a single cloud through a coherent set of software tools. To learn more, we spoke with Megan Laflamme, HashiCorp director of product marketing, at the HashiConf user conference for this latest episode of The New Stack Makers podcast. We talked about zero trust computing, the importance of identity and the general availability of HashiCorp Boundary, the company's secure remote access tool. "In the cloud operating model, the [security] perimeter is no longer static, and you move to a much more dynamic infrastructure environment," she explained.

What is the HashiCorp Cloud Platform?

The HashiCorp Cloud Platform (HCP) is a fully managed platform offering HashiCorp software, including Consul, Vault and other services, all connected through HashiCorp Virtual Networks (HVN). Through a web portal or Terraform, HCP can manage log-ins, access control and billing across multiple cloud assets. The HashiCorp Cloud Platform now offers single sign-on, reducing much of the headache of signing in to multiple applications and services.

What is HashiCorp Boundary?

Boundary is the client that enables this "secure remote access," and it is now generally available to users of the platform. It is a remote access client that manages fine-grained authorizations through trusted identities, providing session connection, establishment, and credential issuance and revocation. "With Boundary, we enable a much more streamlined workflow for permitting access to critical infrastructure, where we have integrations with cloud providers or service registries," Laflamme said. HCP Boundary is a fully managed version of HashiCorp Boundary that runs on the HashiCorp Cloud. With Boundary, the user signs on once, and everything else is handled beneath the floorboards, so to speak. Identities for applications, networks and people are handled through HashiCorp Vault and HashiCorp Consul, and every action is authorized and documented. Boundary authenticates and authorizes users by drawing on existing identity providers (IdPs) such as Okta, Azure Active Directory and GitHub. Consul authenticates and authorizes access between applications and services. This way, networks aren't exposed, and there is no need to issue and distribute credentials. Dynamic credential injection for user sessions is done with HashiCorp Vault, which injects single-use credentials for passwordless authentication to the remote host.

What is Zero Trust Security?

With zero trust security, users are authenticated at the service level rather than through a centralized firewall, which becomes increasingly infeasible in multicloud designs. In the industry, there is a shift "from high-trust, IP-based authorization in the more static data centers and infrastructure, to the cloud, to a low-trust model where everything is predicated on identity," Laflamme explained. This approach does require users to sign on to each individual service, in some form, which can be a headache for those (i.e., developers and system engineers) who sign on to a lot of apps in their daily routine.
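As a small illustration of identity-based, dynamic credentials, here is a sketch using HashiCorp Vault's Go client to request short-lived database credentials, the kind of single-use secret described above. The mount path and role name are hypothetical, and it assumes VAULT_ADDR and VAULT_TOKEN are set in the environment.

```go
package main

import (
	"fmt"
	"log"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	// DefaultConfig reads VAULT_ADDR; the client token comes from VAULT_TOKEN.
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Read dynamic credentials from a (hypothetical) database secrets
	// engine role. Each read mints a fresh, short-lived credential pair.
	secret, err := client.Logical().Read("database/creds/app-role")
	if err != nil {
		log.Fatal(err)
	}
	if secret == nil {
		log.Fatal("no secret at path (is the mount configured?)")
	}
	fmt.Println("lease:", secret.LeaseID, "ttl seconds:", secret.LeaseDuration)
}
```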


