

The New Stack Podcast
The New Stack
The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software.
For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack
Episodes

Feb 16, 2023 • 19min
2023 Hotness: Cloud IDEs, Web Assembly, and SBOMs
Here's a breakdown of what we cover:
Cloud IDEs will mature as GitHub's Codespaces platform gains acceptance through its integration into the GitHub service. Other factors include new startups in the space, such as GitPod, which offers a secure, cloud-based IDE, and Uptycs, which uses telemetry data to lock down developer environments. "So I think you're just gonna see more people exposed to it, and they're gonna be like, 'holy crap, this makes my life a lot easier.'"
FinOps reflects more stringent views on managing costs, focusing on the efficiency of the resources a company provides for developers. That focus also carries over to the GreenOps movement, with its emphasis on efficiency.
Software bills of materials (SBOMs) will continue to mature, with Sigstore as the project with the fastest expected adoption. Witness, from the Telemetry Project, is another project to watch. The SPDX community has been at the center of the movement for over a decade, since before most people cared about it.
GitOps and OpenTelemetry: This year, KubeCon submissions on GitOps topics were way up. OpenTelemetry is the second most popular project in the CNCF, behind Kubernetes.
Platform engineering is hot. Aniszczyk cites Backstage, a CNCF project, as one he is watching. It has a healthy plugin extension ecosystem and a correspondingly large community. People make fun of Jenkins, but Jenkins is likely going to be around as long as Linux because of its plugin community. Backstage is going down that same route.
WebAssembly: "You will probably see an uptick in edge cases, like smaller deployments as opposed to full-blown cloud-based workloads." WebAssembly will mix with containers and VMs: "It's just the way that software works."
Kubernetes is part of today's distributed fabric. Linux is now everywhere, and Kubernetes is going through the same evolution, showing up in airplanes, cars, and fast-food restaurants. "People are going to focus on the layers up top, not necessarily, like, the core Kubernetes project itself. It's going to be all the cool stuff built on top."
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Feb 9, 2023 • 23min
Generative AI: Don't Fire Your Copywriters Just Yet
Everyone in the community was surprised by ChatGPT last year, a web service that responded to any and all user questions with surprising fluidity. ChatGPT is a variant of the powerful GPT-3 large language model created by OpenAI, a company backed by Microsoft. It is still a demo, though it is pretty clear that this type of generative AI will be rapidly commercialized. Indeed, Microsoft is embedding the generative AI in its Bing search service, and Google is building a rival offering. So what are smaller businesses to do to ensure their messages are heard by these machine learning giants? For this latest podcast from The New Stack, we discussed these issues with Ryan Johnston, chief marketing officer for Writer. Writer has enjoyed early success in generative AI technologies. The company's service is dedicated to a single mission: making sure its customers' content adheres to the guidelines they have set in place. This can include ensuring the language in the copy matches the company's own designated terminology, making sure that a piece of content covers all the required topic points, or even checking that a press release's quotes are not out of scope with the project's mission. In short, the service promises "consistently on-brand content at scale," Johnston said. "It's not taking away my creativity. But it is doing a great job of figuring out how to create content for me at a faster pace, [content] that actually sounds like what I want it to sound like." For our conversation, we first delved into how the company was started, its value proposition ("what is it used for?") and the role AI plays in the company's offering. We also delve a bit into the technology stack Writer deploys to offer these services, as well as what material Writer may require from its customers to make the service work. For the second part of our conversation, we turn our attention to how other companies (that are not search giants) can get their message across in the land of large language models, and maybe even find a few new sources of AI-generated value along the way. And for those public-facing businesses dealing with Google and Bing, we chat about how they should refine their search engine optimization (SEO) strategies to be best represented in these large models. One point to consider: While AI can generate a lot of pretty convincing text, you still need a human in the loop to oversee the results, Johnston advised. "We are augmenting content teams, copywriters, to do what they do best, just even better. So we're scaling the mundane parts of the process that you may not love. We are helping you get a first draft on paper when you've got writer's block," Johnston said. "But at the end of the day, our belief is there needs to be a great writer in the driver's seat. [You] should never just be fully reliant on AI to produce things that you're going to immediately take to market." Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Feb 2, 2023 • 27min
Feature Flags are not Just for Devs
The story goes something like this: There's a marketing manager who is trying to time a launch. She asks the developer team when the service will be ready. The dev team says maybe a few months; let's say three months from now, in April. The marketing manager begins prepping for the release. Then the dev team releases the service the following week. It's not an uncommon occurrence. Edith Harbaugh is the co-founder and CEO of LaunchDarkly, a company she launched in 2014 with John Kodumal to solve these problems with software releases, problems that affect organizations worldwide. Today, LaunchDarkly has 4,000 customers and $100 million in annual recurring revenue. We interviewed Harbaugh for our Tech Founder Odyssey series on The New Stack Makers about her journey and LaunchDarkly's work. The interview starts with this question about the timing of dev releases and the relationship between developers and other constituencies, particularly the marketing organization. LaunchDarkly is the number one feature management company, Harbaugh said; its mission is to provide services for launching software in a measured, controlled fashion. Harbaugh and Kodumal, the company's CTO, founded it on the premise that developing and releasing software is arduous. "You wonder whether you're building the right thing," said Harbaugh, who has worked as both an engineer and a product manager. "Once you get it out to the market, it often is not quite right. And then you just run this huge risk of how do you fix things on the fly." Feature flagging was a technique that a lot of software companies already used. Harbaugh worked at TripIt, a travel service, where they used feature flags, as did companies such as Atlassian, where Kodumal had developed software. "So the kernel of LaunchDarkly, when we started in 2014, was to make this technique of feature flagging into a movement called feature management, to allow everybody to build better software faster, in a safer way." LaunchDarkly lets companies release features at whatever granularity an organization wants, allowing a developer to push a release into production in different pieces at different times, Harbaugh said. A marketing organization can then turn a feature on even after the developer team has released it into production. "So, for example, if we were running a release, and we wanted somebody from The New Stack to see it first, the marketing person could turn it on just for you." Harbaugh describes herself as a huge geek, but she also has a rare knack for connecting with geeks and non-geeks alike. She and Kodumal took a concept used effectively by developers and transformed it into a service that provides feature management for a broader customer base, like a marketer who pre-programs feature flags from the San Francisco office the day before so that an East Coast launch can roll out on its own schedule. The idea is novel, but like many intelligent, technical founders, Harbaugh's journey reflects her place in the industry today. She's a leader in the space, and a fun person to talk to, so we hope you enjoy this latest episode in our tech founder series from The New Stack Makers. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
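The mechanism Harbaugh describes, shipping code to production behind a conditional and then deciding per user or per percentage who actually sees it, can be sketched generically. The snippet below is a minimal Python illustration of that pattern, not LaunchDarkly's SDK; the flag name, example user, and rollout numbers are hypothetical.

```python
import hashlib
from dataclasses import dataclass, field


@dataclass
class FeatureFlag:
    name: str
    enabled: bool = False                              # global on/off switch
    allowed_users: set = field(default_factory=set)    # explicit per-user targeting
    rollout_percent: int = 0                           # gradual percentage rollout

    def is_on(self, user_id: str) -> bool:
        """Decide whether this user sees the feature."""
        if not self.enabled:
            return False
        if user_id in self.allowed_users:              # "turn it on just for you"
            return True
        # Deterministic bucketing: the same user always lands in the same bucket.
        digest = hashlib.sha256(f"{self.name}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 100 < self.rollout_percent


# Hypothetical flag: on for one reviewer plus 10% of everyone else.
new_checkout = FeatureFlag(
    "new-checkout",
    enabled=True,
    allowed_users={"reviewer@example.com"},
    rollout_percent=10,
)

user = "reviewer@example.com"
if new_checkout.is_on(user):
    print("show the new checkout flow")   # code is already deployed; release is controlled here
else:
    print("show the old checkout flow")
```

A real feature management service layers targeting rules, scheduling, and audit trails on top, but the core decision at each call site looks roughly like this.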

Jan 25, 2023 • 21min
Port: Platform Engineering Needs a Holistic Approach
By now, almost everyone agrees that platform engineering, in which an organization builds an internal developer platform to empower coders and speed application releases, is probably a good idea. So, for this latest edition of The New Stack podcast, we spoke with one of the pioneers in this space, Zohar Einy, CEO of Port, to see how platform engineering could work in your organization. TNS Editor Joab Jackson hosted this conversation. Port offers what it claims is the world's first low-code platform for developers. With Port, an organization can build a software catalogue of approved tools, import its own data model, and set up workflows. Developers can consume all the resources they need through a self-service catalogue, without needing to know how to set up a complex application such as Kubernetes. The DevOps and platform teams themselves maintain the platform. Application owners aren't the only potential users of a self-service catalogue, Einy points out in our conversation. DevOps and system administration teams can also use the platform. A DevOps team can set up automations "to make sure that [developers are] using the platform with the right mindset that fits with their organizational standards in terms of compliance, security, and performance aspects." Even machines themselves could benefit from a self-service platform, for those who are looking to automate deployments as much as possible. Einy offered an example: A CI/CD process could run a build on its own. If it needs to check the maturity level of some tool, it can do so through an API call. If a tool isn't adequately certified, the developer is notified; if all the tools are sufficiently mature, then the automated process can finish the build without further developer intervention. Another process that could be automated is the termination of permissions when their deadline has passed. Think of an early-warning system for expired digital certificates. "So it's a big driver both for cost reduction and security best practices," Einy said.
Too Many Choices, Not Enough Code
But what about developer choice? Won't developers feel frustrated when barred from using the tools they are most fond of? That freedom to use any tool available, Einy responded, is what led to the current state of overcomplexity in full-stack development. It's why the role of "full-stack developer" seems impossible, given all the possible permutations at each layer of the stack. Like the artist who finds inspiration in a limited palette, the developer should be able to find everything they need in a well-curated platform. "In the past, when we talked about 'you-build-it-you-own-it', we thought that the developer needs to know everything about anything, and they have the full ownership to choose anything that they want. And they got sick of it, right, because they needed to know too much," Einy said. "So I think we are getting into a transition where developers are OK with getting what they need with a click of a button because they have so much work on their own." In this conversation, we also discussed measuring success, the role of access control in DevOps, and the open source Backstage platform and its recent inclusion of paid plugins. Give it a listen! Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
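To make the automated maturity check Einy describes more concrete, here is a hedged sketch of a CI step that asks an internal developer portal whether every tool in a build is certified before continuing. The endpoint, JSON field, threshold, and tool names are invented for illustration; this is not Port's actual API.

```python
import json
import sys
import urllib.request

# Hypothetical internal developer portal endpoint; not Port's real API.
PORTAL_URL = "https://developer-portal.example.com/api/tools/{name}/maturity"
REQUIRED_LEVEL = 3  # assumed organization-specific certification threshold


def maturity_level(tool: str) -> int:
    """Ask the portal how mature/certified a tool is (assumed JSON response)."""
    with urllib.request.urlopen(PORTAL_URL.format(name=tool)) as resp:
        return json.load(resp)["maturity_level"]


def gate_build(tools: list[str]) -> None:
    immature = [t for t in tools if maturity_level(t) < REQUIRED_LEVEL]
    if immature:
        # Not certified: notify the developer and stop the automated build.
        print(f"Build blocked; tools below maturity level {REQUIRED_LEVEL}: {immature}")
        sys.exit(1)
    # Everything is sufficiently mature: finish without human intervention.
    print("All tools certified; continuing the build.")


if __name__ == "__main__":
    gate_build(["postgres-operator", "internal-logging-sdk"])
```

The design point is that the pipeline, not a person, consumes the self-service catalogue: the same API a developer would use from the portal is called by the automation.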

Jan 18, 2023 • 25min
Platform Engineering Benefits Developers, and Companies Too
In this latest episode of The New Stack Makers podcast, we delve more deeply into the emerging practice of platform engineering. The guests for this show are Aeris Stewart, community manager at platform orchestration provider Humanitec, and Michael Galloway, an engineering leader for infrastructure software provider HashiCorp. TNS Features Editor Heather Joslyn hosted this conversation. Although the term has been around for several years, platform engineering caught the industry's attention in a big way last September, when Humanitec published a report that identified how widespread the practice was quickly becoming, citing its use by Nike, Starbucks, GitHub and others. Right after the report was released, Stewart provided an analysis for TNS arguing that platform engineering solved many of the issues that another practice, DevOps, was struggling with. "Developers don’t want to do operations anymore, and that’s a bad sign for DevOps," Stewart wrote. The post stirred a great deal of conversation about the success of DevOps. Platform engineering is "a discipline of designing and building tool chains and workflows that enable developer self service," Stewart explained. The purpose is to give the developers in your organization a set of standard tools that will allow them to do their job — write and fix apps — as quickly as possible. The platform provides the tools and services "that free up engineering time by reducing manual toil and cognitive load," Galloway added. But platform engineering also has an advantage for the business itself, Galloway elaborated. With an internal developer platform in place, a business can scale up with "reliability, cost efficiency and security," he said. Before HashiCorp, Galloway was an engineer at Netflix, and there he saw the benefits of platform engineering for both developers and the business itself. "All teams were enabled to own the entire lifecycle from design to operation. This is really central to how Netflix was able to scale," Galloway said. A platform engineering team created a set of services that made it possible for Netflix engineers to deliver code "without needing to be continuous delivery experts." The conversation also touched on the challenges of implementing platform engineering, and what metrics you should use to quantify its success. And because platform engineering is a new discipline, we also discussed education and community. Humanitec's debut PlatformCon drew over 6,000 attendees last June (and PlatformCon 2023 has just been scheduled for June). There is also a platform engineering Slack channel, which has drawn over 8,000 participants thus far. "I think the community is playing a really big role right now, especially as a lot of organizations' awareness of platform engineering is just starting," Stewart said. "There's a lot of knowledge that can be gained by building a platform that you don't necessarily want to learn the hard way." Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Jan 11, 2023 • 23min
What’s Platform Engineering? And How Does It Support DevOps?
Platform engineering “is the art of designing and binding all of the different tech and tools that you have inside of an organization into a golden path that enables self service for developers and reduces cognitive load,” said Kaspar Von Grünberg, founder and CEO of Humanitec, in this episode of The New Stack Makers podcast. This structure is important for individual contributors, Grünberg said, as well as backend engineers: “If you look at the operation teams, it reduces their burden to do repetitive things. And so platform engineers build and design internal developer platforms, and help and serve users.” This conversation, hosted by Heather Joslyn, TNS features editor, dove into platform engineering: what it is, how it works, the problems it is intended to solve, and how to get started in building a platform engineering operation in your organization. It also debunks some key fallacies around the concept. This episode was sponsored by Humanitec.
The Limits of ‘You Build It, You Run It’
The notion of “you build it, you run it” — first coined by Werner Vogels, chief technology officer of Amazon, in a 2006 interview — established that developers should “own” their applications throughout their entire lifecycle. But, Grünberg said, that may not be realistic in an age of rapidly proliferating microservices and multiple, distributed deployment environments. “The scale that we're operating today is just totally different,” he said. “The applications are much more complex.” End-to-end ownership, he added, is “a noble dream, but unfair towards the individual contributor. We're asking developers to do so much at once. And then we're always complaining that the output isn't there or not delivering fast enough. But we're not making it easy for them to deliver.” Creating a “golden path” — through the creation by platform teams of internal developer platforms (IDPs) — can not only free developers from unnecessary cognitive load, Grünberg said, but also help make their code more secure and standardized. For Ops engineers, he said, the adoption of platform engineering can also help free them from doing the same tasks over and over. “If you want to know whether it's a good idea to look at platform engineering, I recommend go to your service desk and look at the tickets that you're receiving,” Grünberg said. “And if you have things like, ‘Hey, can you debug that deployment?’ and ‘Can you spin up in a moment all these repetitive requests?’ that's probably a good time to take a step back and ask yourself, ‘Should the operations people actually spend time doing these manual things?’”
The Biggest Fallacies about Platform Engineering
For organizations that are interested in adopting platform engineering, the Humanitec CEO attacked some of the biggest misconceptions about the practice. Chief among them: failing to treat the platform as a product, in the same way a company would begin creating any product, by starting with research into customer needs. “If you think about how we would develop a software feature, we wouldn't be sitting in a room and taking some assumptions and then building something,” he said. “We would go out to the user, and then actually interview them and say, ‘Hey, what's your problem? What's the most pressing problem?’” Other fallacies embraced by platform engineering newbies, he said, are “visualization” — the belief that all devs need is another snazzy new dashboard or portal to look at — and believing the platform team has to go all-in right from the start, scaling up a big effort immediately. Such an effort, he said, is “doomed to fail.” Instead, Grünberg said, “I'm always advocating for starting really small, come up with what's the lowest common tech denominator. Is that containerization with EKS? Perfect, then focus on that." And don’t forget to give special attention to those early adopters, so they can become evangelists for the product. “Make them fans, prioritize the right way, and then show that to other teams as a, ‘Hey, you want to join in? OK, what's the next cool thing we could build?’” Check out the entire episode for much more detail about platform engineering and how to get started with it. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Jan 4, 2023 • 29min
What LaunchDarkly Learned from 'Eating Its Own Dog Food'
Feature flags — the on/off toggles, written in conditional statements, that allow organizations greater control over the user experience once code has been deployed — are proliferating and growing more complex, and demand robust feature management, said Karishma Irani, head of product at LaunchDarkly, in this episode of The New Stack Makers. In a November survey by LaunchDarkly, which queried more than 1,000 DevOps professionals, 69% of participants said that feature flags are “must-have, mission-critical and/or high priority” for their organizations. “Feature management, we believe, is a modern practice that's becoming more and more common with companies that want to deploy more frequently, innovate faster, and just keep a healthy engineering team,” Irani said. The idea of feature management, Irani said, is to “maximize value while minimizing risk.” LaunchDarkly uses its own software, she said, and eating its own dog food, as the saying goes, has paid off in gaining insights into user needs. As part of LaunchDarkly’s virtual conference Trajectory in November, Irani joined Heather Joslyn, features editor of The New Stack, for a wide-ranging conversation about the latest developments in feature management. This episode of Makers was sponsored by LaunchDarkly.
Automating Approvals
As an example of the benefits of having first-hand knowledge of how the company's products are used, Irani pointed to an internal project in mid-2022. When the company migrated from MongoDB to CockroachDB, it used new capabilities in its Feature Workflows product, which allow users to define a workflow that can schedule the gradual release of a feature flag for a future date and time, and automate approval requests. “All of these async processes around approvals schedules, they're critical to releasing software, but they do slow you down and add more potential for manual error or human error,” Irani said. “And so our goal with Feature Workflows was to essentially automate the entire process of a feature release.”
Overhauling Experimentation
This past June, the company also revised its Experimentation offering, she said. Led by James Frost, LaunchDarkly’s head of experimentation, the team did “a complete overhaul of our stats engine, they enhanced the integration path of our customers’ existing data sets and metrics,” Irani said. “They redesigned our UX and the codified model and experimentation best practices into the product itself.” For instance, a new metric import API helps prevent the problem of multiple teams or users within a company using different tools for A/B and other experiments. It “significantly cuts down on manual duplicate work when importing metrics for experimentation,” said Irani. “So you can get set up faster.” Another addition to the Experimentation product is a sample ratio mismatch test, she said, so “you can be confident that all of your experiments are correctly allocating traffic to each variant.” These innovations, along with new capabilities in the company’s Core Flagging Platform, are in general availability. On the horizon, and now available through LaunchDarkly’s early access program, is Accelerate, which lets users track and visualize key engineering metrics, such as deployment frequency, release frequency, lead time for code changes, and flag coverage. “I'm sure you've caught on already,” Irani said, “but a few of these are DORA metrics, which obviously are extremely critical to our users.” Check out the entire episode for more details on what’s new from LaunchDarkly and the problems that innovators in the feature management space still need to solve. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
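For readers unfamiliar with the term, a sample ratio mismatch check compares the number of users actually assigned to each variant against the split the experiment was configured with. The sketch below shows the general statistical technique using a chi-squared test; it is not LaunchDarkly's implementation, and the traffic numbers are made up.

```python
from scipy.stats import chisquare

observed = [50_410, 48_112]      # users actually bucketed into control / treatment
intended_split = [0.5, 0.5]      # the experiment was configured as a 50/50 split

total = sum(observed)
expected = [total * p for p in intended_split]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)

# A very small p-value suggests the allocation drifted from the configured
# split, so the experiment's results should not be trusted until the
# assignment problem is found and fixed.
if p_value < 0.001:
    print(f"Possible sample ratio mismatch (p={p_value:.2e}); check assignment logic.")
else:
    print(f"Allocation looks consistent with the configured split (p={p_value:.3f}).")
```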

Dec 28, 2022 • 15min
Hazelcast and the Benefits of Real Time Data
In this latest podcast from The New Stack, we interview Manish Devgan, chief product officer for Hazelcast, which offers a real-time stream processing engine. This interview was recorded at KubeCon+CloudNativeCon, held last October in Detroit. "'Real time' means different things to different people, but it's really a business term," Devgan explained. In the business world, time is money, and the more quickly you can make a decision using the right data, the more quickly you can take action. Although we have many batch-processing systems, the data itself rarely comes in batches, Devgan said. "A lot of times I hear from customers that are using a batch system, because those are the things which are available at that time. But data is created in real time: sensors, your machines, or even customer data, right when customers are transacting with you."
What is a Real Time Data Processing Engine?
A real-time data processing engine can analyze data as it comes in from the source. This is different from traditional approaches that store the data first and analyze it later. Bank loans are one example of this approach. With a real-time data processing engine in place, a bank can offer a loan to a customer using an automated teller machine (ATM) in real time, Devgan suggested. "As the data comes in, you can actually take action based on the context of the data," he argued. Such a loan app may combine real-time data from the customer with historical data stored in a traditional database. Hazelcast can combine historical data with real-time data to make workloads like this possible. In this interview, we also debated the merits of Kafka, the benefits of using a managed service rather than running an application in-house, Hazelcast's users, and features in the latest release of the Hazelcast platform. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
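To make the contrast with batch processing concrete, here is a toy Python sketch of the pattern Devgan describes: each event is acted on as it arrives and enriched with stored historical context, rather than parked for a later batch job. The loan rule, customer records, and event stream are invented for illustration and are not Hazelcast's API.

```python
from typing import Iterator

# Historical data that would normally live in a database or in-memory store.
CUSTOMER_HISTORY = {
    "cust-42": {"avg_balance": 5_200, "missed_payments": 0},
    "cust-77": {"avg_balance": 310, "missed_payments": 3},
}


def atm_withdrawals() -> Iterator[dict]:
    """Stand-in for an unbounded stream of ATM events."""
    yield {"customer": "cust-42", "amount": 400}
    yield {"customer": "cust-77", "amount": 60}


def process(event: dict) -> None:
    history = CUSTOMER_HISTORY.get(event["customer"], {})
    # The decision is made per event, in context, rather than in a nightly batch job.
    if history.get("avg_balance", 0) > 1_000 and history.get("missed_payments", 1) == 0:
        print(f"{event['customer']}: offer a pre-approved loan at the ATM")
    else:
        print(f"{event['customer']}: no offer")


for event in atm_withdrawals():
    process(event)
```

A production stream processor adds windowing, fault tolerance, and horizontal scale, but the core idea of joining live events with historical context is the same.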

Dec 22, 2022 • 27min
Hachyderm.io, from Side Project to 38,000+ Users and Counting
Back in April, Kris Nóva, now principal engineer at GitHub, started creating a server on Mastodon as a side project in her basement lab. Then in late October, Elon Musk bought Twitter for an eye-watering $44 billion, and began cutting thousands of jobs at the social media giant and making changes that alienated longtime users. And over the next few weeks, usage of Nóva’s hobby site, Hachyderm.io, exploded. “The server started very small,” she said on this episode of The New Stack Makers podcast. “And I think like, one of my friends turned into two of my friends turned into 10 of my friends turned into 20 colleagues, and it just so happens, a lot of them were big names in the tech industry. And now all of a sudden, I have 30,000 people I have to babysit.” Though the rate at which new users are joining Hachyderm has slowed down in recent days, Nóva said, it stood at more than 38,000 users as of Dec. 20. Hachyderm.io is still run by a handful of volunteers, who also handle content moderation. Nóva is now seeking nonprofit status for it with the U.S. Internal Revenue Service, with intentions of building a new organization around Hachyderm. This episode of Makers, hosted by Heather Joslyn, TNS features editor, recounts Hachyderm’s origins and the challenges involved in scaling it as Twitter users from the tech community gravitated to it. Nóva and Joslyn were joined by Gabe Monroy, chief product officer at DigitalOcean, which has helped Hachyderm cope with the technical demands of its growth spurt.
HugOps and Solving Storage Issues
Suddenly having a social media network to “babysit” brings numerous challenges, including the technical issues involved in a rapid scale up. Monroy and Nóva worked on Kubernetes projects when both were employed at Microsoft, “so we’re all about that horizontal distribution life.” But the Mastodon application’s structure proved confounding. “Here I am operating a Ruby on Rails monolith that's designed to be vertically scaled on a single piece of hardware,” Nóva said. “And we're trying to break that apart and run that horizontally across the rack behind me. So we got into a lot of trouble very early on by just taking the service itself and starting to decompose it into microservices.” Storage also rapidly became an issue. “We had some non-enterprise but consumer-grade SSDs. And we were doing on the order of millions of reads and writes per day, just keeping the Postgres database online. And that was causing cascading failures and cascading outages across our distributed footprint, just because our Postgres service couldn't keep up.” DigitalOcean helped with the storage issues; the site now uses a data center in Germany, whose servers DigitalOcean manages. (Previously, its servers had been living in Nóva’s basement lab.) Monroy, longtime friends with Nóva, was an early Hachyderm user and reached out when he noticed problems on the site, such as when he had difficulty posting videos and noticed other people complaining about similar problems. “This is a ‘success failure’ in the making here, the scale of this is sort of overwhelming,” Monroy said. “So I just texted Nóva, ‘Hey, what's going on? Anything I could do to help?’ “In the community, we like to talk about the concept of HugOps, right? When people are having issues on this stuff, you reach out, try and help. You give a hug. And so, that was all I did. Nóva is very crisp and clear: This is what I got going on. These are the issues.
These are the areas where you could help.”
Sustaining ‘the NPR of Social Media’
One challenge in particular has nudged Nóva to seek nonprofit status: operating costs. “Right now, I'm able to just kind of like eat the cost myself,” she said. “I operate a Twitch stream, and we're taking the proceeds of that and putting it towards operating service.” But that, she acknowledges, won’t be sustainable as Hachyderm grows. “The whole goal of it, as far as I'm concerned, is to keep it as sustainable as possible,” Nóva said. “So that we're not having to offset the operating costs with ads or marketing or product marketing. We can just try to keep it as neutral and, frankly, boring as possible — the NPR of social media, if you could imagine such a thing.” Check out the full episode for more details on how Hachyderm is scaling and plans for its future, and Nóva and Monroy’s thoughts about the status of Twitter. Feedback? Find me at @hajoslyn on Hachyderm.io. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Dec 20, 2022 • 23min
Automation for Cloud Optimization
During the pandemic, many organizations sped up their move to the cloud — without fully understanding the costs, both human and financial, they would pay for the convenience and scalability of a digital transformation. “They really didn’t have a baseline,” said Mekka Williams, principal engineer at Spot by NetApp, in this episode of The New Stack Makers podcast. “And so those first cloud bills, I'm sure, were shocking, because you don't get a cloud bill when you run on your on-premises environment, or even your private cloud, where you've already paid the cost for the infrastructure that you're using.” What’s especially worrisome is that many of those costs are simply wasted, Williams said. “Most of the containerized applications running in Kubernetes clusters are running underutilized,” she said. “And anything that's underutilized in the cloud equates to waste. And if we want to be really lean and clean and use resources in a very efficient manner, we have to have a really good cloud strategy in order to do that.” This episode of The New Stack Makers, hosted by Heather Joslyn, TNS features editor, focused on CloudOps, which in this case stands for “cloud operations.” (It can also stand for “cloud optimization,” but more about that later.) The conversation was sponsored by Spot by NetApp.
Automation for Cloud Optimization
Many organizations that moved quickly to the cloud during the dog days of the pandemic have begun to revisit the decisions they made and update their strategies, Williams said. “We see some organizations that are trying to modernize their applications further, to make better use of the services that are available in the cloud,” she said. “The cloud is getting more complex as they grow and mature in their journey. And so they're looking for ways to simplify their operations and, as always, keep their costs down. Keep things simple for their DevOps and SRE teams, to not incur additional technical debt, but still make the best use out of their cloud, wherever they are.” Automation holds the key to CloudOps — both definitions — according to Williams. For starters, it makes teams more efficient. “The less tasks that your workforce have to perform manually, the more time they have to spend focused on business logic and being innovative,” Williams said. “Automation also helps you with repeatability. And it's less error-prone, and it helps you standardize. Really good automation simplifies your environment greatly.” Automating repetitive tasks can also help prevent your site reliability engineers (SREs) from burnout, she said. Practicing “good data hygiene,” Williams said, also helps contain costs and reduce toil: “Making sure you're using the right tier of data, making sure you're not over-provisioned. And the type of storage you need: you don't need to pay top dollar for high-performing storage if it's just backup data that doesn't get accessed that often.” Such practices are “good to know on-premises, but these are imperative to know when you're in the cloud,” she said, in order to reduce waste. During this episode, Williams pointed to solutions in the Spot by NetApp portfolio that use automation to help make the most of cloud infrastructure, such as its flagship product, Elastigroup, which takes advantage of excess capacity to scale workloads. In June, Spot by NetApp acquired Instaclustr, a solution for managing open source database and streaming technologies. The company recognizes the growing importance of open source for enterprises. “We're paying attention to trends for cloud applications,” Williams said, “and we're growing the portfolio to address the needs that are top of mind for those customers.” Check out the entire episode to learn more about CloudOps. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
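To put rough numbers on the point that underutilization equals waste, here is a back-of-the-envelope sketch comparing what a few workloads request (and are billed for) with what they actually use. The workloads, usage figures, and per-core price are invented purely for illustration.

```python
workloads = [
    # (name, CPU cores requested, CPU cores actually used on average)
    ("checkout-api", 4.0, 0.9),
    ("batch-reports", 8.0, 2.1),
    ("internal-wiki", 2.0, 0.2),
]

COST_PER_CORE_MONTH = 30.0  # assumed blended price per core-month, purely illustrative

total_requested = sum(req for _, req, _ in workloads)
total_used = sum(used for _, _, used in workloads)
utilization = total_used / total_requested
idle_cores = total_requested - total_used

print(f"Cluster utilization: {utilization:.0%}")
print(f"Idle capacity: {idle_cores:.1f} cores "
      f"(~${idle_cores * COST_PER_CORE_MONTH:,.0f}/month at the assumed rate)")
```

Closing that gap, by rightsizing requests or by shifting workloads onto spare capacity the way tools such as Elastigroup aim to do, is the cost-reduction side of CloudOps.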


