

The New Stack Podcast
The New Stack
The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software.
For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack
Episodes

Nov 15, 2022 • 18min
OpenTelemetry Properly Explained and Demoed
The OpenTelemetry project offers vendor-neutral integration points that help organizations obtain the raw materials — the "telemetry" — that fuel modern observability tools, with minimal effort at integration time. But what does OpenTelemetry mean for those who use their favorite observability tools but don’t exactly understand how it can help them? How might OpenTelemetry be relevant to folks who are new to Kubernetes (the majority of KubeCon attendees in recent years) and those who are just getting started with observability?

Austin Parker, head of developer relations at Lightstep, and Morgan McLean, director of product management at Splunk, discuss during this podcast at KubeCon + CloudNativeCon 2022 how the OpenTelemetry project has created demo services to help cloud native community members better understand cloud native development practices and test out OpenTelemetry, as well as Kubernetes, observability software and more.

At this juncture in DevOps history, there has been considerable hype around observability for developers and operations teams, and more recently, much attention has been given to combining the different observability solutions in use through a single interface. To that end, OpenTelemetry has emerged as a key standard.

DevOps teams need OpenTelemetry because they typically work with many different data sources for observability processes, Parker said. “If you want observability, you need to transform and send that data out to any number of open source or commercial solutions, and you need a lingua franca to be consistent. Every time I have a host, or an IP address, or any kind of metadata, consistency is key and that's what OpenTelemetry provides.”

Additionally, as a developer or an operator, OpenTelemetry serves to instrument your system for observability, McLean said. “OpenTelemetry does that through the power of the community working together to define those standards and to provide the components needed to extract that data among hundreds of thousands of different combinations of software and hardware and infrastructure that people are using,” McLean said.

Observability and OpenTelemetry, while conceptually straightforward, do require a learning curve. To that end, the OpenTelemetry project has released a demo intended both to teach cloud native development practices and to provide a testbed for OpenTelemetry, as well as Kubernetes, observability software and more, the project’s creators say. The OpenTelemetry Demo v1.0 general release is available on GitHub and on the OpenTelemetry site.

The demo helps with learning how to add instrumentation to an application to gather metrics, logs and traces for observability. There is extensive instruction for open source projects like Prometheus for Kubernetes monitoring and Jaeger for distributed tracing, and the demo also shows how to get acquainted with tools such as Grafana to create dashboards. It extends to scenarios in which failures are created and OpenTelemetry data is used for troubleshooting and remediation. Designed for beginner- or intermediate-level users, the demo can be set up to run on Docker or Kubernetes in about five minutes.

“The demo is a great way for people to get started,” Parker said. “We've also seen a lot of great uptake from our commercial partners as well, who have said ‘we'll use this to demo our platform.’”
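The episode keeps things conceptual, but a minimal sketch (in Go, one of the languages OpenTelemetry supports) shows what "adding instrumentation" looks like in practice. The service and span names here are hypothetical, and the stdout exporter stands in for whichever collector or vendor backend a team actually sends telemetry to:

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	// Export spans to stdout; in production this would be an OTLP
	// exporter pointed at a collector or observability backend.
	exporter, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
	if err != nil {
		log.Fatal(err)
	}
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
	defer tp.Shutdown(context.Background())
	otel.SetTracerProvider(tp)

	// Instrument one unit of work with a span (names are illustrative).
	tracer := otel.Tracer("checkout-service")
	ctx, span := tracer.Start(context.Background(), "process-order")
	_ = ctx // pass ctx down so child spans nest under this one
	span.End()
}
```

Swapping the exporter is the only change needed to send the same spans to any OpenTelemetry-compatible tool, which is the vendor neutrality Parker describes.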

Nov 10, 2022 • 16min
The Latest Milestones on WebAssembly's Road to Maturity
DETROIT — Even in the midst of hand-wringing at KubeCon + CloudNativeCon North America about how the global economy will make it tough for startups to gain support in the near future, the news about a couple of young WebAssembly-centric companies was bright. Cosmonic announced that it had raised $8.5 million in a seed round led by Vertex Ventures. And Fermyon Technologies unveiled both funding and product news: a $20 million Series A led by Insight Partners (which also owns The New Stack) and the launch of Fermyon Cloud, a hosted platform for running WebAssembly (Wasm) microservices. Both Cosmonic and Fermyon were founded in 2021.

“A lot of people think that Wasm is this maybe up-and-coming thing, or it's just a totally new thing that's out there in the future,” noted Bailey Hayes, a director at Cosmonic, in this episode of The New Stack Makers podcast. But the future is already here, she said: “It's one of technology's best kept secrets, because you're using it today, all over. And many of the applications that we use day-to-day — Zoom, Google Meet, Prime Video, I mean, it really is everywhere. The thing that's going to change for developers is that this will be their compilation target in their build file.”

In this On the Road episode of Makers, recorded at KubeCon here in the Motor City, Hayes and Kate Goldenring, a software engineer at Fermyon, spoke to Heather Joslyn, TNS’ features editor, about the state of WebAssembly. This episode was sponsored by the Cloud Native Computing Foundation (CNCF).

Wasm and Docker, Java, Python
WebAssembly, the roughly five-year-old binary instruction format for a stack-based virtual machine, is designed to execute binary code on the web, letting developers bring the performance of languages like C, C++ and Rust to web development. At Wasm Day, a co-located event that preceded KubeCon, support for a number of other languages — including Java, .NET, Python and PHP — was announced. At the same event, Docker also revealed that it has added Wasm as a runtime that developers can target; that feature is now in beta. Such steps move WebAssembly closer to fulfilling its promise to devs that they can “build once, run anywhere.”

“With Wasm, developers shouldn't need to know necessarily that it's their compilation target,” said Hayes. But, she added, “what you do know is that you're now able to move that Wasm module anywhere in any cloud. The same one that you built on your desktop that might be on Windows can go and run on an ARM Linux server.”

Goldenring pointed to the findings of the CNCF’s “mini survey” of WebAssembly users, released at Wasm Day, as evidence that the technology’s use cases are proliferating quickly. “Even though WebAssembly was made for the web, the number one response — it was around a little over 60% — said serverless,” she noted. “And then it said the edge, and then it said web development, and then it said IoT, and the use cases just keep going. And that's because it is this incredibly powerful, portable target that you can put in all these different use cases. It's secure, it has instant startup time.”

Worlds and Warg Craft
The podcast guests talked about recent efforts to make it easier to use Wasm, share code and reuse it, including the development of the component model, which proponents hope will simplify how WebAssembly works outside the browser. Goldenring and Hayes discussed efforts now under construction, including “worlds” files and Warg, a package registry for WebAssembly.
(Hayes co-presented at Wasm Day on the work being done on WebAssembly package management, including Warg.) A world file, Hayes said, is a way of defining your environment: “One way to think of it is like .profile, but for Wasm, for a component. And so it tells me what types of capabilities I need for my Wasm module to run successfully, and the runtime can read that and give me the right stuff.”

And as for Warg, Hayes said: “It's really a protocol and a set of APIs, so that we can slot it into existing ecosystems. A lot of people think of it as us trying to pave over existing technologies. And that's really not the case. The purpose of Warg is to be able to slot right in, so that you continue working in your current developer environment and experience and using the packages that you're used to. But get all of the advantages of the component model, which is this new specification we've been working on" at the W3C's WebAssembly Working Group.

Goldenring added another finding from the CNCF survey: “Around 30% of people wanted better code reuse. That's a sign of a more mature ecosystem. So having something like Warg is going to help everyone who's involved in the server side of the WebAssembly space.”

Listen to the full conversation to learn more about WebAssembly and how these two companies are tackling its challenges for developers.
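Hayes’ point that Wasm simply becomes “their compilation target in their build file” is easy to see with a toolchain that already supports it. A minimal sketch in Go, assuming Go 1.21 or later (which added the wasip1/wasm port); the file names and the wasmtime runtime are illustrative choices, not anything from the episode:

```go
// Build this ordinary Go program into a WASI module with:
//   GOOS=wasip1 GOARCH=wasm go build -o hello.wasm main.go
// The same hello.wasm built on a Windows desktop can then run on an
// ARM Linux server under any WASI runtime, e.g.: wasmtime hello.wasm
package main

import "fmt"

func main() {
	fmt.Println("hello from inside a Wasm sandbox")
}
```

Nothing in the source changes; only the target in the build command does, which is the portability story Hayes describes.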

Nov 9, 2022 • 14min
Zero Trust Security and the HashiCorp Cloud Platform
Organizations are now, almost by default, becoming multicloud operations. No cloud service offers the full breadth of what an enterprise may need, and enterprises find themselves using more than one service, often inadvertently. HashiCorp is one company preparing enterprises for the challenges of managing more than a single cloud, through the use of a coherent set of software tools. To learn more, we spoke with Megan Laflamme, HashiCorp director of product marketing, at the HashiConf user conference, for this latest episode of The New Stack Makers podcast. We talked about zero trust computing, the importance of identity and the general availability of the HashiCorp Boundary single sign-on tool. "In the cloud operating model, the [security] perimeter is no longer static, and you move to a much more dynamic infrastructure environment," she explained.

What is the HashiCorp Cloud Platform?
The HashiCorp Cloud Platform (HCP) is a fully managed platform offering HashiCorp software, including Consul, Vault and other services, all connected through HashiCorp Virtual Networks (HVN). Through a web portal or via Terraform, HCP can manage log-ins, access control and billing across multiple cloud assets. The HashiCorp Cloud Platform now offers single sign-on, reducing a lot of the headache of signing into multiple applications and services.

What is HashiCorp Boundary?
Boundary is the client that enables this “secure remote access” and is now generally available to users of the platform. It is a remote access client that manages fine-grained authorizations through trusted identities, handling session connection and establishment as well as credential issuance and revocation. "With Boundary, we enable a much more streamlined workflow for permitting access to critical infrastructure, where we have integrations with cloud providers or service registries," Laflamme said. HCP Boundary is a fully managed version of HashiCorp Boundary that runs on the HashiCorp Cloud. With Boundary, the user signs on once, and everything else is handled beneath the floorboards, so to speak. Identities for applications, networks and people are handled through HashiCorp Vault and HashiCorp Consul, and every action is authorized and documented. Boundary authenticates and authorizes users by drawing on existing identity providers (IdPs) such as Okta, Azure Active Directory and GitHub. Consul authenticates and authorizes access between applications and services. This way, networks aren’t exposed, and there is no need to issue and distribute credentials. Dynamic credential injection for user sessions is done with HashiCorp Vault, which injects single-use credentials for passwordless authentication to the remote host.

What is Zero Trust Security?
With zero trust security, users are authenticated at the service level, rather than through a centralized firewall, which becomes increasingly infeasible in multicloud designs. In the industry, there is a shift “from high trust IP based authorization in the more static data centers and infrastructure, to the cloud, to a low trust model where everything is predicated on identity,” Laflamme explained. This approach does require users to sign on to each individual service in some form, which can be a headache for those (i.e., developers and system engineers) who sign on to a lot of apps in their daily routine.
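Boundary itself brokers the session, but the credential side of this story rests on Vault handing out short-lived secrets. As a rough illustration of that underlying mechanism (not of Boundary’s internals), here is a sketch using Vault’s official Go client; the database/creds/app-readonly path is a hypothetical role on Vault’s database secrets engine:

```go
package main

import (
	"fmt"
	"log"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	// Connect to Vault; the client reads VAULT_ADDR and VAULT_TOKEN
	// from the environment by default.
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Ask the database secrets engine for a dynamic, short-lived
	// credential; nothing long-lived is ever written down.
	secret, err := client.Logical().Read("database/creds/app-readonly")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("user=%v ttl=%ds\n", secret.Data["username"], secret.LeaseDuration)
	// When the lease expires or is revoked, the credential stops working.
}
```

The single-use, time-boxed lease is what lets a session be authorized per identity rather than per network perimeter.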

Nov 8, 2022 • 21min
How Do We Protect the Software Supply Chain?
DETROIT — Modern software projects’ emphasis on agility and building community has caused a lot of security best practices, developed in the early days of the Linux kernel, to fall by the wayside, according to Aeva Black, an open source veteran of 25 years. “And now we're playing catch up,“ said Black, an open source hacker in Microsoft Azure’s Office of the CTO. “A lot of less than ideal practices have taken root in the past five years. We're trying to help educate everybody now.”

Chris Short, senior developer advocate with Amazon Web Services (AWS), challenged the notion of “shifting left” and giving developers greater responsibility for security. “If security is everybody's job, it's nobody's job,” said Short, founder of the DevOps'ish newsletter. “We've gone through this evolution: just develop secure code, and you'll be fine,” he said. “There's no such thing as secure code. There are errors in the underlying languages sometimes …. There's no such thing as secure software. So you have to mitigate and then be ready to defend against coming vulnerabilities.”

Black and Short talked about the state of the software supply chain’s security in an On the Road episode of The New Stack Makers podcast. Their conversation with Heather Joslyn, features editor of TNS, was recorded at KubeCon + CloudNativeCon North America here in the Motor City. This podcast episode was sponsored by AWS.

‘Trust, but Verify’
For our podcast guests, “trust, but verify” is a slogan more organizations need to live by. A lot of the security problems that plague the software supply chain, Black said, stem from companies — especially smaller organizations — “just pulling software directly from upstream. They trust a build someone's published, they don't verify, they don't check the hash, they don't check a signature, they just download a Docker image or binary from somewhere and run it in production.” That practice, Black said, “exposes them to anything that's changed upstream. If upstream has a bug or a network error in that repository, then they can't update as well.” Organizations, they said, should maintain an internal staging environment where they can verify code retrieved from upstream before pushing it to production — or rebuild it, in case a vulnerability is found, and push the fix back upstream. That build environment should also be firewalled, Short added: “Create those safeguards of, ‘Oh, you want to pull a package from not an approved source or not a trusted source? Sorry, not gonna happen.’”

Being able to rebuild code that has vulnerabilities to make it more secure — or even being able to identify what’s wrong, and quickly — are skills that not enough developers have, the podcast guests noted. More automation is part of the solution, Short said, but by itself it's not enough. “Continuous learning is what we do here as a job," he said. "If you're kind of like, this is my skill set, this is my toolbox and I'm not willing to grow past that, you’re setting yourself up for failure, right? So you have to be able to say, almost at a moment's notice, ‘I need to change something across my entire environment. How do I do that?’”

GitBOM and the ‘Signal-to-Noise Ratio’
As both Black and Short said during our conversation, there’s no such thing as perfectly secure code. And even such highly touted tools as software bills of materials, or SBOMs, fall short of giving teams all the information they need to determine code’s safety. “Many projects have dependencies 10, 20, 30 layers deep,” Black said. “And so if your SBOM only goes one or two layers, you just don't have enough information to know if there's a vulnerability five or 10 layers down.”

Short brought up another issue with SBOMs: “There's nothing you can act on. The biggest thing for Ops teams or security teams is actionable information.” While Short applauded recent efforts to improve user education, he said he’s pessimistic about the state of cybersecurity: “There’s not a lot right now that's getting people actionable data. It's a lot of noise still, and we need to refine these systems well enough to know that, like, just because I have Bash doesn't necessarily mean I have every vulnerability in Bash.”

One project aimed at addressing the situation is GitBOM, a new open source initiative. “Fundamentally, I think it’s the best bet we have to provide really high fidelity signal to defense teams,” said Black, who has worked on the project and produced a white paper on it this past January. GitBOM — the name will likely be changed, Black said — takes the underlying technology that Git relies on, a hash table used to track changes in a project's code over time, and reapplies it to track the supply chain of software. The technology is used to build a hash table connecting all of the dependencies in a project, creating what GitBOM’s creators call an artifact dependency graph. “We have a team working on it, a couple of proof of concepts right now,” Black said. “And the main effect I'm hoping to achieve from this is a small change in every language and compiler … then we can get traceability across the whole supply chain.”

In the meantime, Short said, there’s plenty of room for broader adoption of the best practices that currently exist. “Security vendors, I feel like, need to do a better job of moving teams in the right direction as far as action,” he said. At DevOps Chicago this fall, Short said, he ran an open space session in which he asked participants for their pain points related to working with containers. “And the whole room admitted to not using least privilege, not using policy engines that are available in the Kubernetes space,” he said. “So there's a lot of complexity that we’ve got to help people understand the need for it, and how to implement it.”

Listen to the whole podcast to learn more about the state of software supply chain security.
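Black’s “check the hash” advice is cheap to follow in practice. Here is a minimal sketch in Go of verifying a downloaded artifact against a published SHA-256 digest before using it; the file name and expected digest are placeholders:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"io"
	"log"
	"os"
)

func main() {
	// The digest a project publishes alongside its release artifact
	// (placeholder value; real ones come from a signed checksums file).
	const expected = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

	f, err := os.Open("artifact.tar.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Hash the file as it streams, then compare against the published digest.
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != expected {
		log.Fatalf("checksum mismatch: got %s", got) // do not run the artifact
	}
	log.Println("checksum verified")
}
```

A signature check (e.g., over the checksums file itself) adds the “verify who published it” half of the guests’ advice; the hash alone only proves the bytes match.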

Nov 4, 2022 • 16min
Ukraine Has a Bright Future
Ukraine has a bright future. It will soon be time to rebuild. But rebuilding requires more than the resources needed to construct a hydroelectric plant or a hospital. It involves software and an understanding of how to use it. Ihor Dvoretskyi, developer advocate at the Cloud Native Computing Foundation (CNCF), and Dima Zakhalyavko, board member at Razom for Ukraine, came to KubeCon in Detroit to discuss the push to provide training materials for Ukraine as it rebuilds from the destruction caused by Russia's invasion.

Razom, a nonprofit, amplifies the voices of Ukrainians in the United States and helps with humanitarian efforts and IT training. Razom formed before Russia's 2014 invasion of the Crimean peninsula of Ukraine, Zakhalyavko said. Since the full-scale invasion earlier this year, Razom has seen an understandable increase in donations and volunteers helping its efforts. Razom provides individual first aid kits for soldiers, tourniquets and medic supplies, but also IT training: materials to train the next generation of IT professionals, translated into Ukrainian.

The Linux Foundation and the CNCF are partnering with Razom for Ukraine on its Project Veteranius to provide access to technology education for Ukrainian veterans, their families and Ukrainians in need. "We've realized that basically, we can benefit from the Linux Foundation training portfolio, including the most popular courses like the intro to Linux, or intro to Kubernetes, that can be pretty much easily translated to Ukrainian," Dvoretskyi said. "And in this way, we'll be able to offer the educational materials in their native language."

Ukraine has a pretty bright future. "We just need to get through these difficult times," Dvoretskyi said. "But in the future, it's clear the tech industry in Ukraine is growing. And people are needed for that." Every effort matters, Dvoretskyi said.

"A strong, democratic Ukraine – that's essentially the vision – a European country, a truly European country, that is whole in terms of territorial integrity," Zakhalyavko said. "The future is in technology. And if we can help enable that – in any case, I think that's a win for Ukraine and the world. Technology can make the world a better place."

Nov 3, 2022 • 16min
Redis is not just a Cache
Redis is not just a cache. It is used in the broader cloud native ecosystem, fits into many service-oriented architectures, and simplifies the deployment and development of modern applications, according to Madelyn Olson, a principal engineer at AWS, during an interview on The New Stack Makers at KubeCon North America in Detroit.

Olson said that people often have a primary backend database, or some other workflow that takes a long time to run, and they store the intermediate results in Redis, which provides lower latency and higher throughput. "But there are plenty of other ways you can use Redis," Olson said. "One common way is what I like to call a data projection API. So you basically take a bunch of different sources of data, maybe a Postgres database, or some other type of Cassandra database, and you project that data into Redis. And then you just pull from the Redis instance. This is a really great use case for low latency applications."

Redis creator Salvatore Sanfilippo's approach provides a lesson in how to contribute to open source, which Olson recounted in our interview. Olson said Sanfilippo was the only maintainer with write permissions on the project. That meant contributors would have to engage quite a bit to get a response from him. So Olson did what open source contributors do when they want to get noticed: she "chopped wood and carried water," a term that in open source refers to taking care of the unglamorous tasks that need attention. That helped Sanfilippo scale himself a bit and helped Olson get involved in the project.

It is daunting to get into open source development work, Olson said. A new contributor will face people with a lot more experience and may be afraid to open issues. But if a contributor has a use case and helps with documentation or a bug, most open source maintainers are willing to help. "One big problem throughout open source is, they're usually resource constrained, right?" Olson said. "Open source is oftentimes a lot of volunteers. So they're usually very willing to get more people to help with the project."

What's it like now working at AWS on open source projects? Things have changed a lot since Olson joined AWS in 2015, she said. APIs were proprietary back in those days. Today, it's almost the opposite: keeping something internal now requires approval, Olson said, and internal differentiation is not needed. For example, open source Redis comes first, with AWS on top as the managed service.
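To make Olson’s “data projection” pattern concrete, here is a minimal sketch in Go with the go-redis client: a record fetched from the slow source of truth (stubbed out here) is projected into Redis, and subsequent reads hit Redis alone. The key naming, TTL and address are illustrative assumptions:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

// fetchFromPrimary stands in for the slow source of truth
// (e.g., a Postgres or Cassandra query).
func fetchFromPrimary(userID string) string {
	return `{"id":"` + userID + `","plan":"pro"}`
}

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Project the record into Redis with a TTL so stale data ages out.
	userID := "42"
	if err := rdb.Set(ctx, "user:"+userID, fetchFromPrimary(userID), 10*time.Minute).Err(); err != nil {
		panic(err)
	}

	// Low-latency reads now come straight from the projection.
	doc, err := rdb.Get(ctx, "user:"+userID).Result()
	if err != nil {
		panic(err)
	}
	fmt.Println(doc)
}
```

The projection is disposable by design: if Redis loses it, the next read can rebuild it from the primary store.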

Nov 2, 2022 • 14min
Case Study: How BOK Financial Managed Its Cloud Migration
LOS ANGELES — When you’re deploying a business-critical application to the cloud, it’s nice to not need the “war room” you’ve assembled to troubleshoot Day 1 problems. When BOK Financial, a financial services company that’s been moving apps to the cloud over the last three years, was launching its largest application on the cloud, its engineers supported it with a “war room type situation, monitoring everything,” according to BOK’s Andrew Rau. “After the first day, the system just scaled like it was supposed to … and they're like, ‘OK, I guess we don't need this anymore.’”

In this On the Road episode of The New Stack’s Makers podcast, Rau, BOK’s vice president and manager, cloud services, offered a case study of his organization’s cloud journey over the past few years, and the role HashiCorp’s Vault and Cloud Platform played in it. Rau spoke to Heather Joslyn, features editor of The New Stack, about the challenges of moving a very traditional organization in a highly regulated industry to the cloud while maintaining tight security and resilience. This episode of Makers, recorded in October at HashiConf in Los Angeles, was sponsored by HashiCorp.

Upskilling for ‘Everything as Code’
In late 2019, Rau said, BOK Financial deployed one small application to the cloud, an initial step on its digital transformation journey. It’s been building out its cloud infrastructure ever since, and soon ran into the limits of each cloud provider’s native tooling. “Where we struggled was we didn't want to deploy and manage our clouds in different ways,” he said. “We didn't want our cloud engineers to know just one cloud provider, and their technology and their tech stack. So that's when we really started looking at how else can we do this. And that's when Terraform was a great option for us.”

In 2020, BOK Financial began using HashiCorp’s open source Terraform to automate the creation of cloud infrastructure. “We made a conscious effort to really focus on automation,” Rau said. “We didn't want to do things manually, which is really that traditional data center, how we've done things for decades.” In tandem with adopting Terraform, BOK Financial’s teams began using GitOps processes for CI/CD. But doing “everything as code,” as Rau put it, “required a lot of upskilling for some of our staff, because they've never done version control or automation capabilities. So in addition to learning Terraform, and these other cloud concepts, they had to learn all of that.” The challenge, though, has been worth it: “It's really empowered us to move a lot faster, and give our application teams the ability to deploy at their pace, versus waiting on other teams.”

Seeking Automated Security
It took about a year, Rau said, to get BOK Financial’s developers comfortable using Terraform, largely because many were new to version control procedures and strategies. Because the company works in a highly regulated industry, handling customers’ financial data, security is of utmost importance. “We had user credentials for our clouds, and we had them separated out based on the type of deployment that [developers] were doing,” said Rau. “But it wasn't easy for us to rotate those credentials on a frequent basis. And so we really felt the need that we want to make these short, limited tokens, no more than an hour for that deployment. And so that's where we looked at Vault.” HashiCorp’s secrets storage and management tool proved an easy add-on with Terraform.

“That's really given us the ability to have effectively no credentials — long-lived credentials — out there,” Rau said, “and secure our environment even more.” And because BOK’s teams don’t want to manage Vault and its complexities themselves, the company has opted for HashiCorp Cloud Platform to manage it.

For other organizations on a cloud native journey, Rau recommended taking time to do things right. “We went back to rework some things periodically, because we learned something too late,” he said. Also, he advised, keep stakeholders in the loop: “You need to stay in front of the communication with business partners, IT leaders, that it's going to take longer to set this up. But once you do, it's incredible.”

Check out the podcast to learn more about BOK Financial's cloud native transformation.

Nov 1, 2022 • 42min
Devs and Ops: Can This Marriage Be Saved?
DETROIT — Are we still shifting left? Is it realistic to expect developers to take on the burdens of security and infrastructure provisioning, as well as writing their applications? Is platform engineering the answer to saving the DevOps dream? Bottom line: Do Devs and Ops really talk to each other — or just passive-aggressively swap Jira tickets?

These are some of the topics explored by a panel, “Devs and Ops People: It’s Time for Some Kubernetes Couples Therapy,” convened by The New Stack at KubeCon + CloudNativeCon North America, here in the Motor City, on Thursday. Panelists included Saad Malik, chief technology officer and co-founder of Spectro Cloud; Viktor Farcic, developer advocate at Upbound; Liz Rice, chief open source officer at Isovalent; and Aeris Stewart, community manager at Humanitec. The latest TNS pancake breakfast was hosted by Alex Williams, The New Stack’s founder and publisher, with Heather Joslyn, TNS features editor, fielding questions from the audience. The event was sponsored by Spectro Cloud.

Alleviating Cognitive Load for Devs
A big pain point in the DevOps structure — the marriage of frontend and backend in cross-functional teams — is that all devs aren’t necessarily willing or able to take on all the additional responsibilities demanded of them. A lot of organizations have “copy-pasted this one-size-fits-all approach to DevOps,” said Stewart. “If you look at the tooling landscape, it is rapidly growing not just in terms of the volume of tools, but also the complexity of the tools themselves,” they said. “And developers are in parallel expected to take over an increasing amount of the software delivery process. And all of this, together, is too much cognitive load for them.” This situation also has an impact on operations engineers, who must help alleviate developers’ burdens. “It’s causing a lot of inefficiencies in these organizations,” they added, “and a lot of the same inefficiencies that DevOps was supposed to get rid of.”

Platform engineering — in which operations engineers provide devs with an internal developer platform that abstracts away some of the complexity — is “a sign of hope,” Stewart said, for organizations for whom DevOps is proving tough to implement. The concept behind DevOps is “about making teams self-sufficient, so they have full control of their application, right from the idea until it is running in production,” said Farcic. But, he added, “you cannot expect them to have 17 years of experience in Kubernetes, and AWS and whatnot. And that's where platforms come in. That's how other teams, who have certain expertise, provide services so that those … developers and operators can actually do the work that they're supposed to do, just as operators today are using services from AWS to do their work. So what AWS for Ops is to Ops, to me, that's what internal developer platforms are to application developers.”

Consistency vs. Innovation
Platform engineering has been a hot topic in DevOps circles (and at KubeCon), but the definition remains a bit fuzzy, the panelists acknowledged. (“In a lot of organizations, ‘platform engineering’ is just a fancy new way of saying ‘Ops,’” said Rice.) The audience served up questions to the panel about the limits of the DevOps model and how platform engineering fits into that discussion. One audience member asked about balancing the need to provide a consistent platform to an organization’s developers while also allowing devs to customize and innovate.

Malik said that both consistency and innovation are possible in a platform engineering structure. An organization will decide where it wants to provide that abstraction, he said: “When they think about where they want to be as a whole, they could think about, hey, when we provide our platform, we're going to be providing everything from security to CI/CD from GitHub, from repository management; this is what you will get if you use our IDP or platform itself.”

But “there are going to be unique use cases,” Malik added, such as developers who are building a new blockchain technology or running WebAssembly. “I think it's okay to give those development teams the ability to run their own platform, as long as you tell them these are the areas that you have to be responsible for,” he said. “You're responsible for your own security, your own backup, your own retention capabilities.”

One audience member mentioned “Team Topologies,” a 2019 engineering management book by Manuel Pais and Matthew Skelton, and asked the panel if platform engineering is related to DevOps in that it’s more of an approach to engineering management than a destination. “Platform engineering is in the budding stage of its evolution,” said Stewart. “And right now, it's really focused on addressing the problems that organizations ran into when they were implementing DevOps.” They added, “I think as we see the community come together more and get more best practices about how to develop platforms, you will see it become more than just a different approach to DevOps and become something more distinct. But I don't think it's there quite yet.”

Check out the full panel discussion to hear more from our DevOps “counseling session.”

Oct 26, 2022 • 18min
Latest Enhancements to HashiCorp Terraform and Terraform Cloud
What is Terraform?
Terraform is HashiCorp’s flagship software. The open source tool provides a way to define IT resources — such as monitoring software or cloud services — in human-readable configuration files. These files, which serve as blueprints, can then be used to automatically provision the systems themselves. Kubernetes deployments, for instance, can be streamlined through Terraform. "Terraform basically translates what was codified in your configuration, and provisions it to that desired end state," explained Meghan Liese, HashiCorp vice president of product and partner marketing, in this podcast and video recording, recorded at the company's user conference, HashiConf 2022, held this month in Los Angeles. In this interview, Liese discusses the latest enhancements to Terraform and Terraform Cloud, a managed service offering that is part of the HashiCorp Cloud Platform.

Why Should Developers be Interested in Terraform?
Typically, DevOps teams or system administrators use Terraform to provision infrastructure, but there is also growing interest in allowing developers to do it themselves, in a self-service fashion, Liese explained. Multicloud skills are in short supply, concluded the 2022 HashiCorp State of Cloud Strategy Survey, so making the provisioning process easier could help more developers, the company reckons. A Terraform self-service model, which was introduced earlier this year, could “cut down on the training an organization would need to do to get developers up to speed on using the infrastructure-as-code software,” Liese said. In this “no code” setup, developers can pick from a catalog of no-code-ready modules, which can be deployed directly to workspaces. There is no need to learn the HCL configuration language, and administrators no longer have to answer the same “how-do-I-do-this-in-HCL?” queries. The new console interface aims to greatly expand the use of Terraform. The company has offered self-service options for a while, by way of an architecture that allows modules to be reused through the private registry for Terraform Cloud and Terraform Enterprise.

What is the Moved Code Block and Why is it Important?
The recent release of Terraform 1.3 came with the promise to greatly reduce the amount of code HCL jockeys must manage, through the improvement of the moved code block. Actually, moved has been available since Terraform 1.1, but some kinks were worked out for this latest release. What moved provides is the ability to refactor resources within a Terraform configuration file, moving large code blocks off into separate modules, where they can be discovered through a public or private registry.

What is Continuous Validation?
With the known state of a system captured in Terraform, it is a short step to checking that the actual running system is identical to the desired state captured in HCL. Many times “drift” can occur as administrators, or even the apps themselves, make changes to the system. Especially in regulated environments, such as hospitals, it is essential that a system be in a correct state. Earlier this year, HashiCorp added Drift Detection to Terraform Cloud to continuously check infrastructure state, detect changes, provide alerts and offer remediation if that option is chosen. Now another update, continuous validation, expands these checks to include user assertions, or post-conditions, as well. One post-condition might be ensuring that certificates haven’t expired; if they have, the software can alert the admin to update the certs. Another condition might check for new container images, which may have been updated in response to a security patch.
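For readers who haven't seen these features, here is a rough HCL sketch of both ideas, assuming Terraform 1.3 or later; the resource addresses, module path and URL are hypothetical, not taken from the episode:

```hcl
# Refactoring with a moved block: the resource's definition now lives in
# a module, and "moved" tells Terraform it is the same object, so the
# next apply renames it in state instead of destroying and recreating it.
moved {
  from = aws_instance.app
  to   = module.app_server.aws_instance.app
}

# A post-condition of the kind continuous validation can keep re-checking:
# fail if any certificate served at this URL has expired.
data "tls_certificate" "api" {
  url = "https://api.example.com"

  lifecycle {
    postcondition {
      condition = alltrue([
        for c in self.certificates : timecmp(timestamp(), c.not_after) < 0
      ])
      error_message = "A certificate at api.example.com has expired."
    }
  }
}
```

The post-condition is evaluated on every plan or apply; Terraform Cloud's continuous validation re-evaluates such assertions on a schedule, which is how an expired certificate surfaces as an alert rather than an outage.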

Oct 20, 2022 • 27min
How ScyllaDB Helped an AdTech Company Focus on Core Business
GumGum is a company whose platform serves up online ads related to the context in which potential customers are already shopping or searching. (For instance: it will send ads for Zurich restaurants to someone who’s booked travel to Switzerland.) To handle that granular targeting, it relies on its proprietary machine learning platform, Verity. “For all of our publishers, we send a list of URLs to Verity,” according to Keith Sader, GumGum’s director of engineering. “Verity goes in and basically categorizes those URLs as different [IAB] categories. So the IAB has tons of taxonomies, based on autos, based upon clothing, based upon entertainment. And then that's how we do our targeting.”

Verity’s targeting data is stored in DynamoDB, the rest of GumGum’s data is stored in managed MySQL, and its daily tracking data is stored in ScyllaDB, a database designed for data-intensive applications. Scylla, Sader said, helps his company avoid serving audiences the same ads over and over again, by keeping track of which ads customers have already seen. “That’s where Scylla comes into the picture for us,” he said. “Scylla is our rate limiter on ad serving.”

In this episode of The New Stack’s Makers podcast, Sader and Dor Laor, CEO and co-founder of ScyllaDB, told how GumGum has used ScyllaDB to shift more IT resources to its core business and keep it from repeating ads to audiences that have already seen them, no matter where they travel. This case study episode of Makers, hosted by Heather Joslyn, TNS features editor, was sponsored by ScyllaDB.

‘Where Do We Spend Our Limited Funds?’
Before adding ScyllaDB to its stack, Sader said, “We had a Cassandra-based system that some very smart people put in. But Cassandra relies upon you to have an engineering staff to support it. That’s great. But like many types of systems, managing Cassandra databases is not really what our business makes money at.” GumGum was hosting its Cassandra database, installed on Amazon Web Services, by itself — and the drain on resources brought the company’s teams to a crossroads, Sader said. “Where do we spend our limited funds? Do we spend it on Cassandra maintenance? Or do we hire someone to do it for us? And that’s really what determined the switch away from a sort of self-installed, self-managed Cassandra to another provider.”

A core issue for GumGum, Sader said, was making sure that it wasn’t over-serving consumers, even as they moved around the globe. “If you see an ad in one place, we need to make sure, if you fly across the country, you don’t see it again,” he said. That’s an issue Cassandra solved for his company, he said. Because ScyllaDB is a drop-in replacement for Apache Cassandra, it also helped prevent over-serving in all regions of the globe — thus preventing GumGum from losing money.

In addition to managing its database for GumGum and other customers, Laor said, an advantage ScyllaDB brings is an “always on” guarantee. “We have a big legacy of infrastructure that's supposed to be resilient,” he said. “For example, every implementation of ours has configurable consistency, so you can have multiple replicas.” Laor added, “Many, many times organizations have multiple data centers. Sometimes it's for disaster recovery, sometimes it's also to shorten the latency and be closer to the client.” Replica databases located in data centers that are geographically distributed, he said, protect against failure in any one data center.

Seeing Results
Bringing ScyllaDB to GumGum was not without challenges, both Sader and Laor said. When ScyllaDB is added to an organization’s stack, Laor said, the company likes to start with as small a deployment as possible. “But in the GumGum case, all of these clients were new processes,” Laor said. “So hundreds or thousands of processes, all trying to connect to the database, it's really a connection storm.” Scylla’s team created a private version of its database to work on the problem and eventually solved it: “We had to massage the algorithm and make sure that all of the [open source] code committers upstream are summing it up.” It ultimately designed an admission control mechanism that measures the amount of parallel requests the distributed database is handling and slows down requests that arrive for the first time from a new process. “We tried to have the complexity on our end,” Laor said.

GumGum has seen the results of handing off that complexity and toil to a managed database. “We have pretty much reduced our entire operations effort with Scylla to almost nothing,” Sader said. He added: “We're coming into our busy point of the year; ads really get picked up in Q4. So we reach out and go, ‘Hey, we need more nodes in these regions, can you make that happen for us?’ They go, ‘Yep.’ Give us the things, we pay the money. And it happens.” In 2021, Sader said, “we increased our volume by probably 75% plus 50%, over our standard. The toughest thing to do in this industry is make things look easy. And Scylla helped us make ad serving look easy.”

Check out the podcast to get more detail about GumGum’s move to a managed database.
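Because Scylla speaks the Cassandra wire protocol, Sader's "rate limiter on ad serving" idea can be sketched with the standard gocql driver. Everything below (host, keyspace, table and cap value) is hypothetical; it only shows the shape of a frequency-cap check and increment:

```go
package main

import (
	"fmt"
	"log"

	"github.com/gocql/gocql"
)

const maxImpressions = 3 // hypothetical per-user, per-ad frequency cap

func main() {
	// ScyllaDB is Cassandra-compatible, so gocql works unchanged.
	cluster := gocql.NewCluster("scylla1.example.com")
	cluster.Keyspace = "ads"
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	userID, adID := "user-42", "ad-7"

	// How many times has this user already seen this ad?
	var seen int64
	if err := session.Query(
		`SELECT impressions FROM ad_counters WHERE user_id = ? AND ad_id = ?`,
		userID, adID).Scan(&seen); err != nil && err != gocql.ErrNotFound {
		log.Fatal(err)
	}
	if seen >= maxImpressions {
		fmt.Println("frequency cap reached: serve a different ad")
		return
	}

	// Record the impression; a counter column lets the database do the
	// increment server-side, and multi-datacenter replication is what
	// keeps the count consistent as a user moves between regions.
	if err := session.Query(
		`UPDATE ad_counters SET impressions = impressions + 1
		 WHERE user_id = ? AND ad_id = ?`,
		userID, adID).Exec(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("ad served and impression recorded")
}
```

In a managed deployment like the one described in the episode, the application keeps only this client-side logic; node counts, regions and replication stay on the provider's side of the line.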


