

The New Stack Podcast
The New Stack
The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software.
For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack
Episodes

Nov 4, 2022 • 16min
Ukraine Has a Bright Future
Ukraine has a bright future. It will soon be time to rebuild. But rebuilding requires more than the resources needed to construct a hydroelectric plant or a hospital. It involves software and an understanding of how to use it. Ihor Dvoretskyi, developer advocate at the Cloud Native Computing Foundation (CNCF), and Dima Zakhalyavko, board member at Razom for Ukraine, came to KubeCon in Detroit to discuss the push to provide training materials for Ukraine as it rebuilds from the destruction caused by Russia's invasion. Razom, a nonprofit, amplifies the voices of Ukrainians in the United States and helps with humanitarian efforts and IT training. Razom formed before Russia's 2014 invasion of the Crimean peninsula of Ukraine, Zakhalyavko said. Since the full-scale invasion earlier this year, Razom has seen an understandable increase in donations and volunteers helping its efforts. Razom provides individual first aid kits for soldiers, tourniquets, and medical supplies, but also IT training: materials to educate the next generation of IT workers, translated into Ukrainian. The Linux Foundation and the CNCF are partnering with Razom for Ukraine on its Project Veteranius to provide access to technology education for Ukrainian veterans, their families, and Ukrainians in need. "We've realized that basically, we can benefit from the Linux Foundation training portfolio, including the most popular courses like the intro to Linux, or intro to Kubernetes, that can be pretty much easily translated to Ukrainian," Dvoretskyi said. "And in this way, we'll be able to offer the educational materials in their native language." Ukraine has a pretty bright future. "We just need to get through these difficult times," Dvoretskyi said. "But in the future, it's clear the tech industry in Ukraine is growing. And people are needed for that." Every effort matters, Dvoretskyi said. "A strong, democratic Ukraine – that's essentially the vision – a European country, a truly European country, that is whole in terms of territorial integrity," Zakhalyavko said. "The future is in technology. And if we can help enable that – in any case, I think that's a win for Ukraine and the world. Technology can make the world a better place."

Nov 3, 2022 • 16min
Redis Is Not Just a Cache
Redis is not just a cache. It is used across the broader cloud native ecosystem, fits into many service-oriented architectures, and simplifies the deployment and development of modern applications, according to Madelyn Olson, a principal engineer at AWS, during an interview on The New Stack Makers at KubeCon North America in Detroit. Olson said people often have a primary backend database or some other workflow that takes a long time to run, and they store the intermediate results in Redis, which provides lower latency and higher throughput. "But there are plenty of other ways you can use Redis," Olson said. "One common way is what I like to call a data projection API. So you basically take a bunch of different sources of data, maybe a Postgres database, or some other type of Cassandra database, and you project that data into Redis. And then you just pull from the Redis instance. This is a really great, great use case for low latency applications." Redis creator Salvatore Sanfilippo's approach provides a lesson in how to contribute to open source, which Olson recounted in our interview. Olson said Sanfilippo was the only maintainer with write permissions on the project, which meant contributors had to engage quite a bit to get a response from him. So Olson did what open source contributors do when they want to get noticed: she "chopped wood and carried water," an open source expression for quietly taking care of the tasks that need attention. That helped Sanfilippo scale himself a bit and helped Olson get involved in the project. Getting into open source development can be daunting, Olson said. A new contributor faces people with far more experience and may be afraid to open issues. But if a contributor has a use case and helps with documentation or a bug, most open source maintainers are willing to help. "One big problem throughout open source is, they're usually resource constrained, right?" Olson said. "Open source is oftentimes a lot of volunteers. So they're usually very willing to get more people to help with the project." What's it like now working at AWS on open source projects? Things have changed a lot since she joined AWS in 2015, Olson said. APIs were proprietary back in those days. Today, it's almost the opposite: keeping something internal now requires approval, and internal differentiation is not needed. For example, open source Redis is what matters most, with AWS on top as the managed service.
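To make Olson's "data projection" pattern concrete, here is a minimal Python sketch, assuming a local Postgres database and Redis instance; the products table, its columns, and the key scheme are hypothetical, illustrative choices rather than anything from the interview.

```python
# Project rows from a primary store (Postgres) into Redis, then serve
# low-latency reads from Redis alone. Table and key names are hypothetical.
import psycopg2
import redis

pg = psycopg2.connect("dbname=shop user=app")  # primary database
r = redis.Redis(host="localhost", port=6379)   # projection target

def project_products() -> None:
    """Copy the products table into Redis as one hash per row."""
    with pg.cursor() as cur:
        cur.execute("SELECT id, name, price FROM products")
        for pid, name, price in cur.fetchall():
            key = f"products:{pid}"
            r.hset(key, mapping={"name": name, "price": str(price)})
            # Expire entries so rows deleted upstream eventually age out.
            r.expire(key, 3600)

def get_product(pid: int) -> dict:
    """Read path: hits only Redis, never the primary database."""
    return {k.decode(): v.decode()
            for k, v in r.hgetall(f"products:{pid}").items()}
```

A scheduler or change-data-capture consumer would call project_products() periodically, while application reads go through get_product() for the low-latency path Olson describes.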

Nov 2, 2022 • 14min
Case Study: How BOK Financial Managed Its Cloud Migration
LOS ANGELES — When you’re deploying a business-critical application to the cloud, it’s nice to not need the “war room” you’ve assembled to troubleshoot Day 1 problems. When BOK Financial, a financial services company that’s been moving apps to the cloud over the last three years, was launching its largest application on the cloud, its engineers supported it with a “war room type situation, monitoring everything,” according to BOK’s Andrew Rau. “After the first day, the system just scaled like it was supposed to … and they're like, ‘OK, I guess we don't need this anymore.’” In this On the Road episode of The New Stack’s Makers podcast, Rau, BOK’s vice president and manager, cloud services, offered a case study about his organization’s cloud journey over the past four years, and the role HashiCorp’s Vault and Cloud Platform played in it. Rau spoke to Heather Joslyn, features editor of The New Stack, about the challenges of moving a very traditional organization in a highly regulated industry to the cloud while maintaining tight security and resilience. This episode of Makers, recorded in October at HashiConf in Los Angeles, was sponsored by HashiCorp.

Upskilling for ‘Everything as Code’

In late 2019, Rau said, BOK Financial deployed one small application to the cloud, an initial step on its digital transformation journey. It’s been building out its cloud infrastructure ever since, and soon ran into the limits of each cloud provider’s native tooling. “Where we struggled was we didn't want to deploy and manage our clouds in different ways,” he said. “We didn't want our cloud engineers to know just one cloud provider, and their technology and their tech stack. So that's when we really started looking at how else can we do this. And that's when Terraform was a great option for us.” In 2020, BOK Financial began using HashiCorp’s open source Terraform to automate the creation of cloud infrastructure. “We made a conscious effort to really focus on automation,” Rau said. “We didn't want to do things manually, which is really that traditional data center, how we've done things for decades.” In tandem with adopting Terraform, BOK Financial’s teams began using GitOps processes for CI/CD. But doing “everything as code,” as Rau put it, “required a lot of upskilling for some of our staff, because they've never done version control or automation capabilities. So in addition to learning Terraform, and these other cloud concepts, they had to learn all of that.” The challenge, though, has been worth it: “It's really empowered us to move a lot faster, and give our application teams the ability to deploy at their pace, versus waiting on other teams.”

Seeking Automated Security

It took about a year, Rau said, to get BOK Financial’s developers comfortable using Terraform, largely because many were new to version control procedures and strategies. Because the company works in a highly regulated industry, handling customers’ financial data, security is of utmost importance. “We had user credentials for our clouds, and we had them separated out based on the type of deployment that [developers] were doing,” said Rau. “But it wasn't easy for us to rotate those credentials on a frequent basis. And so we really felt the need that we want to make these short, limited tokens, no more than an hour for that deployment. And so that's where we looked at Vault.” HashiCorp’s secrets storage and management tool proved an easy add-on with Terraform.
“That's really given us the ability to have effectively no credentials — long-lived credentials — out there,” Rau said. “And secure our environment even more.” And because BOK’s teams don’t want to manage Vault and its complexities themselves, it has opted for HashiCorp Cloud Platform to manage it. For other organizations on a cloud native journey, Rau recommended taking time to do things right. “We went back to rework some things periodically, because we learned something too late,” he said. Also, he advised, keep stakeholders in the loop: “You need to stay in front of the communication with business partners, IT leaders, that it's going to take longer to set this up. But once you do, it's incredible.” Check out the podcast to learn more about BOK Financial's cloud native transformation.
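As an illustration of the short-lived-credentials pattern Rau describes, here is a minimal sketch using Vault's AWS secrets engine through the hvac Python client; the Vault address and the "deploy" role name are hypothetical, and this is a generic sketch rather than BOK Financial's actual configuration.

```python
# Mint short-lived cloud credentials from Vault instead of storing
# long-lived ones. Address and role name are hypothetical.
import os
import hvac

client = hvac.Client(url="https://vault.example.com:8200",
                     token=os.environ["VAULT_TOKEN"])

# Vault creates AWS credentials on demand, scoped to the "deploy" role
# and valid for at most one hour, then revokes them when the lease ends.
resp = client.secrets.aws.generate_credentials(name="deploy", ttl="1h")
print(resp["data"]["access_key"])  # temporary access key for the deploy
print(resp["lease_duration"])      # seconds until Vault revokes it
```

Because every deployment asks Vault for fresh credentials, there is nothing long-lived to rotate by hand, which is the property Rau's team was after.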

Nov 1, 2022 • 42min
Devs and Ops: Can This Marriage Be Saved?
DETROIT — Are we still shifting left? Is it realistic to expect developers to take on the burdens of security and infrastructure provisioning, as well as writing their applications? Is platform engineering the answer to saving the DevOps dream? Bottom line: Do Devs and Ops really talk to each other — or just passive-aggressively swap Jira tickets? These are some of the topics explored by a panel, “Devs and Ops People: It’s Time for Some Kubernetes Couples Therapy,” convened by The New Stack at KubeCon + CloudNativeCon North America, here in the Motor City, on Thursday. Panelists included Saad Malik, chief technology officer and co-founder of Spectro Cloud; Viktor Farcic, developer advocate at Upbound; Liz Rice, chief open source officer at Isovalent; and Aeris Stewart, community manager at Humanitec. The latest TNS pancake breakfast was hosted by Alex Williams, The New Stack’s founder and publisher, with Heather Joslyn, TNS features editor, fielding questions from the audience. The event was sponsored by Spectro Cloud.

Alleviating Cognitive Load for Devs

A big pain point in the DevOps structure — the marriage of frontend and backend in cross-functional teams — is that all devs aren’t necessarily willing or able to take on all the additional responsibilities demanded of them. A lot of organizations have “copy-pasted this one size fits all approach to DevOps,” said Stewart. “If you look at the tooling landscape, it is rapidly growing not just in terms of the volume of tools, but also the complexity of the tools themselves,” they said. “And developers are in parallel expected to take over an increasing amount of the software delivery process. And all of this, together, is too much cognitive load for them.” This situation also has an impact on operations engineers, who must help alleviate developers’ burdens. “It’s causing a lot of inefficiencies of these organizations,” they added, “and a lot of the same inefficiencies that DevOps was supposed to get rid of.” Platform engineering — in which operations engineers provide devs with an internal developer platform that abstracts away some of the complexity — is “a sign of hope,” Stewart said, for organizations for whom DevOps is proving tough to implement. The concept behind DevOps is “about making teams self-sufficient, so they have full control of their application, right from the idea until it is running in production,” said Farcic. But, he added, “you cannot expect them to have 17 years of experience in Kubernetes, and AWS and whatnot. And that's where platforms come in. That's how other teams, who have certain expertise, provide services so that those … developers and operators can actually do the work that they're supposed to do, just as operators today are using services from AWS to do their work. So what AWS is to Ops, to me, that's what internal developer platforms are to application developers.”

Consistency vs. Innovation

Platform engineering has been a hot topic in DevOps circles (and at KubeCon) but the definition remains a bit fuzzy, the panelists acknowledged. (“In a lot of organizations, ‘platform engineering’ is just a fancy new way of saying ‘Ops,’” said Rice.) The audience served up questions to the panel about the limits of the DevOps model and how platform engineering fits into that discussion. One audience member asked about balancing the need to provide a consistent platform to an organization’s developers while also allowing devs to customize and innovate.
Malik said that both consistency and innovation are possible in a platform engineering structure. “An organization will decide where they want to be able to provide that abstraction,” he said, adding, “When they think about where they want to be as a whole, they could think about, hey, when we provide our platform, we're going to be providing everything from security to CI/CD from GitHub, from repository management; this is what you will get if you use our IDP or platform itself.” But “there are going to be unique use cases,” Malik added, such as developers who are building a new blockchain technology or running WebAssembly. “I think it's okay to give those development teams the ability to run their own platform, as long as you tell them, these are the areas that you have to be responsible for,” he said. “You're responsible for your own security, your own backup, your own retention capabilities.” One audience member mentioned “Team Topologies,” a 2019 engineering management book by Manuel Pais and Matthew Skelton, and asked the panel if platform engineering is related to DevOps in that it’s more of an approach to engineering management than a destination. “Platform engineering is in the budding stage of its evolution,” said Stewart. “And right now, it's really focused on addressing the problems that organizations ran into when they were implementing DevOps.” They added, “I think as we see the community come together more and get more best practices about how to develop platforms, you will see it become more than just a different approach to DevOps and become something more distinct. But I don't think it's there quite yet.” Check out the full panel discussion to hear more from our DevOps “counseling session.”

Oct 26, 2022 • 18min
Latest Enhancements to HashiCorp Terraform and Terraform Cloud
What is Terraform?

Terraform is HashiCorp’s flagship software. The open source tool provides a way to define IT resources — such as monitoring software or cloud services — in human-readable configuration files. These files, which serve as blueprints, can then be used to automatically provision the systems themselves. Kubernetes deployments, for instance, can be streamlined through Terraform. "Terraform basically translates what was codified in your configuration, and provisions it to that desired end state," explained Meghan Liese, HashiCorp vice president of product and partner marketing, in this podcast and video interview, recorded at the company's user conference, HashiConf 2022, held this month in Los Angeles. In this interview, Liese discusses the latest enhancements to Terraform and Terraform Cloud, a managed service offering that is part of the HashiCorp Cloud Platform.

Why Should Developers be Interested in Terraform?

Typically, DevOps teams or system administrators use Terraform to provision infrastructure, but there is growing interest in allowing developers to do it themselves, in a self-service fashion, Liese explained. Multicloud skills are in short supply, concluded the 2022 HashiCorp State of Cloud Strategy Survey, so making the provisioning process easier could help more developers, the company reckons. A Terraform self-service model, introduced earlier this year, could “cut down on the training an organization would need to do to get developers up to speed on using the infrastructure-as-code software,” Liese said. In this “no code” setup, developers can pick from a catalog of no-code-ready modules, which can be deployed directly to workspaces. There is no need to learn the HCL configuration language, and administrators no longer have to answer the same “how-do-I-do-this-in-HCL?” queries. The new console interface aims to greatly expand the use of Terraform. The company has been offering self-service options for a while, by way of an architecture that allows modules to be reused through the private registry for Terraform Cloud and Terraform Enterprise.

What is the Moved Code Block and Why is it Important?

The recent release of Terraform 1.3 came with the promise of greatly reducing the amount of code HCL jockeys must manage, through improvements to the moved code block. The moved block has actually been available since Terraform 1.1, but some kinks were worked out for this latest release. What moved provides is the ability to refactor resources within a Terraform configuration file, moving large code blocks off as separate modules, where they can be discovered through a public or private registry.

What is Continuous Validation?

With the known state of a system captured in Terraform, it is a short step to check that the actual running system is identical to the desired state captured in HCL. "Drift" often occurs as administrators, or even the apps themselves, make changes to the system. Especially in regulated environments, such as hospitals, it is essential that a system be in a correct state. Earlier this year, HashiCorp added Drift Detection to Terraform Cloud to continuously check infrastructure state, detect changes, provide alerts, and offer remediation if that option is chosen. Now another update, continuous validation, expands these checks to include user assertions, or post-conditions, as well.
One post-condition may be something like ensuring that certificates haven’t expired. If they have, the software can alert the admin to update the certs. Another condition might check for new container images, which may have been updated in response to a security patch.
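As a standalone illustration of that first assertion, the following Python sketch checks how close a host's TLS certificate is to expiry; it mimics what a continuous validation post-condition would assert but is not Terraform Cloud's implementation, and the host name is a placeholder.

```python
# Check that a host's TLS certificate is not within 30 days of expiry,
# the kind of assertion a continuous validation post-condition encodes.
import socket
import ssl
import time

def days_until_cert_expiry(host: str, port: int = 443) -> float:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()  # validated peer certificate
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires - time.time()) / 86400  # seconds per day

assert days_until_cert_expiry("example.com") > 30, "certificate expiring soon"
```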

Oct 20, 2022 • 27min
How ScyllaDB Helped an AdTech Company Focus on Core Business
GumGum is a company whose platform serves up online ads related to the context in which potential customers are already shopping or searching. (For instance: it will send ads for Zurich restaurants to someone who’s booked travel to Switzerland.) To handle that granular targeting, it relies on its proprietary machine learning platform, Verity. “For all of our publishers, we send a list of URLs to Verity,” according to Keith Sader, GumGum’s director of engineering. “Verity goes in and basically categorizes those URLs as different [IAB] categories. So the IAB has tons of taxonomies, based on autos, based upon clothing, based upon entertainment. And then that's how we do our targeting.” Verity’s targeting data is stored in DynamoDB, but the rest of GumGum’s data is stored in managed MySQL, and its daily tracking data is stored in ScyllaDB, a database designed for data-intensive applications. Scylla, Sader said, helps his company avoid serving audiences the same ads over and over again, by keeping track of which ads customers have already seen. “That’s where Scylla comes into the picture for us,” he said. “Scylla is our rate limiter on ad serving.” In this episode of The New Stack’s Makers podcast, Sader and Dor Laor, CEO and co-founder of ScyllaDB, told how GumGum has used ScyllaDB to shift more IT resources to its core business and to keep it from repeating ads to audiences that have already seen them, no matter where they travel. This case study episode of Makers, hosted by Heather Joslyn, TNS features editor, was sponsored by ScyllaDB.

‘Where Do We Spend Our Limited Funds?’

Before adding ScyllaDB to its stack, Sader said, “We had a Cassandra-based system that some very smart people put in. But Cassandra relies upon you to have an engineering staff to support it. That’s great. But like many types of systems, managing Cassandra databases is not really what our business makes money at.” GumGum was hosting its Cassandra database, installed on Amazon Web Services, by itself — and the drain on resources brought the company’s teams to a crossroads, Sader said. “Where do we spend our limited funds? Do we spend it on Cassandra maintenance? Or do we hire someone to do it for us? And that’s really what determined the switch away from a sort of self-installed, self-managed Cassandra to another provider.” A core issue for GumGum, Sader said, was making sure that it wasn’t over-serving consumers, even as they moved around the globe. “If you see an ad in one place, we need to make sure, if you fly across the country, you don’t see it again,” he said. That’s an issue Cassandra solved for his company, he said. Because ScyllaDB is a drop-in replacement for Apache Cassandra, it also helped prevent over-serving in all regions of the globe — thus preventing GumGum from losing money. In addition to managing its database for GumGum and other customers, Laor said that an advantage ScyllaDB brings is an “always on” guarantee. “We have a big legacy of infrastructure that's supposed to be resilient,” he said. “For example, every implementation of ours has configurable consistency, so you can have multiple replicas.” Laor added, “Many, many times organizations have multiple data centers. Sometimes it's for disaster recovery, sometimes it's also to shorten the latency and be closer to the client.” Replica databases located in data centers that are geographically distributed, he said, protect against failure in any one data center.
Seeing Results

Bringing ScyllaDB to GumGum was not without challenges, both Sader and Laor said. When ScyllaDB is added to an organization’s stack, Laor said, it likes to start with as small a deployment as possible. “But in the GumGum case, all of these clients were new processes,” Laor said. “So hundreds or thousands of processes, all trying to connect to the database, it's really a connection storm.” Scylla’s team created a private version of its database to work on the problem and eventually solved it: “We had to massage the algorithm and make sure that all of the [open source] code committers upstream are summing it up.” It ultimately designed an admission control mechanism that measures the number of parallel requests the distributed database is handling and slows down requests arriving for the first time from a new process. “We tried to have the complexity on our end,” Laor said. GumGum has seen the results of handing off that complexity and toil to a managed database. “We have pretty much reduced our entire operations effort with Scylla, to almost nothing,” Sader said. He added, “We're coming into our busy point of the year; ads really get picked up in Q4. So we reach out and go, ‘Hey, we need more nodes in these regions, can you make that happen for us?’ They go, ‘Yep.’ Give us the things, we pay the money. And it happens.” In 2021, Sader said, “we increased our volume by probably 75% plus 50%, over our standard. The toughest thing to do in this industry is make things look easy. And Scylla helped us make ad serving look easy.” Check out the podcast to get more detail about GumGum’s move to a managed database.
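To show the shape of the "rate limiter on ad serving" idea Sader describes, here is a minimal frequency-capping sketch using the Python Cassandra driver, which also speaks to ScyllaDB; the keyspace, table, and one-day cap are hypothetical choices, not GumGum's schema.

```python
# Record each (user, ad) impression with a TTL so the same ad is not
# re-served within the capping window. Keyspace/table are hypothetical.
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("adserving")
session.execute("""
    CREATE TABLE IF NOT EXISTS impressions (
        user_id text, ad_id text,
        PRIMARY KEY (user_id, ad_id))""")

CAP_SECONDS = 86400  # one-day frequency cap

def should_serve(user_id: str, ad_id: str) -> bool:
    """Serve only if no live impression row exists for this user/ad pair."""
    row = session.execute(
        "SELECT ad_id FROM impressions WHERE user_id=%s AND ad_id=%s",
        (user_id, ad_id)).one()
    return row is None

def record_impression(user_id: str, ad_id: str) -> None:
    # USING TTL makes the row expire, reopening the cap window.
    session.execute(
        "INSERT INTO impressions (user_id, ad_id) VALUES (%s, %s) USING TTL %s",
        (user_id, ad_id, CAP_SECONDS))
```

Replicating the impressions table across data centers is what keeps the cap intact when a user "flies across the country," as Sader puts it.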

Oct 19, 2022 • 14min
Terraform's Best Practices and Pitfalls
Wix is a cloud-based development site for making HTML5 websites and mobile sites with drag-and-drop tools. It suits both the beginning user and the advanced developer, said Hila Fish, senior DevOps engineer at Wix, in an interview for The New Stack Makers at HashiCorp’s HashiConf Global conference in Los Angeles earlier this month. Our questions for Fish focused on Terraform, the open source infrastructure-as-code software tool:
How has Terraform evolved in use since Fish started working with it in 2018?
How does Wix make the most of Terraform to scale its infrastructure?
What are some best practices Wix has used with Terraform?
What are some pitfalls to avoid with Terraform?
What is the approach to scaling across teams and avoiding refactoring, to keep the integrations elegant and working?
Fish started using Terraform in an ad hoc manner back in 2018. Over time she has learned how to use it for scaling operations. “If you want to scale your infrastructure, you need to use Terraform in a way that will allow you to do that,” Fish said. Terraform can be used ad hoc to create a machine as a resource, but scale comes from enabling infrastructure that lets engineers develop templates that get reused across many servers. “You need to use it in a way that will allow you to scale up as much as you can,” Fish said. Best practices, Fish said, start with how the Terraform code base is structured. Much of it comes down to the teams and how Terraform gets implemented. Engineers each have their own way of working; standard practices can help. A structured code base also pays off in onboarding: new teams can reuse modules already in the code base. And what are some of the pitfalls of using Terraform? We get to that in the recording, along with more about integrations, why Wix is still on version 0.13, and some new capabilities for developers to use Terraform. Users have historically needed to learn the HashiCorp configuration language (HCL) to use Terraform. At Wix, Fish said, the company is implementing Terraform on the backend with a UI that developers can use without needing to learn HCL.
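One way such a no-HCL backend might look, as a generalized sketch rather than Wix's implementation: a service renders the developer's form input into Terraform's JSON configuration syntax (*.tf.json) and shells out to the Terraform CLI, so the requesting developer never touches HCL. The resource shape and AMI value below are placeholders.

```python
# Turn a UI form submission into a Terraform JSON config and apply it.
# Resource shape and AMI value are placeholders, not Wix's setup.
import json
import subprocess

def provision_server(name: str, size: str, workdir: str = ".") -> None:
    config = {
        "resource": {
            "aws_instance": {
                name: {"ami": "ami-12345678", "instance_type": size}
            }
        }
    }
    with open(f"{workdir}/main.tf.json", "w") as f:
        json.dump(config, f, indent=2)
    subprocess.run(["terraform", "init"], cwd=workdir, check=True)
    subprocess.run(["terraform", "apply", "-auto-approve"],
                   cwd=workdir, check=True)

# A web handler would call provision_server(form["name"], form["size"]).
```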

Oct 18, 2022 • 13min
How Can Open Source Help Fight Climate Change?
DUBLIN — The mission of Linux Foundation Energy — a collaborative, international effort by power companies to help move the world away from fossil fuels — has never seemed more urgent. In addition to the increased frequency and ferocity of extreme weather events like hurricanes and heat waves, the war between Russia and Ukraine has oil-dependent countries looking ahead to a winter of likely energy shortages. “I think we need to go faster,” said Benoît Jeanson, an enterprise architect at RTE, the French electricity transmission system operator. He added, “What we are doing with the Linux Foundation Energy is really something that will help for the future, and we need to go faster and faster.” For this On the Road episode of The New Stack’s Makers podcast, recorded at Open Source Summit Europe here, we were joined by two guests who work in the power industry and whose organizations are part of LF Energy. In addition to Jeanson, this episode featured Jonas van den Bogaard, a solution architect and open source ambassador at Alliander, an energy network company that provides energy transport and distribution to a large part of the Netherlands. Van den Bogaard also serves on the technical advisory council of LF Energy. Heather Joslyn, features editor of TNS, hosted this conversation.

18 Open Source Projects

LF Energy, started in 2018, now includes 59 member organizations, including cloud providers Google and Microsoft, enterprises like General Electric, and research institutions like Stanford University. It currently hosts 18 open source projects; the podcast guests encouraged listeners to check them out and contribute to them. Among them: OpenSTEF, automated machine learning pipelines that deliver accurate forecasts of the load on the energy grid 48 hours ahead of time. “It gives us the opportunity to take action in time to prevent the maximum grid capacity [from being] reached,” said van den Bogaard. “That’s going to prevent blackouts and that sort of thing. And also, another side: it makes us able to add renewable energies to the grid.” Jeanson said that the open source projects aim to cover “every level of the stack. We also have tools that we want to develop at the substation level, in the field.” Another project: OperatorFabric. Written in Java and based on the Spring framework, OperatorFabric is a modular, extensible platform for systems operators, including several features aimed at helping utility operators. It helps operators coordinate the many tasks and alerts they need to keep track of by aggregating notifications from several applications into a single screen. “Energy is of importance for everyone,” said van den Bogaard. “And especially moving to more cleaner and renewable energy is key for us all. We have great minds all around the world. And I really believe that we can achieve that. The best way to do that is to combine the efforts of all those great minds. Open source can be a great enabler of that.”

Cultural Education Needed

But persuading decision-makers in the power industry to participate in building the next generation of open source solutions can be a challenge, van den Bogaard acknowledged. “You see that the energy domain has been there for a long time, and has been quite stable, up to like 10 years ago,” he said. In such a tradition-bound culture, change is hard. In the cloud era, he added, a lot of organizations “need to digitalize and focus more on IT, and those capabilities are new.
And also, open source, for that matter, is also a very new concept.” One obstacle to the energy industry taking more advantage of open source tools, Jeanson noted, is security: “Some organizations still see open source to be a potential risk.” Getting them on board, he said, requires education and training. He added, “Vendors need to understand that open source is an opportunity that they should not be afraid of. That we want to do business with them based on open source. We just need to accelerate the momentum.” Check out the whole episode to learn more about LF Energy’s work.
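OpenSTEF itself is a Python project. As a generic illustration of the kind of 48-hour-ahead load forecast it automates (explicitly not OpenSTEF's actual API), here is a small scikit-learn sketch trained on synthetic hourly load data.

```python
# Toy 48-hour-ahead grid-load forecast: hour-of-day plus the load seen
# 48 hours earlier as features. Illustrative only; not OpenSTEF's API.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
# Synthetic hourly load: a daily cycle plus noise.
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

X = np.column_stack([hours[48:] % 24, load[:-48]])  # features
y = load[48:]                                       # target: load 48h later

model = GradientBoostingRegressor().fit(X[:-500], y[:-500])
print("48h-ahead MAE:", np.abs(model.predict(X[-500:]) - y[-500:]).mean())
```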

Oct 13, 2022 • 27min
KubeCon+CloudNativeCon 2022 Rolls into Detroit
It's that time of the year again, when cloud native enthusiasts and professionals assemble to discuss all things Kubernetes. KubeCon+CloudNativeCon North America 2022 is being held later this month in Detroit, October 24-28. In this latest edition of The New Stack Makers podcast, we spoke with Priyanka Sharma, general manager of the Cloud Native Computing Foundation — which organizes KubeCon — and CERN computer engineer and KubeCon co-chair Ricardo Rocha about what we can expect from the upcoming event. This year, there will be a focus on Kubernetes in the enterprise, Sharma said. "We are reaching a point where Kubernetes is becoming the de facto standard when it comes to container orchestration. And there's a reason for it. It's not just about Kubernetes. Kubernetes spawned the cloud native ecosystem and the heart of the cloud native movement is building fast, resilient, observable software that meets customer needs. So ultimately, it's making you a better provider to your customers, no matter what kind of business you are." Of this year's topics, security will be a big theme, Rocha said. Technologies such as Falco and Cilium will be discussed. Linux kernel add-on eBPF is popping up in a lot of topics, especially around networking. Observability and hybrid deployments also weigh heavily on the agenda. "The number of solutions [around hybrid] is quite large, so it's interesting to see what people come up with," he said. In addition to KubeCon itself, there are a number of co-located events this year, held during or before the conference itself. Some are hosted by CNCF, while others are run by other companies, such as Canonical. They include Network Application Day, BackstageCon, CloudNative eBPF Day, CloudNativeSecurityCon, CloudNative WASM Day, Data-on-Kubernetes Day, EnvoyCon, gRPCConf, KNativeCon, Spinnaker Summit, Open Observability Day, Cloud Native Telco Day, Operator Day, and The Continuous Delivery Summit, among others. What's amazing is not only the number of co-located events, but the high quality of the talks being held there. "Co-located events are a great way to know what's exciting to folks in the ecosystem right now," Sharma said. "Cloud native has really become the scaffolding of future progress. People want to build on cloud native, but have their own focus areas." WebAssembly (WASM) is a great example of this. "In the beginning, you wouldn't have thought of WebAssembly as part of the cloud native narrative, but here we are," Sharma said. "The same thinking from professionals who conceptualized cloud native in the beginning are now taking it a step further." "There's a lot of value in co-located events, because you get a group of people for a longer period in the same room, focusing on one topic," Rocha said. Other topics discussed in the podcast include the choice of Detroit as a conference hub, the fun activities CNCF has planned in between the technical sessions, surprises at the keynotes, and much more. Give it a listen.

Oct 12, 2022 • 17min
Armon Dadgar on HashiCorp's Practitioner Approach
Armon Dadgar and Mitchell Hashimoto are long-time open source practitioners, and that practitioner focus became core to their approach when they started HashiCorp about ten years ago. Today, HashiCorp is a publicly traded company. Before they started HashiCorp, Dadgar and Hashimoto were students at the University of Washington. Through college and afterward, they cut their teeth on open source and learned how to build software in the open. HashiCorp's business is an outgrowth of the two founders' experience as practitioners in open source communities, said Dadgar, co-founder and CTO of HashiCorp, in an interview at the HashiConf conference in Los Angeles earlier this month. Both of them wanted to recreate the asynchronous collaboration they loved so much about the open source projects they worked on as practitioners, Dadgar said. They knew they did not want bureaucracy or a hard-to-follow roadmap. Dadgar cited Terraform as an example of their approach. Terraform is HashiCorp's open source, infrastructure-as-code software tool, and it reflects the company's model of controlling its core while providing a good user experience. That experience goes beyond community development and into the application architecture itself. "If you're a weekend warrior, and you want to contribute something, you're not gonna go read this massively complicated codebase to understand how it works, just to do an integration," Dadgar said. "So instead, we built a very specific integration surface area for Terraform." The integration is about 200 lines of code, Dadgar said. They call this their core plus plugin model, with a prescriptive scaffold, examples of how to integrate, and the SDK. This "golden path" to integration is how the company has developed a program that today has about 2,500 providers. On Twitter, one person asked why HashiCorp isn't a proprietary company; Dadgar pointed to HashiCorp's open source approach when asked that question in our interview. "Oh, that's an interesting question," Dadgar said. "You know, I think it'd be a much harder company to scale. And what I mean by that is, if you take a look at a Terraform community or Vault – there's thousands of contributors. And that's what solves the integration problem. Right? And so if you said, we were proprietary, hey, how many engineers would it take to build 2,000 Terraform integrations? It'd be a whole lot more people than we have today. And so I think fundamentally, what open source helps you solve is the fact that, you know, modern infrastructure has this really wide surface area of integration. And I don't think you can solve that as a proprietary business." "I don't think we'd be able to have nearly the breadth of integration. We could maybe cover the core cloud providers. But you'd have 50 Terraform providers, not 2,500 Terraform providers."
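To see why a roughly 200-line integration surface scales to thousands of providers, here is a minimal, illustrative Python sketch of the core-plus-plugin idea; it shows only the shape of the contract and is not HashiCorp's Go plugin SDK.

```python
# The core exposes one small, stable interface; each provider implements
# only that. Illustrative sketch, not HashiCorp's SDK.
from typing import Protocol

class Provider(Protocol):
    """The entire surface a plugin author must implement."""
    def create(self, resource: dict) -> str: ...
    def destroy(self, resource_id: str) -> None: ...

REGISTRY: dict[str, Provider] = {}

def register(name: str, provider: Provider) -> None:
    REGISTRY[name] = provider

class FakeCloud:
    """A toy provider: the core never sees its internals."""
    def create(self, resource: dict) -> str:
        print("creating", resource)
        return "id-123"
    def destroy(self, resource_id: str) -> None:
        print("destroying", resource_id)

register("fakecloud", FakeCloud())
REGISTRY["fakecloud"].create({"type": "server", "size": "small"})
```

Because contributors write against the narrow Provider contract instead of the core's codebase, a "weekend warrior" can ship an integration without reading the whole engine, which is the economics Dadgar describes.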