

The Business of Open Source
Emily Omier
Whether you're the founder of an open source startup, an open source maintainer, or just an open source enthusiast, join host Emily Omier as she talks to the people who work at the intersection of open source and business, from startup founders to leaders of open source giants and all the people who help open source startups grow.
Episodes

Sep 9, 2020 • 38min
Exploring Single Music’s Cloud Native Journey with Kevin Crawley
The conversation covers:
- Why Kevin helped launch Single Music, where he currently provides SRE and architect duties.
- Single Music’s technical evolution from Docker Swarm to Kubernetes, and the key reasons that drove Kevin and his team to make the leap.
- What’s changed at Single Music since migrating to Kubernetes, and how Kubernetes is opening new doors for the company — increasing stability, and making life easier for developers.
- How Kubernetes allows Single Music to grow and pivot when needed, and introduce new features and products without spending a large amount of time on backend configurations.
- How the COVID-19 pandemic has impacted music sales.
- Single Music’s new plugin system, which empowers their users to create their own middleware.
- Kevin’s current project, which is a series of how-to manuals and guides for users of Kubernetes.
- Some common misconceptions about Kubernetes.

Links:
- Single Music
- Traefik Labs
- Twitter: https://twitter.com/notsureifkevin?lang=en
- Connect with Kevin on LinkedIn: https://www.linkedin.com/in/notsureifkevin

Transcript:
Emily: Hi everyone. I’m Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product’s value is obvious to end-users. I started this podcast because organizations embark on the cloud-native journey for business reasons, but in general, the industry doesn’t talk about them. Instead, we talk a lot about technical reasons. I’m hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you’ll join me.

Emily: Welcome to The Business of Cloud Native. I'm Emily Omier, your host, and today I am chatting with Kevin Crawley. And Kevin actually has two jobs that we're going to talk about. Kevin, can you sort of introduce yourself and what your two roles are?

Kevin: First, thank you for inviting me on to the show, Emily.
I appreciate the opportunity to talk a little bit about both my roles because I certainly enjoy doing both jobs. I don't necessarily enjoy the amount of work it gives me, but it also allows me to explore the technical aspects of cloud-native, as well as the business and marketing aspects of it. So, as you mentioned, my name is Kevin Crawley. I work at a company called Containous. They are the company who created Traefik, the cloud-native load balancer. We've also created a couple other projects, and I'll talk a little bit about those later. For Containous, I'm a developer advocate. I work both with the marketing team and the engineering team. But also I moonlight as a co-founder and a co-owner of Single Music. And there, I fulfill mostly SRE type duties and also architect duties where a lot of times people will ask me for feedback, and I'll happily share my opinion. And Single Music is actually based out of Nashville, Tennessee, where I live, and I started that with a couple friends here.

Emily: Tell me actually a little bit more about why you started Single Music. And what do you do exactly?

Kevin: Yeah, absolutely. So, the company started out of really an idea that labels and artists—and these are musicians if you didn't pick up on the name Single Music—we saw an opportunity for those labels and artists to sell their merchandise through a platform called Shopify, to have advanced tools around selling music alongside that merchandise. And at the time, which was in 2016, there weren't any tools really to allow independent artists and smaller labels to upload their music to the web and sell it in a way in which could be reported to the Billboard charts, as well as for them to keep their profits. At the time, there was really only Apple Music, or iTunes. And iTunes keeps a significant portion of an artist's revenue, as well as they don't release those funds right away; it takes months for artists to get that money.
And we saw an opportunity to make that turnaround time immediate so that the artists would get that revenue almost instantaneously. And also we saw an opportunity to be more affordable as well. So, initially, we offered that Shopify integration—and they call those applications—and that would allow those store owners to distribute that music digitally and have those sales reported in Nielsen SoundScan, and that drives the Billboard Top 100. Now, we've expanded quite considerably since the launch. We now report on sales for physical merchandise as well. Things like cassette tapes, and vinyl, so records. And you'd be surprised at how many people actually still buy cassette tapes. I don't know what they're doing with them, but they still do. And we're also moving into the live streaming business now, with all the COVID stuff going on, and there's been some pretty cool events that we've been a part of since we started doing that, and bands have gotten really elaborate with their live production setups and live streaming. To answer the second part of your question, what I do for them, as I mentioned, I mostly serve as an advisor, which is pretty cool because the CTO and the developers on staff, I think there's four or five developers now working on the team, they manage most of the day-to-day operations of the platform, and we have, like, over 150 Kubernetes pods running on an EKS cluster that has roughly, I'd say, 80 cores and 76 gigabytes of RAM. That is around, I'd say, about 90 or 100 different services that are running at any given time, and that's across two or three environments, just depending on what we're doing at the time.

Emily: Can you tell me a little bit about the sort of technical evolution at Single? Did you start in 2016 on Kubernetes? That's, I suppose, not impossible.

Kevin: It's not impossible, and it's something we had considered at the time.
But really, in 2016, I don't even think there was a managed offering of Kubernetes outside of Google at that time, I believe, and it was still pretty early on in development. If you wanted to run Kubernetes, you were probably going to operate it on-premise, and that just seemed like way too high of a technical burden. At the time, it was just myself and the CTO, the lead developer on the project, and also the marketing or business person who was also part of the company. And at that time, it was just deemed—it was definitely going to solve the problems that we were anticipating having, which was scaling and building that microservice application environment, but at the time, it was impractical for myself to manage Kubernetes on top of managing all the stuff that Taylor, the CTO, had to build to actually make this product a reality. So, initially, we launched on Docker Swarm in my garage, on a Dell R815, which was, I think, 64 cores and 256 gigs of RAM, which was, like, overkill, but it was also, I think it cost me, like, $600. I bought it off of Craigslist from somebody here in the area. But it served really well as a server for us to grow into, and it was, for the most part, other than electricity and the internet connection into m...

Sep 2, 2020 • 32min
Navigating the Cloud Native Ecosystem with Harness Evangelist Ravi Lachhman
The conversation covers:
- An overview of Ravi’s role as an evangelist — an often misunderstood, but important technology enabler.
- Balancing organizational versus individual needs when making decisions.
- Some of the core motivations that are driving cloud native migrations today.
- Why Ravi believes in empowering engineers to make business decisions.
- Some of the top misconceptions about cloud native. Ravi also provides his own definition of cloud native.
- How cloud native architectures are forcing developers to “shift left.”

Links:
- https://harness.io/
- Twitter: https://twitter.com/ravilach
- Harness community: https://community.harness.io/
- Harness Slack: https://harnesscommunity.slack.com/

Transcript:
Emily: Hi everyone. I’m Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product’s value is obvious to end-users. I started this podcast because organizations embark on the cloud-native journey for business reasons, but in general, the industry doesn’t talk about them. Instead, we talk a lot about technical reasons. I’m hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you’ll join me.

Emily: Welcome to The Business of Cloud Native, I am your host Emily Omier. And today I'm chatting with Ravi Lachhman. Ravi, I want to always start out with, first of all, saying thank you—

Ravi: Sure, excited to be here.

Emily: —and second of all, I like to have you introduce yourself, in your own words. What do you do? Where do you work?

Ravi: Yes, sure. I'm an evangelist for Harness. So, what an evangelist does, I focus on the ecosystem, and I always like the joke, I marry people with software because when people think of evangelists, they think of a televangelist. Or at least that’s what I told my mother and she believes me still. I focus on the ecosystem Harness plays in.
And so, Harness is a continuous delivery as a service company. So, what that means is, all of the confidence-building steps that you need to get software into production, such as approvals and test orchestration, Harness helps you do that with lots of convention, and as a service.

Emily: So, when you start your day, walk me through what you're actually doing on a typical day?

Ravi: A typical day—dude, I wish there was a typical day because we wear so many hats as a start-up here, but kind of a typical day for me and a typical day for my team, I ended up reading a lot. I probably read about two hours a day, at least during the business day. Now, for some people that might not be a lot, but for me, that's a lot. So, I'll usually catch up with a lot of technology news and news in general, to kind of see how certain things are playing out. So, a big fan of The New Stack, big fan of InfoQ. I also like reading Hacker News for more emotional reading. The big orange angry site, I call Hacker News. And then really just interacting with the community and teams at large. So, I'm the person I used to make fun of, you know, quote-unquote, “thought leader.” I used to not understand what they do, then I became one that was like, “Oh, boy.” [laughs]. And so just providing guidance for some of our field teams, some of the marketing teams around the cloud-native ecosystem, what I'm seeing, what I'm hearing, my opinion on it. And that's pretty much it. And I get to do fun stuff like this, talking on podcasts, always excited to talk to folks and talk to the public. And then kind of just a mix of, say, making some sort of demos, or writing scaffolding code, just exploring new technologies. I'm pretty fortunate in my day-to-day activities.

Emily: And tell me a little bit more about marrying people with software. Are you the matchmaker? Are you the priest, what role?

Ravi: I can play all parts of the marrying lifecycle. Sometimes I'm the groom, sometimes I’m the priest.
But I'm really helping folks make technical decisions. So, it’s kind of a joke because I get the opportunity to take a look at a wide swath of technology. And so just helping folks make technical decisions. Oh, is this new technology hot? Does this technology make sense? Does this project have vitality? What do you think? I just play, kind of, master of ceremonies for folks who are making technology decisions.

Emily: What are some common decisions that you help people with, and common questions that they have?

Ravi: A lot of times it comes around common questions about technology. It's always finding rationale. Why are you leveraging a certain piece of technology? The ‘why’ question is always important. Let's say that you're a forward-thinking engineer or a forward-thinking technology leader. They also read a lot, and so if they come across, let's say, a new hot technology, or if they're on Twitter, seeing, yeah, this particular project’s getting a lot of retweets, or they go in GitHub and see, oh, this project has a lot of stars, or forks. What does that mean? So, part of my role when talking to people is actually to kind of help slow that roll down, saying, “Hey, what’s the business rationale behind you making a change? Why do you actually want to go about leveraging a certain, let's say, technology?” I’m just taking more of a generic approach, saying, “Hey, the shiny penny today might not be the shiny penny tomorrow.” And also just providing some sort of guidance like, “Hey, let's take a look at project vitality.
Let's take a look at some other metrics that projects have, like defect close ratio—you know, how often are updates happening, what's your security posture?” And so just walking through, I would say, the non-fun tasks or non-functional tasks, and also looking at how to operationalize something like, “Hey, given you want to make sure you're maintaining innovation, and making sure that you're maintaining business controls, what are some best operational practices?” You know, want to go for gold, or don't boil the ocean; it’s helping people make decisive decisions.

Emily: What do you see as sort of the common threads that connect to the conversations that you have?

Ravi: Yeah, so I think a lot of the common threads are usually like people say, “Oh, we have to have it. We're going to fall behind if we don't use XYZ technology.” And when you really start getting to talking to them, it's like, let’s try to line up some sort of technical debt or business problem that you have, and how are you going to solve these particular technical challenges? It's something that, of the space I play into, which is ironic, it's the double-edged sword, I call it ‘chasing conference tech.’ So, sometimes people see a really hot project, if my team implements this, I can go speak at a conference about a certain piece of technology. And it's like, eh, is that a really r...

Aug 26, 2020 • 28min
Simplifying Cloud Native Testing with Jón Eðvald
The conversation covers:
- Some of the pain points and driving factors that led Jón and his partners to launch Garden. Jón also talks about his early engineering experiences prior to Garden.
- How the developer experience can impact the overall productivity of a company, and why companies should try and optimize it.
- Kubernetes shortcomings, and the challenges that developers often face when working with it. Jón also talks about the Kubernetes skills gap, and how Garden helps to close that gap.
- Business stakeholder perception regarding Kubernetes challenges.
- The challenge of deploying a single service on Kubernetes in a secure manner — and why Jón was surprised by this process.
- How the Kubernetes ecosystem has grown, and the benefits of working with a large community of people who are committed to improving it.
- Jón’s multi-faceted role as CEO of Garden, and what his day typically entails as a developer, producer, and liaison.
- Garden’s main mission, which involves streamlining end-to-end application testing.

Links:
- Company site: https://garden.io/
- Twitter: https://twitter.com/jonedvald
- Kubernetes Slack: https://slack.k8s.io/

Transcript:
Emily: Hi everyone. I’m Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product’s value is obvious to end-users. I started this podcast because organizations embark on the cloud-native journey for business reasons, but in general, the industry doesn’t talk about them. Instead, we talk a lot about technical reasons. I’m hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you’ll join me.

Emily: Welcome to The Business of Cloud Native. I'm your host Emily Omier. And today I'm chatting with Jón Eðvald. And, Jón, thank you so much for joining me.

Jón: Thank you so much for having me. You got the name pretty spot on. Kudos.

Emily: Woohoo, I try.
So, if you could actually just start by introducing yourself and where you work in Garden, that would be great.

Jón: Sure. So, yeah, my name is Jón, one of the founders, and I’m the CEO of Garden. I've been doing software engineering for more years than I'd like to count, but Garden is my second startup. Previous company was some years ago; dropped out of Uni to start what became a natural language processing company. So, different sort of thing than what I'm doing now. But it's actually interesting just to scan through the history of how we used to do things compared to today. We ran servers out of basically a cupboard with a fan in it, back in the day, and now, things are done somewhat differently. So, yeah, I moved to Berlin, it's about four years ago now, met my current co-founders. We all shared a passion and, I guess to some degree, frustrations about the general developer experience around, I guess, distributed systems in general. And now it's become a lot about Kubernetes these days in the cloud-native world, but we are interested in addressing common developer headaches regarding all things microservices. Testing, in particular, has become a big part of our focus. Garden itself is an open-source product that aims to ease the developer experience around Kubernetes, again, with an emphasis on testing. When we started it, there weren't a lot of these types of tools around, or they were pretty early on. Now there's a whole bunch of them, so we're trying to fit into this broad ecosystem. Happy to expand on that journey. But yeah, that's roughly—that's what Garden is, and that’s… yeah, a few hop-skips of my history as well.

Emily: So, tell me a little bit more about the frustration that led you to start Garden. What were you doing, and what were you having trouble doing, basically?

Jón: So, when I first moved to Berlin, it was to work for a company called Clue. They make a popular period tracking app.
So, initially, I was meant to focus on the data science and data engineering side of things, but it became apparent that there was a lot of need for people on the engineering side as well. So, I gravitated into that and ended up managing the engineering team there. And it was a small operation. We had more than a million daily active users yet just a single back end developer, so it was bursting at the seams. And at the time running a simple Node.js backend on Heroku, single Postgres database, pretty simple. And I took that through—first, we adopted containers and moved into Docker Cloud. Then Docker Cloud disappeared, or was terminated without—we had to discover that by ourselves. And then Kubernetes was manifesting as the de facto way to do these things. So, we went through that transition, and I was kind of surprised. It was easy enough to get going and get to a functional level with Kubernetes and get everything running and working. The frustration came more from just the general developer experience and developer productivity side. Specifically, we found it very difficult to test the whole application because we had, by the end of that journey, a few different services doing different things. And just the time from when you make a simple change to your code to it actually having been built, deployed, and ultimately tested was a rather tedious experience. And I found myself building tools, bespoke tools, to be able to deal with that, and that ended up being sort of a janky prototype of what Garden is today. And I realized that my passion was getting the better of me, and we wanted to start a company to try and do better.

Emily: Why do you think developer experience matters?

Jón: Beyond just the, kind of, psychological effect of having these long and tedious feedback loops—just as a developer myself, it kind of grinds and reduces the overall joy of working on something. But in more concrete material terms, it really limits your productivity.
You basically, you take—if your feedback loop is 10 times longer than it should be, that drastically reduces the overall output of you as an individual or your team. So, it has a pretty significant impact on just the overall productivity of a company.

Emily: And, in fact, it seems like a lot of companies move to Kubernetes or adopt distributed systems, cloud-native in general, precisely to get the speed.

Jón: And, yeah, that makes sense. I think it's easy to underestimate all the, what are often called these day-two problems, when—so, it's easy enough to grok how you might adopt Kubernetes. You might get the application working, and you even get to production fairly quickly, and then you find that you've left a lot of problems unsolved, that Kubernetes by itself doesn't really address for you. And it's often compounded by the fact that you may be actually adopting multiple things at the same time. You may be not only transitioning to Kubernetes from something analogous, you may be going from simpler, bespoke processes, or you might have just a monolith that didn't really have any complicated requirements when it comes to dev tooling and dev setups. So, yeah, you might be adopting microservices, containers, and Kuberne...

Aug 19, 2020 • 34min
CERN’s Transition to Containerization and Kubernetes with Ricardo Rocha
Some of the highlights of the show include:
- The challenges that CERN was facing when storing, processing, and analyzing data, and why it pushed them to think about containerization.
- CERN’s evolution from using mainframes, to physical commodity hardware, to virtualization and private clouds, and eventually to containers. Ricardo also explains how the migration to containerization and Kubernetes was started.
- Why there was a big push from groups that focus on reproducibility to explore containerization.
- How end users have responded to Kubernetes and containers. Ricardo talks about the steep Kubernetes learning curve, and how they dealt with frustration and resistance.
- Some of the top benefits of migrating to Kubernetes, and the impact that the move has had on their end users.
- Current challenges that CERN is working through, regarding hybrid infrastructure and rising data loads. Ricardo also talks about how CERN optimizes system resources for their scientists, and what it’s like operating as a public sector organization.
- How CERN handles large data transfers.

Links:
- Email: ricardo.rocha@cern.ch
- Twitter: https://twitter.com/ahcorporto
- CERN

Transcript:
Emily: Hi everyone. I’m Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product’s value is obvious to end-users. I started this podcast because organizations embark on the cloud-native journey for business reasons, but in general, the industry doesn’t talk about them. Instead, we talk a lot about technical reasons. I’m hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you’ll join me.

Emily: Welcome to the Business of Cloud Native. I'm your host, Emily Omier, and today I'm here with Ricardo Rocha.
Ricardo, thank you so much for joining us.

Ricardo: It's a pleasure.

Emily: Ricardo, can you actually go ahead and introduce yourself: where you work, and what you do?

Ricardo: Yeah, yes, sure. I work at CERN, the European Organization for Nuclear Research. I'm a software engineer and I work in the CERN IT department. I've done quite a few different things in the past in the organization, including software development in the areas of storage and monitoring, and also distributed computing. But right now, I'm part of the CERN Cloud Team, and we manage the CERN private cloud and all the resources we have. And I focus mostly on networking and containerization, so Kubernetes and all these new technologies.

Emily: And on a day to day basis, what do you usually do? What sort of activities are you actually doing?

Ricardo: Yeah. So, it's mostly making sure we provide the infrastructure that our physics users and experiments require, and also the people on campus. So, CERN is a pretty large organization. We have around 10,000 people on-site, and many more around the world that depend on our resources. So, we operate private clouds, we basically do DevOps-style work. And we have a team dedicated for the Cloud, but also for other areas of the data center. And it's mostly making sure everything operates correctly; try to automate more and more, so we do some improvements gradually; and then giving support to our users.

Emily: Just so everyone knows, can you tell a little bit more about what kind of work is done at CERN? What kind of experiments people are running?

Ricardo: Our main goal is fundamental research. So, we try to answer some questions about the universe. So, what's dark matter? What's dark energy? Why don't we see antimatter? And similar questions. And for that, we build very large experiments.
So, the biggest experiment we have, which is actually the biggest scientific experiment ever built, is the Large Hadron Collider, and this is a particle accelerator that accelerates two beams of protons in opposite directions, and we make them collide at very specific points where we build these very large physics experiments that try to understand what happens in these collisions and try to look for new physics. And in reality, what happens with these collisions is that we generate large amounts of data that need to be stored, and processed, and analyzed, so the IT infrastructure that we support has a large fraction dedicated to this physics analysis.

Emily: Tell me a little bit more about some of the challenges related to processing and storing the huge amount of data that you have. And also, how this has evolved, and how it pushed you to think about containerization.

Ricardo: The big challenge we have is the amount of data that we have to support. So, each of these experiments, at the moment of the collisions, can generate data in the order of one petabyte a second. This is, of course, not something we can handle, so the first thing we do, we use these hardware triggers to filter this data quite significantly, but we still generate, per experiment, something like a few gigabytes a second, so up to 10 gigabytes a second. And this we have to store, and then we have large farms that will handle the processing and the reconstruction of all of this. So, we've had these sorts of experiments for quite a while, and to analyze all of this, we need a large amount of resources, and this has grown with time. If you come and visit CERN, you can see a bit of the history of computing, kind of evolving with what we used to have in the past in our data center. But it's mostly—we used to have large mainframes, that now it's more in the movies that we see them, but we used to have quite a few of those. And then we transitioned to physical commodity hardware with Linux servers.
Eventually we introduced virtualization and private clouds to improve the efficiency and the provisioning of these resources to our users, and then eventually, we moved to containers, and the main motivation is always to try to be as efficient as possible, and to speed up this process of provisioning resources, and be more flexible in the way we assign compute and also storage. What we've seen is that in the move from physical to virtualization, we saw that the provisioning and maintenance got significantly improved. What we see with containerization is the extra speed in also deployment and update of the applications that run on those resources. And we also see improving resource utilization. We already had the possibility to improve quite a bit with virtualization by doing things like overcommit, but with containers, we can go one step further by doing more efficient resource sharing for the different applications we have to run.

Emily: Is the amount of data that you're processing stable? Is it steadily increasing, have spikes, a combination?

Ricardo: So, the way it works is, we have what we call ‘beam’, which is when we actually have protons circulating in the accelerator. And during these periods, we try to get as many collisions as ...

Aug 12, 2020 • 25min
Discussing the Latest Cloud Trends with Cloud Comrade Co-founder Andy Waroma
Highlights from this episode include:
- Key market drivers that are causing Cloud Comrade’s clients to containerize applications — including the role that the global pandemic is playing.
- The pitfalls of approaching cloud migration with a cost-first strategy, and why Andy doesn’t believe in this approach.
- Common misconceptions that can arise when comparing cloud TCO to on-premise infrastructure.
- How today’s enterprises tend to view cloud computing versus cloud-native. Andy also mentions a key requirement that companies have when integrating cloud services.
- Andy’s thoughts on build versus buy when integrating cloud services at the enterprise level.
- Why cloud migration is a relatively safe undertaking for companies because it’s easy to correct mistakes.
- Why businesses need to re-think AI and to be more realistic in terms of what can actually be automated.
- Andy’s must-have engineering tool, which may surprise you.

Links:
- Cloud Comrade LinkedIn: https://www.linkedin.com/company/cloud-comrade/
- Follow Andy on Twitter: @andywaroma
- Connect with Andy on LinkedIn: https://www.linkedin.com/in/andyw/

Transcript:
Emily: Hi everyone. I’m Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product’s value is obvious to end-users. I started this podcast because organizations embark on the cloud-native journey for business reasons, but in general, the industry doesn’t talk about them. Instead, we talk a lot about technical reasons. I’m hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you’ll join me.

Emily: Welcome to The Business of Cloud Native. I'm Emily Omier, your host, and today I'm here with Andy Waroma. Andy, I just wanted to start with having you introduce yourself.

Andy: Yeah, hi. Thanks, Emily, for having me on your podcast.
My name is Andy Waroma, and I'm based in Singapore, but originally from Finland. I've been [unintelligible] in Singapore for about 20 years, and for 11 years I was with a company called SAP focusing on business software applications. And then more recently, about six years ago, I co-founded, together with my ex-colleague from SAP, a company called Cloud Comrade, and we have been running Cloud Comrade now for six years. And Cloud Comrade focuses on two things: number one, on cloud migrations; and number two, on cloud managed services across the Southeast Asia region.

Emily: What kind of things do you help companies understand when you're helping with cloud migrations? Is this like, like, a lift and shift? To what extent are you helping them change the architecture of their applications?

Andy: Good question. So, typically, if you look at the Southeast Asian market, we are probably anywhere between one to two years behind that of the US market. And I always like to say that the benefit that we have in Southeast Asia is that we have a time machine at our disposal. So, whatever has happened in the US in the past 18 months or so is going to be happening also in Singapore and Southeast Asia. And for the first three to four years of this business, we saw a lot of lift and shift migrations, but more recently, we have been asked to go and containerize applications to microservices, revamp applications from a monolithic approach to a much more flexible and cloud-native approach, and we just see those requirements increasing as companies understand what kind of innovation they can do on different cloud platforms.

Emily: And what do you think is driving, for your clients, this desire to containerize applications?

Andy: Well, if you had asked me three months ago, I probably would have said it's about innovation, and business advantage, and getting ahead in the market, and investing in the future.
Now, with the global pandemic situation, I would say that most companies are looking at two things: cost savings and automation. And I think cost savings is quite obvious; most companies need to know how they can reduce their IT expenditure, how they can move from CapEx to OpEx, how they can scale their resources up and down depending on the business demand that they have. And at the same time, they're also not looking to hire a lot of new people into their internal IT organization. So, therefore, most of our customers want to see their applications be as automated as possible. And of course, microservices, CI/CD pipelines, and everything else help them achieve that somewhat. But first and foremost, of course, it's about all the services that the Cloud provides in general. And then once they have been moving some of those applications and getting positive experiences, that's where we typically see phase two kicking in: going into cloud-native microservices, containers, Kubernetes, Docker, and so forth.

Emily: And do you think when companies are going into this, thinking, “Oh, I'm going to really reduce my costs,” do you think they're generally successful?

Andy: I don't think in the way that they think they are. So, especially if I'm looking at the Southeast Asian markets: Singapore, Malaysia, Thailand, Philippines, Indonesia, and perhaps other countries like Vietnam, Myanmar, and Cambodia, it’s a very cost-conscious market, and I also like to say that when we go into a meeting, the first question that we get from the customers is, “How much?” It is not even what we are going to be delivering, but how much it's going to cost them. That's the first gate of assessment. So, it's very much an on-premise versus cloud comparison in the beginning.

And I think if companies go in with that type of mindset, that's not necessarily the winning strategy for them.
What they will come to know after a while is that, for example, setting up disaster recovery systems in an on-premise environment, especially in a separate location, is extremely expensive, and doing something like that on the Cloud is going to be very cost-efficient. And that's when they start seeing cost savings. But typically, what they will start seeing on the Cloud is process cost savings, so how they can do things faster, quicker, and be more flexible in terms of responding to end-user demands.

Emily: At the beginning of the process, how much do you think your customers generally understand about how different the cost structure is going to be?

Andy: So, we have more than 200 customers, and we have done more than 500 projects over the six years, and there's a vast range of customers. We have done work with companies with a few people; we have done work with Fortune 10 organizations, and everything in between, in all kinds of different industries: manufacturing, finance, insurance, public sector, industrial-level things, nonprofits, research organizations. So, we can't really say that each customer is the same. There are customers who are very sophisticated and they know exactly what they want when going to a cloud platform, but then there are, of course, many other customers who need to be advised much more in the beginning, and that’s where we typically...

Aug 5, 2020 • 38min
RVU’s Cloud Native Transformation with Paul Ingles
Some highlights of the show include: The company’s cloud native journey, which accelerated with the acquisition of Uswitch. How the company assessed risk prior to their migration, and why they ultimately decided the task was worth the gamble. Uswitch’s transformation into a profitable company resulting from their cloud native migration. The role that multidisciplinary, collaborative teams played in solving problems and moving projects forward. Paul also offers commentary on some of the tensions that resulted between different teams. Key influencing factors that caused the company to adopt containerization and Kubernetes. Paul goes into detail about their migration to Kubernetes, and the problems that it addressed. Paul’s thoughts on management and prioritization as CTO. He also explains his favorite engineering tool, which may come as a surprise.

Links:
RVU Website: https://www.rvu.co.uk/
Uswitch Website: https://www.uswitch.com/
Twitter: https://twitter.com/pingles
GitHub: https://github.com/pingles

Transcript

Announcer: Welcome to The Business of Cloud Native podcast, where we explore how end users talk and think about the transition to Kubernetes and cloud-native architectures.

Emily: Welcome to The Business of Cloud Native. I'm your host, Emily Omier, and today I am chatting with Paul Ingles. Paul, thank you so much for joining me.

Paul: Thank you for having me.

Emily: Could you just introduce yourself: where do you work? What do you do? And include, sort of, some specifics. We all have a job title, but it doesn't always reflect what our actual day-to-day is.

Paul: I am the CTO at a company called RVU in London. We run a couple of reasonably big-ish price comparison, aggregator type sites. So, we help consumers figure out and compare prices on broadband products, mobile phones, energy — so in the UK, energy is something which is provided through a bunch of different private companies, so you've got a fair amount of choice on that kind of thing.
So, we try to make it easier and simpler for people to make better decisions on the household choices that they have. I've been there for about 10 years, so I've had a few different roles. So, as CTO now, I sit on the exec team and try to help inform the business and technology strategy. But I've come through a bunch of teams. So, I've worked on some of the early energy price comparison stuff, some data infrastructure work a while ago, and then some underlying DevOps-type automation and Kubernetes work a couple of years ago.

Emily: So, when you get in to work in the morning, what types of things are usually on your plate?

Paul: So, I keep a journal. I use bullet journalling quite extensively. So, I try to track everything that I’ve got to keep on top of. Generally, what I would try to do each day is catch up with anybody that I specifically need to follow up with. So, at the start of the week, I make a list for every day, and then I also keep a separate column for just general priorities. So, things that are particularly important for the week, themes of work going on, like technology changes, or things that we're trying to launch, et cetera. And then I will prioritize speaking to people based on those things. So, I'll try and make sure that I'm focusing on the most important thing. I do a weekly meeting with the team. So, we have a few directors that look after different aspects of the business, and so we do a weekly meeting to just run through everything that's going on and share the problems. We use the three Ps model: sharing progress, problems, and plans. And we use that to try and steer what we do. And we also look at some other team health metrics. Yeah, it's interesting, actually. I think when I switched from working in one of the teams to being in the CTO role, things changed quite substantially. That list of things that I had to care about increased hugely, to the point where it far exceeded how much time I had to spend on anything.
So, nowadays, I find that I'm much more likely to let some things drop off. And so it's unfortunate, and you can't please everybody, so you just have to say, “I'm really sorry, but this thing is not high on the list of priorities, so I can't spend any time on it this week, but if it's still a problem in a couple of weeks' time, then we'll come back to it.” But yeah, it can vary quite a lot.

Emily: Hmm, interesting. I might ask you more questions about that later. For now, let's sort of dive into the cloud-native journey. What made RVU decide that containerization was a good idea and that Kubernetes was a good idea? What were the motivations, and who was pushing for it?

Paul: That's a really good question. So, I got involved about 10 years ago. I worked for a search marketing startup in London called Forward Internet Group, and they acquired Uswitch in 2010. And prior to working at Forward, I'd worked as a consultant at ThoughtWorks in London, so I spent a lot of time working in banks on continuous delivery and things like that. And so when Uswitch came along, there were a few issues around the software release process. Although there was a ton of automation, it was still quite slow to actually get releases out. We were only doing a release every fortnight. And we also had a few issues with the scalability of data. So, it was a monolithic Windows Microsoft stack. So, there were SQL Server databases, and .NET app servers, and things like that. And our traffic can be quite spiky, so when companies are in the news, or there are policy changes and things like that, we would suddenly get an increase in traffic, and the Microsoft solution would just generally kind of fall apart as soon as we hit some kind of threshold. So, I got involved, partly to try and improve some of the automation and release practices, because at the search startup, we were releasing experiments every couple of hours, even.
And so we wanted to try and take a bit of that ethos over to Uswitch, and also to try and solve some of the data scalability and system scalability problems. And when we got started doing that — so that was in the early heyday of AWS; this was about 2008, when I was at the search startup — we were used to using EC2 to spin up Hadoop clusters and a few other bits and pieces that we were playing around with. And when we acquired Uswitch, we felt like it was quickest for us to just create a different environment, stick it under the load balancer so end users wouldn't realize that some requests were being served off of the AWS infrastructure instead, and then just gradually go from there. We found that that was just the fastest way to move. So, I think it was interesting, and it was a deliberate move, but in terms of the degree to which we followed through on it, I don't think we'd really anticipated quite how quickly we would shift everything. And so when Forward made the acquisition, I joined summer of 2010, and myself and a colleague wrote ...

Jul 29, 2020 • 27min
Vodafone’s Cloud Native Journey with Tom Kivlin
Some of the highlights include: Why Vodafone moved to a cloud native architecture. As Tom explains, the company was struggling to manage operations across more than 20 markets. They also needed to improve the customer experience and foster customer loyalty. Why their business and engineering teams were both in favor of cloud native. The benefits of deploying daily operational activities around a single cloud native platform. An overview of where Vodafone currently is in their overall cloud native journey. Tom also explains how cloud native conversations have changed inside the company throughout their journey, as various business units have caught on to the benefits of the cloud. Vodafone’s transition from outsourcing roughly 97 percent of their operations to bringing 95 percent in house. Tom explains how this has improved efficiency and expedited time to market. The challenge that Vodafone faced in trying to apply legacy network security solutions to distributed and dynamic systems. Tom’s thoughts on why Vodafone’s cloud native transition and modernization efforts have been crucial to their success over the last five years.

Links:
Vodafone Group: https://www.vodafone.com/
Connect with Tom on LinkedIn: https://uk.linkedin.com/in/tom-kivlin-5b469321
The Business of Cloud Native: http://thebusinessofcloudnative.com
Tom’s Twitter: https://twitter.com/tomkivlin
CNCF GitHub: https://github.com/cncf
CNCF Slack: https://slack.cncf.io/
Kubernetes Slack: http://slack.kubernetes.io/

Transcript

Announcer: Welcome to The Business of Cloud Native podcast, where we explore how end users talk and think about the transition to Kubernetes and cloud-native architectures.

Emily: Welcome to The Business of Cloud Native. I'm Emily Omier, your host, and today I am chatting with Tom Kivlin. Tom, thank you so much for joining us.

Tom: You're welcome. No problem.

Emily: Let's just start out with having you introduce yourself. What do you do?
Where do you work, and what do you actually do during your workday?

Tom: Sure. So, I'm a principal cloud orchestration architect at Vodafone Group. I work in the UK. And my day job consists of providing guidance and strategy and architectural blueprints for cloud-native platforms within Vodafone. So, that's around providing guidance to the software domains that are looking to adopt cloud-native architectures and methodologies, and also to the more traditional infrastructure domains, to try and help them provide their services in a more cloud-native manner to those modern teams.

Emily: And what does that mean when you go into the office — or your home office, go into your dining room where your laptop is, I don't know — what do you actually do? What does an average day look like?

Tom: It can vary. So, depending on the activity at the time, it could be anything from preparing a global policy that needs to go through the senior technology leadership team, to preparing some extremely detailed requirements for a selection process, or creating some infrastructure as code, or the code artifacts for the deployment of cloud-native services, whether that's in our lab, or to help our services teams within Vodafone.

Emily: Tell me a little bit more about what pain made Vodafone think about moving to cloud-native and Kubernetes.

Tom: Primarily, it was the challenge of having 25 different markets, or 23 now. Back in 2015, we launched a five-year digital strategy with which we wanted to massively increase the rollout of 4G, of converged network offerings, of improved customer experience. And we found that the traditional way of managing software was not supportive enough of our ambition.
And so, choosing cloud-native technologies, things like Kubernetes, but also the modern operating models — that was the driver: it was to improve our customer experience, and our customer-affecting KPIs, really.

Emily: And when you say it wasn't supportive enough, what do you mean specifically?

Tom: So, things like time to market, for example. So, if we wanted to offer a new service — one of the things that 4G started the drive towards was a more granular service offering to consumers, and so lots of different things could be offered. And if it took you six months, or even longer, to think of an idea and then get to the point where that could be offered to customers, even if it was just a very minor feature within an existing product, then that's not going to engender customer loyalty. And so, things like the cloud-native mindset, where there's a much closer link between the engineering teams and the customer, and much shorter periods of time between ideas coming in from the customers and then being delivered back to the customers as product features — that sort of time to market was really enabled by cloud-native technologies and mindsets.

Emily: And how does having two dozen, more or less, different markets play into the decision A) to move to cloud-native in general, and B) how you manage the IT infrastructure?

Tom: So, one of the things that's really driven it is trying to simplify and reuse artifacts. So, if you've got 23 markets all doing a different thing, then there's obviously a lot of duplication happening across the group, whereas if everyone's using the same technology and the same platforms — take Kubernetes as the example — everyone can write their software for that platform. Everyone can build their operational ecosystem around that platform. So, the deployment artifacts, the pipelines, the day-two operational activities, they can all be based around that single cloud-native platform.
And so, that enables a huge amount of efficiency on the operational side. And that in turn allows those engineering teams to focus on things that add value to the business and the customer, instead of having to focus on fairly low-level tasks that are just keeping the lights on, if you like.

Emily: What's different for each one of those markets?

Tom: So, it might be something like language; it might be something as simple as that. It may be that the offerings are slightly tweaked. So, rather than, I don't know, as an example, rather than Spotify being included as a kind of add-on, it might be some other service that's more relevant to that market. It may be that there are particular regulatory requirements that are specific to a market that need to be considered within the product design and the engineering of it. And so, having a cloud-native approach allows sharing and reuse of artifacts where we can, but still allows for that customization where it's required.

Emily: Where would you say Vodafone is in the cloud-native journey? Do you feel like it's mission accomplished?...

Jul 22, 2020 • 35min
Cloud Costs: A Conversation with Travis Rehl
This conversation covers: Why many businesses are shifting away from analyzing total cloud spend (CapEx vs. OpEx) and are now forecasting spend based around usage patterns. The difference between cloud-native, cloud computing, and operating in the cloud. The delta that often exists between engineering teams and business stakeholders regarding costs. Travis also offers tips for aligning both parties earlier in the project lifecycle. Common misconceptions that exist around cost management, for both engineers and business stakeholders. For example, Travis talks about how engineers often assume that business teams manage purely to dollars and cents, when they are often very open to extending budgets when it’s necessary. Tips for predicting cloud spend, and why teams usually fall short in their projections. Why conducting cloud cost management too early in a project can be detrimental. Comparing the cost of the cloud to a private data center. The growing reliance on multi-cloud among large enterprises. Travis also explains why it’s important to have the right processes in place to identify cross-cloud saving opportunities. How IT has transitioned from a business enabler to a business driver in recent years, and is now arguably the most important component for the average company.

Links:
Twitter: https://twitter.com/TravisWRehl
LinkedIn: https://www.linkedin.com/in/travis-rehl-tech/
Main Company Site: https://cloudcheckr.com
CloudCheckr All Stars: https://cloudchecker.com/allstars

Transcript

Announcer: Welcome to The Business of Cloud Native podcast, where we explore how end users talk and think about the transition to Kubernetes and cloud-native architectures.

Emily: Welcome to The Business of Cloud Native. I'm your host, Emily Omier, and I'm here today with Travis Rehl, who is the director of product at CloudCheckr. Travis, I just wanted to start out, first of all, by saying thank you for joining me on the show.
And second of all, if you could just start off by introducing yourself. What you do — and by that I mean, what does an actual day look like? And some of your background?

Travis: Yeah. Well, thanks for having me. So yeah, I'm Travis Rehl, director of product here at CloudCheckr. What that really means is, I have the fun job of figuring out what the business should do next in relation to our product offering. That means the roadmap, looking at the market, what customers are doing differently now, or planning to do differently over the next year or two, on cloud. What their cost strategies are, what their invoicing and chargeback strategies are, all that type of fun stuff, and how we can help accommodate those particular strategies using our product offering. Day to day, though, I would say that a bunch of my time is spent talking to customers, figuring out where they are in their cloud journey, if you will, and what programs or projects they may have in flight that are interesting, or complicated, or that they need help on, especially any sort of analysis help in particular. And then lastly, taking all that information and packaging it up neatly so that the business can make a decision to add functionality to our product in some way that helps them move forward.

Emily: The first question I wanted to ask is actually if you could talk just a little bit about the distinction between cloud-native, cloud computing, and operating in the cloud. What do all of those things actually mean, and where's the delta between them?

Travis: Sure. Yeah so, it's actually kind of interesting, and you'll hear it a little bit differently from different people. In my background, in particular — I used to run an engineering department for a managed service provider — we used to do a lot of project planning with companies as to what their strategy was for their software deployment of some kind on cloud.
And typically, of the two you see — say, cloud-native versus operating in the cloud — operating on the cloud is pretty typical. You'd associate that with something like lift and shift, which you probably hear about a lot: the concept of taking your on-prem workload and simply cloning it, or copying it in some way, onto the cloud vendor in particular. So, literally just standing up servers from clones of hard drives and so forth, and emulating what you had on-prem, but on the cloud. That's a great technique for moving quickly to cloud. That's not a great technique if you want to be cloud-native. So, that's really the big segue for cloud-native in particular: you want to build a software solution that takes advantage of cloud-only technology, meaning serverless compute resources, meaning auto-scaling different types of services themselves, stuff you probably didn't have when you were on-prem originally that you can now take advantage of on the cloud. That's almost like a redesign, or a reimplementation, around those models that the cloud itself provides to you. That, to me, is the big difference. And oftentimes I see that, gap-wise, many companies who are starting on-prem will do the migration to cloud first, the lift-and-shift model, and then they will decide, “Hey, I want to redesign pieces of it to make it more cloud-native.” And then you'll see startups who don't have on-prem at all; they will just go cloud-native from the get-go.

Emily: Of course, CloudCheckr specializes in helping with costs, among some other things, but how do costs fit into this journey, and what sort of cost-related concerns do companies have as they're on this cloud journey?

Travis: Yeah, so there are a few. I would actually say that years ago — just to clarify, the discussion has changed over the last few years — but years ago, it started with CapEx versus OpEx costs, specifically for the purchasing of your IT services.
On-prem, you'd probably purchase up-front a bulk number of VMs or servers or otherwise, for a number of years, and so it would be a CapEx cost. When you moved over to cloud, more of this usage-based model kind of threw a lot of people for a loop when it came to OpEx, usage-based models. AWS, Azure, and GCP have helped in that regard with things like reserved instances for companies who are more CapEx oriented, but in those initial years, a big hurdle was communicating that difference and how the business would pay for these services. And a lot of people were very interested in moving to OpEx back then, in particular. When it comes to how you take into account all these cost-related changes the business may go through, one of the big ones that I see most recently is around the transfer and storage of data. In the past, it would have been about how much money, in total, am I going to spend on the cloud itself. Now, it's about what am I forecasting to spend based off of those usage patterns. It's a bit easier to forecast those things when you have servers that run for a period of time, but when you have usage patterns for data ingestion, for data transfer, for servers spinning up and spinning down and scaling out horizontally, ...

Jul 15, 2020 • 39min
The Power of Aligning Engineering and Operations with Dave Mangot
Some of the highlights of the show include: The difference between cloud computing and cloud native. Why operations teams often struggle to keep up with development teams, and the problems that this creates for businesses. How Dave works with operations teams and trains them to approach cloud native so they can keep up with developers, instead of being a drag on the organization. Dave’s philosophy on introducing processes, and why he prefers to use as few as possible for as long as possible, implementing them only when problems arise. Why executives should strive to keep developers happy, productive, and empowered. Why operations teams need to stop thinking about themselves as people who merely complete ticket requests, and start viewing themselves as key enablers who help the organization move faster. Viewing wait time as waste. The importance of aligning operations and development teams, and having them work towards the same goal. This also requires using the same reporting structure.

Links:
Company site: https://www.mangoteque.com/
LinkedIn: https://www.linkedin.com/in/dmangot/
Twitter: https://twitter.com/DaveMangot
CIO Author page: https://www.cio.com/author/Dave-Mangot/

Transcript

Announcer: Welcome to The Business of Cloud Native podcast, where we explore how end users talk and think about the transition to Kubernetes and cloud-native architectures.

Emily: Welcome to The Business of Cloud Native. I'm your host, Emily Omier, and today I am chatting with Dave Mangot. Dave is a consultant who works with companies on improving their web operations. He has experience working with a variety of companies making the transition to cloud-native, at various stages of their cloud computing journeys. So, Dave, my first question is, can you go into detail about, sort of, the nitty-gritty of what you do?

Dave: Sure. I've spent my whole technical professional career mostly in Silicon Valley, after moving out to California from Maryland.
And really, I got into web operations early, working in Unix systems administration as a sysadmin, and then we all changed the names of those things over the years, from sysadmin to Technical Infrastructure Engineer, and then Site Reliability Engineer, and all the other fun stuff. But I've been involved in the DevOps movement, kind of, since the beginning, and I've been involved in cloud computing, kind of, since the beginning. And so I'm lucky enough in my day job to be able to work with companies on, like you said, their transitions into the Cloud. But really, I'm helping companies, at least for their cloud stuff, think about what cloud computing even means. What does it mean to operate in a cloud computing manner? It's one thing to say, “We're going to move all of our stuff from the data center into the Cloud,” but most people you'll hear talk about lift and shift; is that really the best way? And obviously, it's not. I think most of the studies will prove that, and things like the State of DevOps report, and those other things. But I really love working with companies on: what is so unique about the Cloud, and what advantages does that give, and how do we think about these problems in order to be able to take the best advantage that we can?

Emily: Dive into that a little bit more. What is the difference between cloud computing and cloud-native? And where does some confusion sometimes seep in there?

Dave: I think cloud-native is just really talking about the fact that something was designed specifically for running in a cloud computing environment. To me, I don't really get hung up on those differences because, ultimately, I don't think they matter all that much. You can take memcached, which was designed to run in the data center, and you can buy that as a service on AWS. So, does that mean that because it wasn't designed for the Cloud from the beginning, it's not going to work? No, you're buying that as a service from AWS.
I think cloud-native is really referring to these tools that were designed with the Cloud as a first-class citizen. And there are times where that really matters. I remember we did an analysis of the configuration management tools years back, and what would work best on AWS and things like that, and it was pretty obvious that some of those tools were not designed for the Cloud. They were not cloud-native. They really had this distinct feel that their cloud capabilities were bolted on much later, and it was clunky, and it was hard to work with, whereas some of the other tools really felt like that was a very natural fit, like that was the way that they had been created. But ultimately, I think the differences aren't all that great; it just really matters how you're going to take advantage of those tools.

Emily: And with the companies that you work with, what is the problem or problems that they are usually facing that lead them to hire you?

Dave: Generally, the question, or the statement, I guess, that I get from the CIOs, and CTOs, and CEOs is, “My production web operations team can't keep up with my development teams.” And there are a lot of reasons why those kinds of things can happen, but with the dawn of all these cloud-native-type things, which is pretty cool — like containers, and all this other stuff, and CI/CD is a big popular thing now — what happens, tends to be, is that the developers are really able to take advantage of these things, and consume them, and use them. Because look at AWS. AWS is API, API, API. Make an API call for this, make an API call for that. And for developers, they're really comfortable in that environment. Making an API call is kind of a no-brainer. And then a lot of the operations teams are struggling because that's not normal for them. Maybe they were used to clicking around in a VMware console, and now that's not a thing because everything's API, API, API.
And so what happens is the development teams start to rocket ahead of the operations teams, and the operations teams are running around struggling to keep up because they're kind of in a brand new world that the developers are dragging them into, and they have to figure out how they're going to swim in that world. And so I tend to work with operations teams to help them get to a point where they're way more comfortable, and they're thinking about the problems differently, and they're really enabling development to go as quickly as development wants to go. Which, you know, that's going to be pretty fast, especially when you're working with cloud-native stuff. But, kind of to the point earlier, at one of the companies I worked at years ago, we built what I would call a cloud environment in a data center, where everything was API-first, and you didn't have to run around, and click in consoles, and try to find information, and manually specify things, and stuff like that; it just worked. Just like if you make a call for a VM in AWS, an EC2 instance. And so, really, it's much more about the way that we look at the problems than it is about where this thing happens to be located, because obviously cloud-native is going to be Azure, it's going ...

Jul 8, 2020 • 30min
Discussing Cloud Native Security with Abhinav Srivastava
This conversation covers: How Frame.io was faced with the decision to be cloud native or cloud-enabled — and the business and technical reasons why Frame.io chose to be cloud native. How Abhinav successfully built a world-class cloud-native security program from the ground up to protect Frame.io users’ sensitive video content. Abhinav also talks about the special security considerations for truly cloud native applications. Cloud native as a “journey without a destination.” In other words, there is no end point with cloud native transitions, because new technologies are always being developed. Why Abhinav is a firm believer in both ISEs and GitOps, and why he thinks the industry should embrace both of these strategies. The challenge of not only maintaining security in this type of environment, but also communicating security issues to various stakeholders with different priorities. Abhinav also talks about the role that specialists like AWS and machine learning experts can play in furthering security agendas. Common misconceptions about cloud native security. Frame.io’s decision to roll out Kubernetes, and why they are also considering adding chaos engineering to fortify against unexpected issues. Tool and vendor overload, and the importance of trying to find the right tools that fit your infrastructure.

Links:
Frame.io: https://frame.io/
Connect with Abhinav on LinkedIn: https://www.linkedin.com/in/absri/
The Business of Cloud Native: http://thebusinessofcloudnative.com

Transcript

Announcer: Welcome to The Business of Cloud Native podcast, where we explore how end users talk and think about the transition to Kubernetes and cloud-native architectures.

Emily: Welcome to The Business of Cloud Native. I'm Emily Omier, your host, and today I am chatting with Abhinav Srivastava. Abhinav, can you go ahead and introduce yourself and tell us about where you work, and what you do?

Abhinav: Thanks for having me, Emily. Hello, everyone. My name is Abhinav Srivastava.
I'm a VP and the head of information security and infrastructure at Frame.io. At Frame, I am building the security and infrastructure programs from the ground up, making sure that we are secure and compliant, and that our services are available and reliable. Before joining Frame.io, I spent a number of years at AT&T Research. There I worked on various cloud and security technologies, wrote numerous research papers, and filed patents. And before joining AT&T, I spent five great years at Georgia Tech earning a Ph.D. in computer science. My dissertation was on cloud and virtualization security.
Emily: And what do you do? What does an average day look like?
Abhinav: Right. So, just to answer the question of where I work: I work at Frame.io, and Frame.io is a cloud-based video review and collaboration startup that allows users to securely upload their video content to our platform and then invite teams and clients to collaborate on those uploaded assets. We are essentially building the video cloud, so you can think of us as a GitHub for videos. As for what I do when I get to the office, apart from getting my morning coffee: as soon as I arrive at my desk, I check my calendar to see how my day is looking, and I check my emails and Slack messages. We use Slack primarily for communication within the company. And then I do my daily standup with my teams. We follow a two-week sprint across all departments that I oversee, so a standup gives me a good picture of the current priorities and any blockers.
Emily: Tell me a little bit about the cloud-native journey at Frame.io. How did the company get started with containers, and what are you using to orchestrate now? How have you moved along in the cloud-native journey?
Abhinav: We are a born-in-the-cloud kind of company. We have been hosted on Amazon AWS since day one, so we have been in the cloud from the get-go.
And once you are in the cloud, it is hard not to use the tools and technologies that are offered, because our goal has always been to build secure, reliable, and available infrastructure. So we were very mindful from the get-go that, while we were in the cloud, we could choose to be cloud-native or just cloud-enabled. Cloud-enabled would mean using heavyweight virtual machines, not containers, and hosting our entire workload within those. But we chose to be cloud-native because we wanted to be able to spin up new containers very fast. As a platform, as I mentioned, we allow users to upload videos, and once the videos are uploaded, we have to transcode them to generate different low-resolution versions. That use case fits the lightweight container model. So, from the get-go, we started using containerized microservices, an orchestration layer, AWS auto-scaling, automation, infrastructure as code, and monitoring. All of those things were kind of a no-brainer for us, given our use case and given that we wanted to be a very fast uploader and transcoder for all of our customers.
Emily: This actually leads me to another question: have you seen a lot of scaling recently as a result of stay-at-home orders and work from home?
Abhinav: Right. So, we are seeing a lot more people who work in production houses moving toward remote collaboration tools, since they have to work from home now. They are now moving to these kinds of tools, such as Frame.io, and we do see a lot more customers joining our platform because of that. From the traffic perspective, we did not see much of an increase in web traffic or load on our infrastructure, because we have always set up auto-scaling, and our infrastructure can always meet these peak demands. So we didn't see any adverse effect on our infrastructure from the remote-work situation.
Emily: What were some of the other advantages?
Like you were talking about, you had the choice to be either cloud-enabled or truly cloud-native. What were the biggest reasons, and I'm obviously interested in the business rationale, to the extent you can talk about it, for being truly cloud-native?
Abhinav: So, from a business perspective, again, the goal was a secure, available, and reliable production infrastructure to offer Frame.io services. But cloud-native actually helped us get to market faster, because our developers could just focus on the business logic and deploying code; they were not worried about the infrastructure aspects, which is good. We were rolling out bug fixes very quickly through our CI/CD platform so that, again, we could offer better services to our customers. Cloud-native helped us meet our SLAs and uptime so that our customers can access their content whenever they would like to. It also helped us secure our infrastructure and services, and our costs also went down, because we were scaling up and down based on peak demand and we don't have to provision dedicated resources, so that's good there. And it also allowed us to onboard developers to our platform faster, because we are using a lot of open source technologies, and so the developers can learn q...
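Two of the mechanics Abhinav describes in this exchange (fanning out per-upload transcodes across lightweight containers, and auto-scaling capacity up for peak demand and back down to save cost) can be sketched roughly as follows. This is a hypothetical Python illustration assuming an ffmpeg-based pipeline and a target-tracking scaling rule; none of the function names, rendition sizes, or thresholds come from Frame.io.

```python
import math

# Illustrative sketch only: the rendition settings and scaling numbers
# below are assumptions for illustration, not Frame.io's actual pipeline.

# Heights (in pixels) of the low-resolution renditions made per upload.
RENDITIONS = [360, 540, 720]

def transcode_command(source, height):
    """Build an ffmpeg argument list producing one rendition.
    scale=-2:<height> keeps the aspect ratio with an even width."""
    output = source.rsplit(".", 1)[0] + "_{}p.mp4".format(height)
    return ["ffmpeg", "-y", "-i", source,
            "-vf", "scale=-2:{}".format(height),
            "-c:v", "libx264", "-c:a", "aac", output]

def plan_transcodes(source):
    """One command per rendition; each could run in its own short-lived
    container, which is why fast container spin-up matters here."""
    return [transcode_command(source, h) for h in RENDITIONS]

def desired_capacity(current, metric, target, min_cap=2, max_cap=50):
    """Target-tracking style auto-scaling rule: size the worker fleet so
    the per-instance metric (e.g., CPU %) moves back toward the target."""
    if current <= 0:
        return min_cap
    return max(min_cap, min(max_cap, math.ceil(current * metric / target)))
```

For example, `plan_transcodes("upload.mov")` yields three ffmpeg invocations that could be dispatched in parallel, and `desired_capacity(10, 90.0, 60.0)` grows a fleet of 10 workers to 15 during a spike, then shrinks it again once the metric falls, which is where the cost savings Abhinav mentions come from.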