AWS Morning Brief

Corey Quinn
Apr 17, 2020 • 10min

Whiteboard Confessional: The 15-Person Startup with 700 Microservices: A Cautionary Tale

About Corey Quinn
Over the course of my career, I've worn many different hats in the tech world: systems administrator, systems engineer, director of technical operations, and director of DevOps, to name a few. Today, I'm a cloud economist at The Duckbill Group, the author of the weekly Last Week in AWS newsletter, and the host of two podcasts: Screaming in the Cloud and, you guessed it, AWS Morning Brief, which you're about to listen to.

Links
CHAOSSEARCH.io
@QuinnyPig
Hacker News
Amazon Route 53

Transcript
Corey: Welcome to AWS Morning Brief: Whiteboard Confessional. I'm Cloud Economist Corey Quinn. This weekly show exposes the semi-polite lie that is whiteboard architecture diagrams. You see, a child can draw a whiteboard architecture, but the real world is a mess. We discuss the hilariously bad decisions that make it into shipping products, the unfortunate hacks the real world forces us to build, and that the best thing to call your staging environment is "theory", because invariably whatever you've built works in theory, but not in production. Let's get to it.

On this show, I talk an awful lot about architectural patterns that are horrifying. Let's instead talk for a moment about something that isn't horrifying: CHAOSSEARCH. Architecturally, they do things right. They provide a log analytics solution that separates out your storage from your compute. The data lives inside of your S3 buckets, and you can access it using APIs you've come to know and tolerate, through a series of containers that live next to that S3 storage. Rather than running massive clusters that you have to care for and feed yourself, you now get to focus on just storing data, treating it like you normally would other S3 data: not replicating it, not storing it on expensive disks in triplicate, and fundamentally not having to deal with the pains of running other log analytics infrastructure. Check them out today at CHAOSSEARCH.io.

Today, I want to rant about microservices. What are microservices, you may very well ask? The broken answer to a misunderstood question. Let's talk about what gave rise to microservices: specifically, monoliths, which are generally what predated microservices as a design pattern. By monoliths, I mean one giant codebase, the way that grandpa used to build things. Back then Git wasn't a thing; Subversion and Perforce ruled the day, and everyone wore a pair of fighting trousers to work in the morning. The problem with monoliths was that it's challenging in the extreme, culturally, to have a whole host of developers working on the same codebase. One person's change can inadvertently break the build for the other 5,000 engineers all working on that same codebase, and with the various version control systems that were heavily in use before Git became usable by mere mortals, there weren't a lot of workflows that made it easy to have multiple people collaborate on the same system. So microservices, for that and a few other reasons, were suggested as a way of solving this problem. Breaking apart those ancient monoliths into functional microservices, where each service does one thing, began to make a lot of sense. And it solves a political problem super neatly: you want each team responsible for a given microservice. That promise is compelling.
Because in theory, if you think about this, if you build a microservice and you publish what data that service takes in, in what format it needs to be, and what it will return in response to having that data sent to it, then what your microservice does to achieve its goal and how it works doesn't matter at all to anyone else. You can replace the database; you can move to serverless, or containers, or punch cards. It doesn't really matter how it does the work, just so long as the work gets done in the way that is published. And whatever decisions you make only ever impact your own team as far as how those things get done. So the sky's the limit; you don't have to really focus on collaborating with folks the way that you once had to. As long as that API remains stable, the sky remains the limit. Suddenly, your 5,000 developers are no longer tightly coupled to one another and can do all kinds of things and move way faster. It's a great model. But it's also a problem. Why?

In the late 19th and early 20th centuries, democracy flourished around the world. This was good for most folks, but terrible for the log analytics industry because there was now a severe shortage of princesses to kidnap for ransom to pay for their ridiculous implementations. It doesn't have to be that way. Consider CHAOSSEARCH. The data lives in your S3 buckets in your AWS accounts, and we know what that costs. You don't have to deal with running massive piles of infrastructure to be able to query that log data with APIs you've come to know and tolerate, and they're just good people to work with. Reach out to CHAOSSEARCH.io. And my thanks to them for sponsoring this incredibly depressing podcast.

The problem is that this works super well for places with thousands of engineers all working on one product. But then Hacker News got a hold of it. And because it's Hacker News, it took exactly the wrong lessons from this approach and started embracing this pattern where it was patently ridiculous rather than helpful. Now, as a result, you see startups out there with 700 microservices but somehow only 15 employees. Where's the sense in that? You get massive sprawl, an awful lot of duplicate work, and, in many cases, no real internal standards. And you get this misguided belief that everything should look architecturally like Google's internal systems, despite the fact that your entire application could run on a single computer from 2012 without breaking a sweat. Even Google would argue that its internal systems should not look like Google's internal systems, but technical debt is a thing and the choices we make constrain what we're able to do. The problem here is that microservices are fundamentally about solving a political problem by technical means. It's about how to make people work more effectively. And this somehow instead became a technical best practice. It's not, not for everyone. It introduces complexity. It leads to scenarios where no one, absolutely no one, has all of the various moving parts in their head. It's prime material for here on the Whiteboard Confessional, because when you truly embrace microservices, your whiteboards are always full of things that nobody understands. It ties into the larger problem of building things in service to your own resume, or to your engineering department, or to the wider practice of engineering as some abstract ideal, rather than in service to the business needs that your company exists entirely to cater to.
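To make the published-contract idea above concrete, here is a minimal, hypothetical sketch in Python. The service name, field names, and lookup backend are invented for illustration and are not from the episode; the point is simply that callers depend only on the published request and response shape, never on how the service does its work.

```python
# A toy illustration of a published microservice contract.
# Everything here (PriceRequest, PriceResponse, the lookup backend) is hypothetical.
from dataclasses import dataclass


@dataclass
class PriceRequest:      # what the service takes in, and in what format
    sku: str
    quantity: int


@dataclass
class PriceResponse:     # what it promises to return
    sku: str
    total_cents: int


def _lookup_unit_cost(sku: str) -> int:
    # Stand-in for whatever storage the owning team prefers; callers never see this.
    return {"widget": 499}.get(sku, 0)


def quote_price(req: PriceRequest) -> PriceResponse:
    """The implementation behind this contract is free to change (swap the
    database, move to serverless, containers, or punch cards) as long as the
    published request/response shape stays stable."""
    return PriceResponse(sku=req.sku, total_cents=_lookup_unit_cost(req.sku) * req.quantity)


if __name__ == "__main__":
    print(quote_price(PriceRequest(sku="widget", quantity=3)))
```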
Plus, when you have oodles of microservices hanging around everywhere, first you have a library problem of which service does what. No one knows. And if you ever do track it down, great: each one of those services also needs to be monitored....
Apr 13, 2020 • 12min

Goldilocks and the Three Elastic Beanstalk Consoles

AWS Morning Brief for the week of April 13, 2020.
Apr 10, 2020 • 13min

Whiteboard Confessional: The Rise and Fall of the T-Shaped Engineer

About Corey Quinn
Over the course of my career, I've worn many different hats in the tech world: systems administrator, systems engineer, director of technical operations, and director of DevOps, to name a few. Today, I'm a cloud economist at The Duckbill Group, the author of the weekly Last Week in AWS newsletter, and the host of two podcasts: Screaming in the Cloud and, you guessed it, AWS Morning Brief, which you're about to listen to.

Links
CHAOSSEARCH
Twitter: @QuinnyPig
Red Hat Enterprise Linux
FreeBSD
SaltStack
Puppet
ClusterSSH

Transcript
Corey Quinn: Welcome to AWS Morning Brief: Whiteboard Confessional. I'm Cloud Economist Corey Quinn. This weekly show exposes the semi-polite lie that is whiteboard architecture diagrams. You see, a child can draw a whiteboard architecture, but the real world is a mess. We discuss the hilariously bad decisions that make it into shipping products, the unfortunate hacks the real world forces us to build, and that the best thing to call your staging environment is "theory", because invariably whatever you've built works in theory, but not in production. Let's get to it.

On this show, I talk an awful lot about architectural patterns that are horrifying. Let's instead talk for a moment about something that isn't horrifying: CHAOSSEARCH. Architecturally, they do things right. They provide a log analytics solution that separates out your storage from your compute. The data lives inside of your S3 buckets, and you can access it using APIs you've come to know and tolerate, through a series of containers that live next to that S3 storage. Rather than running massive clusters that you have to care for and feed yourself, you now get to focus on just storing data, treating it like you normally would other S3 data: not replicating it, not storing it on expensive disks in triplicate, and fundamentally not having to deal with the pains of running other log analytics infrastructure. Check them out today at CHAOSSEARCH.io.

For a long time now, I've been a believer in the idea of the T-shaped engineer. And what I mean by that is that you should be broad across a wide variety of technologies, but deep in one or two very specific areas. So, it looks a bit like a T, or an inverted T, depending upon how you wind up visualizing that. I'm describing this with words; I don't have a whiteboard in front of me. Use your imagination, you'll be okay. The point is that whenever you're working in a new environment, or on a new problem, having a broad base of technologies of which you're aware is incredibly useful to fall back upon. Now, the reason to be super deep in one or two areas is that specialization is generally what lets people charge more for various services. People want to hire domain-specific expertise for an awful lot of problems that they want to get solved. So, having something that you can bring into job interviews and more or less mop the floor with people asking questions around that domain is an incredibly valuable thing to have.

But that has some other consequences too, and that's what today's episode of the Whiteboard Confessional is about. Back in my first Unix admin job, I busily began upgrading a whole lot of the infrastructure, ripping out very early Red Hat Enterprise Linux and CentOS version 4 systems and replacing them with the one true operating system, which, of course, is FreeBSD.
And I had a litany of explanations as to why it was the best option, what it could do for various problems, and why there was just absolutely no comparison between FreeBSD and anything else. I could justify it super easily, and the real defense mechanism here was that people get really, really, really tired of talking to zealots, so no one kept questioning me. They just basically said, "Fine, whatever," and got out of the way. Years later, I decided to focus on something that wasn't an esoteric operating system to go super deep in, and that's right, I picked SaltStack, which is configuration management done right, tied to remote execution. I'd worked with Puppet, I'd tolerated CFEngine, and I had a bunch of angry, loud opinions about them, but SaltStack was absolutely the way and the light. So, in the company I was working at at that time, I rolled it out everywhere, and our entire provisioning and configuration management process was run through SaltStack. And I could come up with a whole litany of reasons why this was the right answer, and why no one else was going to be able to come close to the ideal correctness that SaltStack provided. And people eventually stopped arguing with me, because they had better things to do than argue with a zealot about which configuration management system was the right one to go with. I've also talked on previous episodes of the show about using ClusterSSH, and this was before I discovered the religion that was configuration management. It was the right answer because the alternative was running a for loop with shell scripting, which was suboptimal for a wide variety of reasons, and I would explain to everyone why it was suboptimal. So, again, they shrugged, got out of the way, and let me use ClusterSSH. And a similar pattern happened when I was working with large-scale storage. NetApp was the right answer for all of our enterprise storage needs because, let's face it, it wasn't my money. And when it comes to NFS, even today, they are head and shoulders above anything else in the space. And then eventually, it turned to AWS. For a while, I want to say around 2014, 2015, I would tell you why AWS was the right answer for every problem. What challenge are you trying to work with? Well, AWS has an answer for that, because of course they do. Their product strategy is, "yes". Now, what do all of those independent stories have in common? Great question. Let's talk about that. But first…

In the late 19th and early 20th centuries, democracy flourished around the world. This was good for most folks, but terrible for the log analytics industry because there was now a severe shortage of princesses to kidnap for ransom to pay for their ridiculous implementations. It doesn't have to be that way. Consider CHAOSSEARCH. The data lives in your S3 buckets in your AWS accounts, and we know what that costs. You don't have to deal with running massive piles of infrastructure to be able to query that log data with APIs you've come to know and tolerate, and they're just good people to work with. Reach out to CHAOSSEARCH.io. And my thanks to them for sponsoring this incredibly depressing podcast.

The problem is that everything I just mentioned was a pet technology, or a pet company: something that I had taken the time to get deep into and learn. And therefore, it became my de facto answer for a...
Apr 6, 2020 • 11min

Amazon Detective and the Case of the Giant AWS Bill

AWS Morning Brief for the week of April 6, 2020.
Apr 3, 2020 • 12min

Whiteboard Confessional: My Metaphor-Spewing Poet Boss & Why I Don’t Like Amazon ElastiCache for Redis

About Corey Quinn
Over the course of my career, I've worn many different hats in the tech world: systems administrator, systems engineer, director of technical operations, and director of DevOps, to name a few. Today, I'm a cloud economist at The Duckbill Group, the author of the weekly Last Week in AWS newsletter, and the host of two podcasts: Screaming in the Cloud and, you guessed it, AWS Morning Brief, which you're about to listen to.

Links
CHAOSSEARCH
Redis
Amazon ElastiCache
Twitter: @QuinnyPig

Transcript
Corey Quinn: Welcome to AWS Morning Brief: Whiteboard Confessional. I'm Cloud Economist Corey Quinn. This weekly show exposes the semi-polite lie that is whiteboard architecture diagrams. You see, a child can draw a whiteboard architecture, but the real world is a mess. We discuss the hilariously bad decisions that make it into shipping products, the unfortunate hacks the real world forces us to build, and that the best thing to call your staging environment is "theory", because invariably whatever you've built works in theory, but not in production. Let's get to it.

On this show, I talk an awful lot about architectural patterns that are horrifying. Let's instead talk for a moment about something that isn't horrifying: CHAOSSEARCH. Architecturally, they do things right. They provide a log analytics solution that separates out your storage from your compute. The data lives inside of your S3 buckets, and you can access it using APIs you've come to know and tolerate, through a series of containers that live next to that S3 storage. Rather than running massive clusters that you have to care for and feed yourself, you now get to focus on just storing data, treating it like you normally would other S3 data: not replicating it, not storing it on expensive disks in triplicate, and fundamentally not having to deal with the pains of running other log analytics infrastructure. Check them out today at CHAOSSEARCH.io.

When you walk through an airport—assuming that people still go to airports in the state of pandemic in which we live—you'll see billboards saying, "I love my slow database, says no one ever." This is an ad for Redis. And the unspoken implication is that everyone loves Redis. I do not. In honor of the recent release of Global Datastore for Amazon ElastiCache for Redis, today I'd like to talk about that time ElastiCache for Redis helped cause an outage that led to drama. This was a few years back, and I worked at a B2B company—B2B, of course, meaning business-to-business; we were not dealing direct-to-consumer. I was a different person then, and it was a different time. Specifically, the time was late one Sunday evening, and my phone rang. This was atypical because most people didn't have that phone number. At this stage of my life, my default answer when my phone rang was, "Sorry, you have the wrong number." If I wanted phone calls, I'd have taken out a personals ad. Even worse, when I answered the call, it was work. Because I ran the ops team, I was pretty judicious in turning off alerts for anything that wasn't actively harming folks. If it wasn't immediately actionable and causing trouble, then there was almost certainly an opportunity to fix it later during business hours. So, the list of things that could wake me up was pretty small. As a result, this was the first time that I had been called out of hours during my tenure at this company, despite having spent over six months there at this point. So who could possibly be on the phone but my spineless coward of a boss?
A man who spoke only in metaphor; we certainly weren't social friends, because who can be friends with a person like that? "What can I do for you?" "As the roses turn their faces to the sun, so my attention turned to a call from our CEO. There's an incident." My response was along the lines of, "I'm not sure what's wrong with you, but I'm sure it's got a long name and is incredibly expensive to fix." Then I hung up on him and dialed into the conference bridge. It seemed that a customer had attempted to log into our website recently and had gotten an error page, and this was causing some consternation. Now, if you're used to a B2C, or business-to-consumer, environment, that sounds a bit nutty because you'll potentially have millions of customers. If one person hits an error page, that's not a CEO level of engagement. One person getting that error is, sure, still not great, but it's not the end of the world. I mean, Netflix doesn't have an all-hands-on-deck disaster meeting when one person has to restart a stream. In our case, though, we didn't have millions of customers; we had about five, and they were all very large businesses. So, when they said jump, we were already mid-air. I'm going to skip past the rest of that phone call and the evening because it's much more instructive to talk about this with the clarity lent by the sober light of day the following morning, and the post-mortem meeting that resulted from it. So, let's talk about that. After this message from our sponsor.

In the late 19th and early 20th centuries, democracy flourished around the world. This was good for most folks, but terrible for the log analytics industry because there was now a severe shortage of princesses to kidnap for ransom to pay for their ridiculous implementations. It doesn't have to be that way. Consider CHAOSSEARCH. The data lives in your S3 buckets in your AWS accounts, and we know what that costs. You don't have to deal with running massive piles of infrastructure to be able to query that log data with APIs you've come to know and tolerate, and they're just good people to work with. Reach out to CHAOSSEARCH.io. And my thanks to them for sponsoring this incredibly depressing podcast.

So, in hindsight, what happened makes sense, but at the time, when you're going through an incident, everything's cloudy and you're getting conflicting information, and it's challenging to figure out exactly what the heck happened. As it turns out, there were several contributing factors, specifically four of them. And here's the gist of what those four were. Number one, we used Amazon ElastiCache for Redis. Really, we were kind of asking for trouble. Two, as tends to happen with managed services like this, there was a maintenance event that Amazon emailed us about. Given that we weren't completely irresponsible, we braved the deluge of marketing to that email address, and I'd caught this and scheduled it in the maintenance calendar. In fact, we were specifically allowed to schedule when that maintenance took place. So, we scheduled it for a weekend. In hindsight: mistake. When you're having maintenances like this happen, you want to make sure that they take place when there are people around to keep an eye on things. Three, the maintenance was supposed to be invisible. The way that Amazon ElastiCache for Redis works is you have clusters, and you have a primary and you have a replica. The way that they do maintenances is they wind up updating the rep...
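For reference, here is a hedged boto3 sketch of where the two details this story hinges on actually live: which node in a Redis replication group is currently the primary, and when each node's maintenance window is scheduled. The replication group ID is a made-up placeholder, and this is not code from the incident.

```python
# Inspect an ElastiCache for Redis replication group: node roles and maintenance windows.
# "prod-redis" is a hypothetical replication group ID.
import boto3

elasticache = boto3.client("elasticache")

group = elasticache.describe_replication_groups(
    ReplicationGroupId="prod-redis"
)["ReplicationGroups"][0]

for node_group in group["NodeGroups"]:
    for member in node_group["NodeGroupMembers"]:
        # CurrentRole is "primary" or "replica" for Redis with cluster mode disabled.
        print(member["CacheClusterId"], member["CurrentRole"])

        cluster = elasticache.describe_cache_clusters(
            CacheClusterId=member["CacheClusterId"]
        )["CacheClusters"][0]
        # e.g. "sat:03:00-sat:04:00" -- a weekend window when nobody is watching.
        print("  maintenance window:", cluster["PreferredMaintenanceWindow"])
```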
Mar 30, 2020 • 12min

The "AWS For God's Sake Leave Me Alone" Service

AWS Morning Brief for the week of March 30, 2020.
Mar 27, 2020 • 13min

Whiteboard Confessional: Console Recorder: The Thing AWS Should Have Built

About Corey Quinn
Over the course of my career, I've worn many different hats in the tech world: systems administrator, systems engineer, director of technical operations, and director of DevOps, to name a few. Today, I'm a cloud economist at The Duckbill Group, the author of the weekly Last Week in AWS newsletter, and the host of two podcasts: Screaming in the Cloud and, you guessed it, AWS Morning Brief, which you're about to listen to.

Links
CHAOSSEARCH
Console Recorder on Chrome Web Store
Console Recorder on Firefox Add-Ons
Screaming in the Cloud
Ian Mckay's GitHub for AWSConsoleRecorder
Twitter: @QuinnyPig

Transcript
Corey Quinn: Welcome to AWS Morning Brief: Whiteboard Confessional. I'm Cloud Economist Corey Quinn. This weekly show exposes the semi-polite lie that is whiteboard architecture diagrams. You see, a child can draw a whiteboard architecture, but the real world is a mess. We discuss the hilariously bad decisions that make it into shipping products, the unfortunate hacks the real world forces us to build, and that the best thing to call your staging environment is "theory", because invariably whatever you've built works in theory, but not in production. Let's get to it.

On this show, I talk an awful lot about architectural patterns that are horrifying. Let's instead talk for a moment about something that isn't horrifying: CHAOSSEARCH. Architecturally, they do things right. They provide a log analytics solution that separates out your storage from your compute. The data lives inside of your S3 buckets, and you can access it using APIs you've come to know and tolerate, through a series of containers that live next to that S3 storage. Rather than running massive clusters that you have to care for and feed yourself, you now get to focus on just storing data, treating it like you normally would other S3 data: not replicating it, not storing it on expensive disks in triplicate, and fundamentally not having to deal with the pains of running other log analytics infrastructure. Check them out today at CHAOSSEARCH.io.

You'll notice that I periodically refer to various friends of the show as code terrorists. It's never been explained, until now, exactly what that means and why I use that term. So, today, I thought I'd go ahead and help shine a little bit of light on that. One of the folks that I refer to most frequently is Ian Mckay, a fine gentleman based out of Australia. And he's built something that is both amazing and terrible all at the same time, called Console Recorder. But let me back up before I dive into this monstrosity. Let's talk about how we get things that we build in AWS into production. There are basically four tiers. Tier one is using the AWS web console itself: we click around and we build things. Great. Tier two is we use CloudFormation like sensible folks. Tier three is Terraform with all of its various attendant tooling. And then there's the ultra tier four that I do, which is we use the AWS console and then we lie about it. Some folks are going to chime in here and say that, oh, you should use the CDK, or something like that, or custom scripts that wind up spinning up production. And all of those are well and good, but only recently did CloudFormation release the ability to import existing resources. And even then, much like Terraform import, it's pretty gnarly and not at all terrific. So, what do you wind up generally doing? Well, if you're like me, you'll stand up production resources inside of an AWS account.
You will click around in the console—I always start with the console, not because I don't know how these other tools work; that's a side point, but rather because it helps me get a sense for how these services are imagined by the teams building them. They tend to assume that everyone who interacts with them is going to go through the console at some point, or at least they should. So, it gives me access and exposure to what their vision of the service is. Then, once you've built something up, it often finds its way into production, if you're at all like me: I'll spin something up just to test something, and it works, and oh my stars, suddenly you just want to get it out the door and not worry about it, so you don't go back and rebuild it properly. So, now you're left with this hand-built thing that's just flapping around out there in production. What are you supposed to do? Well, according to the AWS experts, if we're allowed to use that term to describe them, you're supposed to smack yourself on the forehead, be convinced that you're fundamentally missing the boat here, throw everything you've just built away, and go back and do it properly. Which admittedly seems a little bit on the nose for those of us who've done exactly this far more times over the course of our careers than we would care to count. So, today, however, I posit that there's an alternate approach that doesn't require support from AWS, which, to be honest, long ago seems to have given up on solving this particular problem in a way that human beings can understand. And I'd like to tell you about that, after this brief message from our sponsor.

In the late 19th and early 20th centuries, democracy flourished around the world. This was good for most folks, but terrible for the log analytics industry because there was now a severe shortage of princesses to kidnap for ransom to pay for their ridiculous implementations. It doesn't have to be that way. Consider CHAOSSEARCH. The data lives in your S3 buckets in your AWS accounts, and we know what that costs. You don't have to deal with running massive piles of infrastructure to be able to query that log data with APIs you've come to know and tolerate, and they're just good people to work with. Reach out to CHAOSSEARCH.io. And my thanks to them for sponsoring this incredibly depressing podcast.

This brings us back to where we started this conversation: with defining what a code terrorist is and pointing out Ian Mckay, out of Australia, who's built something absolutely terrifying called Console Recorder for AWS. This is a browser extension that works in Chrome and in Firefox, and possibly others; I stopped looking after those two. What it does is this: you click a button in your browser when you start doing something inside the AWS console, and it does exactly what it says on the tin, where it starts recording what you're doing. Its icon turns into a bright red record symbol, and you do whatever it is you're going to do to spin something up. When you're done, you hit the button again and say stop recording, and it opens a brand new tab. ...
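For anyone curious what the resource-import capability mentioned above looks like in practice, here is a hedged boto3 sketch of bringing a hand-built S3 bucket under CloudFormation's management via an IMPORT change set. The stack name, bucket name, and logical ID are placeholders, and the template is simplified; this is an illustration, not the episode's workflow.

```python
# Import an existing, console-built S3 bucket into a CloudFormation stack.
# All names below are hypothetical placeholders.
import json
import boto3

cfn = boto3.client("cloudformation")

# The template must declare the resource being imported, and imported resources
# need an explicit DeletionPolicy. (In a real stack, the template also has to
# include every resource the stack already manages.)
template = {
    "Resources": {
        "HandBuiltBucket": {
            "Type": "AWS::S3::Bucket",
            "DeletionPolicy": "Retain",
            "Properties": {"BucketName": "my-clicked-together-bucket"},
        }
    }
}

cfn.create_change_set(
    StackName="production-stack",
    ChangeSetName="import-hand-built-bucket",
    ChangeSetType="IMPORT",
    TemplateBody=json.dumps(template),
    ResourcesToImport=[
        {
            "ResourceType": "AWS::S3::Bucket",
            "LogicalResourceId": "HandBuiltBucket",
            "ResourceIdentifier": {"BucketName": "my-clicked-together-bucket"},
        }
    ],
)

# After reviewing the change set in the console or via describe_change_set:
# cfn.execute_change_set(ChangeSetName="import-hand-built-bucket", StackName="production-stack")
```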
Mar 23, 2020 • 10min

Watch Your Bill or They'll CloudWatch It For You

AWS Morning Brief for the week of March 23, 2020.
Mar 20, 2020 • 14min

Whiteboard Confessional: Configuration MisManagement

About Corey Quinn
Over the course of my career, I've worn many different hats in the tech world: systems administrator, systems engineer, director of technical operations, and director of DevOps, to name a few. Today, I'm a cloud economist at The Duckbill Group, the author of the weekly Last Week in AWS newsletter, and the host of two podcasts: Screaming in the Cloud and, you guessed it, AWS Morning Brief, which you're about to listen to.

Show Notes
CHAOSSEARCH.io
Twitter: https://twitter.com/QuinnyPig

Transcript
Corey Quinn: Welcome to AWS Morning Brief: Whiteboard Confessional. I'm Cloud Economist Corey Quinn. This weekly show exposes the semi-polite lie that is whiteboard architecture diagrams. You see, a child can draw a whiteboard architecture, but the real world is a mess. We discuss the hilariously bad decisions that make it into shipping products, the unfortunate hacks the real world forces us to build, and that the best thing to call your staging environment is "theory", because invariably whatever you've built works in theory, but not in production. Let's get to it.

On this show, I talk an awful lot about architectural patterns that are horrifying. Let's instead talk for a moment about something that isn't horrifying: CHAOSSEARCH. Architecturally, they do things right. They provide a log analytics solution that separates out your storage from your compute. The data lives inside of your S3 buckets, and you can access it using APIs you've come to know and tolerate, through a series of containers that live next to that S3 storage. Rather than running massive clusters that you have to care for and feed yourself, you now get to focus on just storing data, treating it like you normally would other S3 data: not replicating it, not storing it on expensive disks in triplicate, and fundamentally not having to deal with the pains of running other log analytics infrastructure. Check them out today at CHAOSSEARCH.io.

Historically, many best practices were, in fact, best practices. But over time, the way that we engage with systems changes. The problems that we're trying to solve for start resembling other problems. And, at some point, entire industries shift. So, what you should have been doing five years ago is not necessarily what you should be doing today. Today, I'd like to talk a little bit about not one or two edge case problems, as I have in previous editions of the Whiteboard Confessional, but rather an overall pattern that's shifted. And that shift has been surprisingly sudden, yet gradual enough that you may not entirely have noticed. This goes back to, let's say, 2012, 2013, and is in some ways the story of how I learned to speak publicly. So this is indirectly one of the origin stories of me as a podcaster, and of my continuing to engage my ongoing love affair with the sound of my own voice. I was one of the very early developers behind SaltStack. Salt, for those who are unfamiliar, is a remote execution framework slash configuration management system that let me participate in code development. It turns out that when you have a pattern of merging every random pull request that some jackass winds up submitting, and then immediately submitting a follow-up pull request that fixes everything you just merged in, it's, first, not the most scalable thing in the world, but on balance it provides such a wonderful, welcoming community that people become addicted to participating in it. And SaltStack nailed this in the early days.
Now, before this, I'd been focusing on configuration management in a variety of different ways. Some of the very early answers for this were CFEngine, which was written by an academic and is exactly what you would expect an academic to write. It feels more theoretical than something that you would want to use in production. But okay. Bcfg2 was something else in this space, and the fact that that is its name tells you everything you need to know about how user-friendly it was. And then the world shifted. We saw Puppet and Chef both arise. You can argue which came first; I don't care enough in 2020 to have that conversation. But they wound up promoting a model of a domain-specific language, in Puppet's case, versus Chef, where they decided, "All right, great, we're gonna build this stuff out in Ruby." From there, we then saw a further evolution of Ansible and SaltStack, which really round out the top four. Now, all of these systems fundamentally do the same thing, which is: how do we wind up making the current state of a given system look like it should? That means, how do you make sure that certain packages are installed across all of your fleet? How do you make sure that the right users exist across your entire fleet? How do you guarantee that there are files in place that have the right contents? And when the contents of those files change, how do you restart services? Effectively, how do you run arbitrary commands and converge the state of a remote system so it looks like it should? Because trying to manage systems at scale is awful. You heard in a previous week what happened when I tried to run this sort of system by using a distributed SSH client. Yeah, it turns out that mistakes are huge and hard to fix. This speaks toward the direction of moving to cattle instead of pets when it comes to managing systems. And all of these systems more or less took a different approach to it, and some were more aligned with how I saw the world than others. So I started speaking about SaltStack back in 2012 and giving conference talks. The secret to giving a good conference talk, of course, is to give a whole bunch of really terrible ones first, and woo boy, were these awful. I would put documentation on the slides. I would then read the documentation to people, frantically trying to teach folks the ins and outs of a technical system in 45 minutes or less. It was about as engaging as it probably sounds. Over time, I learned not to do that, but because no one else was speaking about SaltStack, I was sort of in a rarefied position of being able to tell a story, and learn to tell stories, about a platform that I was passionate about, as it engaged a larger and larger community. Now, why am I talking about all of this on the Whiteboard Confessional? Excellent question. But first…

In the late 19th and early 20th centuries, democracy flourished around the world. This was good for most folks, but terrible for the log analytics industry because there was now a severe shortage of princesses to kidnap for ransom to pay for their ridiculous implementations. It doesn't have to be that way. Consider CHAOSSEARCH. The data lives in your S3 buckets in your AWS accounts, and we know what that costs. You don't have to deal with running massive piles of infrastructure to be able to query that log data with APIs you've come to know and tolerate, and they're just good people to work with. Reach out to CHAOSSEARCH.io. And my thanks to them for sponsoring this incredibly depressing podcast.
The reason I bring up configuration management across the board is not because I want to talk about the pattern of doing terrible things within it, and oh, the...
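To make the "converge the state of a remote system" idea concrete, here is a toy Python sketch of the desired-state pattern those tools share: check whether actual state matches desired state, change it only if it doesn't, and restart the dependent service only when something actually changed. It is deliberately not SaltStack (or Puppet, or Chef) code, and the file path, contents, and service name are invented.

```python
# A toy sketch of idempotent, converge-to-desired-state configuration management.
# Paths, contents, and the service name are hypothetical.
import subprocess
from pathlib import Path


def converge_file(path: str, desired_contents: str) -> bool:
    """Make the file match the desired contents; report whether anything changed."""
    target = Path(path)
    if target.exists() and target.read_text() == desired_contents:
        return False                       # already converged, do nothing
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(desired_contents)    # bring actual state in line with desired state
    return True


def converge_webserver() -> None:
    changed = converge_file(
        "/etc/myapp/app.conf",
        "listen_port = 8080\nlog_level = info\n",
    )
    if changed:
        # Restart the service only when the managed file actually changed,
        # mirroring the "watch a file, restart a service" behavior described above.
        subprocess.run(["systemctl", "restart", "myapp"], check=True)


if __name__ == "__main__":
    converge_webserver()
```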
Mar 16, 2020 • 10min

The Saddest Kubernetes Hanukkah

AWS Morning Brief for the week of March 16, 2020.
