

AWS Morning Brief
Corey Quinn
The latest in AWS news, sprinkled with snark. Posts about AWS come out over sixty times a day. We filter through it all to find the hidden gems, the community contributions--the stuff worth hearing about! Then we summarize it with snark and share it with you--minus the nonsense.
Episodes

Oct 9, 2020 • 27min
The Cloud is Not Just Another Data Center (Whiteboard Confessional)
About Corey Quinn
Over the course of my career, I've worn many different hats in the tech world: systems administrator, systems engineer, director of technical operations, and director of DevOps, to name a few. Today, I'm a cloud economist at The Duckbill Group, the author of the weekly Last Week in AWS newsletter, and the host of two podcasts: Screaming in the Cloud and, you guessed it, AWS Morning Brief, which you're about to listen to.

Links
A Cloud Guru blog post, Lift and Shift Shot Clock: https://acloudguru.com/blog/engineering/the-lift-and-shift-shot-clock-cloud-migration
The Duckbill Group: https://www.duckbillgroup.com/

Transcript
Corey: This episode is sponsored in part by Catchpoint. Look, 80 percent of performance and availability issues don't occur within your application code in your data center itself. They occur well outside those boundaries, so it's difficult to understand what's actually happening. What Catchpoint does is make it easier for enterprises to detect, identify, and of course, validate how reachable their application is, and of course, how happy their users are. It helps you get visibility into reachability, availability, performance, reliability, and of course, absorbency, because we'll throw that one in, too. And it's used by a bunch of interesting companies you may have heard of, like, you know, Google, Verizon, Oracle—but don't hold that against them—and many more. To learn more, visit www.catchpoint.com, and tell them Corey sent you; wait for the wince.

Pete: Hello, and welcome to the AWS Morning Brief: Whiteboard Confessional. I am again Pete Cheslock, not Corey Quinn. He is still out, so you're stuck with me for the time being. But not just me, because I am pleased to have Jesse DeRose join me again today. Welcome back, Jesse.

Jesse: Thanks again for having me.

Pete: So, we are taking this podcast down a slightly different approach. If you've listened to the last few that Jesse and I have run while Corey has been gone, we've been focusing on deep-diving into some interesting, in some cases new, Amazon services. But today, we're actually not talking about any specific Amazon service. We're talking about another topic we're both very passionate about, and it's something we see a lot with our clients at The Duckbill Group: people treating the Cloud like a data center. What we know is that the Cloud, Amazon, these are not just data centers, and if you treat them like one, you're not actually going to save any money and you're not going to get any of the benefits out of them. And so there's an impact that these companies will face when they choose between something like cloud-native versus cloud-agnostic or a hybrid-cloud model as they adopt cloud services. So, let's start with a definition of each one. Jesse, can you help me out on this?

Jesse: Absolutely. So, a lot of companies today are cloud-native. They focus primarily on one of the major cloud providers when they initially start their business, and they leverage whatever cloud-native offerings are available within that cloud provider, rather than leveraging a data center.
So, they pay for things like AWS Lambda, or Azure Functions, or whatever cloud offering Google's about to shut down next. Rather than paying for a data center, rather than investing in physical hardware and spinning up virtual machines, they focus specifically on the cloud-native offerings available to them within their cloud provider.

Cloud-agnostic, on the other hand, is usually leveraged by organizations that already use data centers, so they're harder pressed to immediately migrate to the Cloud: the ROI is murkier, and there are definitely sunk costs involved. So, in some cases, they focus on the cloud-agnostic model, where they leverage their own data centers and cloud providers equally. Effectively, all they're looking for is some kind of compute resource to run all their virtual servers, whether that is in their own data center or one of the various cloud providers, and then their application runs on top of that in some form.

Last but not least, the hybrid-cloud model can take a lot of forms, but the one we see most often is clients moving from their physical data centers to cloud services. Effectively, this looks like continuing to run static workloads or monolith infrastructure in physical data centers, and running new or ephemeral workloads in the Cloud. So, this often translates to: the old and busted stays where it is, and new development goes into the Cloud.

Pete: Yeah, we see this quite a bit, where a client will be running in their existing data centers, and they want all the benefits that the Cloud can give them, but maybe they don't want to truly go all-in on the Cloud. They don't want to adopt some of the PaaS services because of fear of lock-in. And we're definitely going to talk about vendor lock-in, because I think that is a super-loaded term that gets used a lot. Hybrid-cloud, too, is an interesting one, because some people think that this is actually running across multiple cloud providers, and that's just something we don't see a lot of. I don't think there are a lot of clients or companies out there running true multi-cloud, which I think is the term you would really hear. And the main reason I believe that not a lot of people are doing this, running a single application across multiple clouds, is that people don't talk about it at conferences. At conferences, people talk about all the things that they do, when in reality it's really wishful thinking. And yet no one is willing to talk about this kind of, "oh, we're multi-cloud," in, again, a single-application world. So, one thing we do see across these three models at a high level (cloud-native, agnostic, hybrid-cloud) is that the spend is just dramatically different if you were to compare multiple companies across these different use cases. Jesse, what are some of the things that you've seen across these models that have impacted spend?

Jesse: I think first and foremost, it's really important to note that this is a hard decision to make from a business context, because there are a lot of different players involved in the conversation. Engineering generally wants to move into the Cloud because that's what their engineers are familiar with, whereas finance is familiar with an operating model that does not clearly fit the Cloud. Specifically, we're talking about CapEx versus OpEx: capital expenditures versus operating expenditures.
Finance comes from a mindset of capital expenditures, where they are writing off funds used to maintain, acquire, or upgrade physical assets over time. So, a lot of enterprise companies manage capital expenditure for all the physical hardware in their data centers. It's a very clear line item to say, "We boug...

Oct 7, 2020 • 12min
Reader Mailbag: AWS Services (AMB Extras)
Links Mentioned
Want to give your ears a break and read this as an article? You're looking for this link: https://www.lastweekinaws.com/blog/reader-mailbag-aws-services/

Sponsors
StrongDM: https://strongdm.com
Linode: https://www.linode.com

Never miss an episode
Join the Last Week in AWS newsletter
Subscribe wherever you get your podcasts

Help the show
Leave a review
Share your feedback
Subscribe wherever you get your podcasts

What's Corey up to?
Follow Corey on Twitter (@quinnypig)
See our recent work at the Duckbill Group
Apply to work with Corey and the Duckbill Group to help lower your AWS bill

Oct 5, 2020 • 9min
No Hateration or Holleration in this Dancery
AWS Morning Brief for the week of October 5th, 2020 featuring guest host Angela Andrews.

Oct 2, 2020 • 27min
Turn on AWS Cost Anomaly Detection Right Now—It’s Free (Whiteboard Confessional)
About Corey Quinn
Over the course of my career, I've worn many different hats in the tech world: systems administrator, systems engineer, director of technical operations, and director of DevOps, to name a few. Today, I'm a cloud economist at The Duckbill Group, the author of the weekly Last Week in AWS newsletter, and the host of two podcasts: Screaming in the Cloud and, you guessed it, AWS Morning Brief, which you're about to listen to.

Transcript
Corey: This episode is sponsored in part by Catchpoint. Look, 80 percent of performance and availability issues don't occur within your application code in your data center itself. They occur well outside those boundaries, so it's difficult to understand what's actually happening. What Catchpoint does is make it easier for enterprises to detect, identify, and of course, validate how reachable their application is, and of course, how happy their users are. It helps you get visibility into reachability, availability, performance, reliability, and of course, absorbency, because we'll throw that one in, too. And it's used by a bunch of interesting companies you may have heard of, like, you know, Google, Verizon, Oracle—but don't hold that against them—and many more. To learn more, visit www.catchpoint.com, and tell them Corey sent you; wait for the wince.

Pete: Hello and welcome to the AWS Morning Brief: Whiteboard Confessional. Corey is still not back. Of course, he did just leave for paternity leave, so we will see him in a few weeks. So, you're stuck with me, Pete Cheslock, until then. But luckily, I am joined again by Jesse DeRose. Jesse, thanks again for joining me today.

Jesse: Thank you for having me. You know, I have to say I love recording from home. I can't see the look in our listeners' eyes as they glaze over while we're talking. It's absolutely fantastic.

Pete: It's fantastic. It's like a conference talk, but there are no questions at the end. It's the best thing ever.

Jesse: Yeah, absolutely. I love it.

Pete: All right. Well, we had so much fun last week talking about a new service. Although it turns out it was new to us. It was the AWS Detective—or Amazon Detective. There's still some debate about what the actual official name of that service is. For some reason, I thought that service came out in the summertime, but it turns out it was earlier in the year. So, still a great service, AWS Detective—or Amazon Detective, whichever way you go with that one—but we had such a fun time talking about a new service that we had the opportunity of testing out an actual brand new service. This was a service that was just announced last Friday. And that's the AWS Cost Anomaly Detection service. Jesse, what is this service all about?

Jesse: So, you likely would notice if your AWS spend spiked suddenly, but only the really, really mature organizations would be able to tell immediately which service spiked. Like, if it's one of your top five AWS services by spend, you'd probably know that it spiked, and you'd probably be able to see that easily in either your billing statement or in Cost Explorer. But what if you're talking about a spike in a much smaller amount of spend that's still important to you, but on a service that you don't spend a ton of money on: a service that is not a large percentage of your bill. Let's say you use WorkSpaces, and you only spend $20 a month on WorkSpaces. You ultimately do want to know if that spend spikes 100 percent or 200 percent, but overall, that's still only maybe $20 on your bill.
So, that's not something you'd see very easily unless it spikes exponentially. And the existing solutions for this problem require a lot of hands-on work. You either need to know what your baseline spend is, in the case of AWS Budgets, or you need to perform some kind of manual analysis via custom spreadsheets or business intelligence tools. But AWS Cost Anomaly Detection gets rid of a lot of those things. It allows you to look at anomalous spend as a first-class citizen within AWS.

Pete: Yeah, the other trick too, with this anomalous spending—and I've gotten really good at learning how to spell 'anomaly' because I've always spelled it very wrong my entire life, but in just writing the preparatory material for this, the number of times I spelled anomaly has really solved that problem for me. Now, sometimes those mature organizations might see that anomalous spend maybe the day after, maybe the week after, but I've been a part of organizations where they see that spend when the bill comes. That's actually pretty common. You're not an outlier if you only identify these outliers in spend when your bill arrives. And that outlier in spend could be something like, "Wow, we changed a script, and we're doing a bunch of list requests, and wow, where'd that $8,000 come from?" or, "We're testing out Amazon Aurora and we did a lot of IOs last weekend, and our estimated bill is going to be $20,000." Those are all things that, if you're not a crazy person who's so in love with your bill that you look at it every day, you're going to miss, right? You're just going to wait for the invoice. That's what happens to everyone, right, Jesse?

Jesse: Absolutely. Yeah, it has been really fascinating for us to see this pattern again and again, honestly, with some of the clients that we've worked with, but also within the companies that I've worked with over the years. It's just not something that is thought about much until finance sees the bill at the end of the month, or after the end of the month, and then it becomes a retroactive conversation, or a retrospective to figure out what happened. And that's not the best way to think about this.

Pete: Yeah, exactly. I mean, the best way to save money on your bill—something we see every day—is to avoid the charge, right? Avoid those extra charges. And the way you can do that is to know of an anomaly in advance. So, one of the best parts of this feature—I can't believe we've made it nearly five minutes into this conversation without calling out the most impressive part of Anomaly Detection—is the fact that it's all ML-powered. Now, I know what you're thinking: you just cringed when I said ML, that is, machine learning. And I cringe whenever a company markets based on machine learning. The rule that I have is, you need to tell me how many PhDs are on your staff before I believe you can actually do machine learning.

Jesse: [laughs].

Pete: In the Amazon case, as it turns out, I could guess that they hire quite a few PhDs, so I feel like I'm going to give them a pass on this one.

Jesse: I feel like this is going to be a fun over-under conversation of how many PhDs were on the team that put this service together, or built the machine learning component of AWS Cost Anomaly Detection.

Pete: I'll tell you what, it's gotta be more than most SaaS services that market towards machine learning.

Jesse: Absolutely.

Pet...
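For anyone who wants to kick the tires beyond the console, here's a minimal sketch of wiring this up through the Cost Explorer API with boto3, which is where the Anomaly Detection calls live. The monitor and subscription names, the email address, and the $100 impact threshold are made up for illustration, and you'd need the relevant Cost Explorer IAM permissions; treat it as a starting point rather than a definitive setup.

```python
# Hypothetical sketch: enable AWS Cost Anomaly Detection programmatically.
# The names, email address, and threshold below are illustrative placeholders.
import boto3

ce = boto3.client("ce")  # the Cost Explorer client also fronts Anomaly Detection

# Watch spend per AWS service across the account.
monitor = ce.create_anomaly_monitor(
    AnomalyMonitor={
        "MonitorName": "per-service-spend",  # illustrative name
        "MonitorType": "DIMENSIONAL",
        "MonitorDimension": "SERVICE",
    }
)

# Send a daily email digest for anomalies with roughly $100 or more of impact.
ce.create_anomaly_subscription(
    AnomalySubscription={
        "SubscriptionName": "daily-anomaly-digest",  # illustrative name
        "MonitorArnList": [monitor["MonitorArn"]],
        "Subscribers": [{"Address": "billing-alerts@example.com", "Type": "EMAIL"}],
        "Threshold": 100.0,
        "Frequency": "DAILY",
    }
)

# Later, pull whatever the service has flagged for a given window.
anomalies = ce.get_anomalies(
    DateInterval={"StartDate": "2020-09-01", "EndDate": "2020-10-01"}
)
for anomaly in anomalies["Anomalies"]:
    print(anomaly["AnomalyId"], anomaly.get("DimensionValue"), anomaly["Impact"]["MaxImpact"])
```

The appeal described in the episode holds either way: the baseline and the "is this weird?" judgment are handled for you, instead of you hand-maintaining budgets or spreadsheets.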

Sep 30, 2020 • 10min
Paternity Leave (AMB Extras)
Links Mentioned
Want to give your ears a break and read this as an article? You're looking for this link: https://www.lastweekinaws.com/blog/paternity-leave/

Sponsors
StrongDM: https://strongdm.com
New Relic: https://newrelic.com

Never miss an episode
Join the Last Week in AWS newsletter
Subscribe wherever you get your podcasts

Help the show
Leave a review
Share your feedback
Subscribe wherever you get your podcasts

What's Corey up to?
Follow Corey on Twitter (@quinnypig)
See our recent work at the Duckbill Group
Apply to work with Corey and the Duckbill Group to help lower your AWS bill

Sep 28, 2020 • 10min
Cost Anam--Anom--screw it, Cost Outlier Detection
AWS Morning Brief for the week of September 27th, 2020.

Sep 25, 2020 • 25min
Inspecting Amazon Detective (Whiteboard Confessional)
Links
The Duckbill Group: https://www.duckbillgroup.com/

Transcript
Corey: This episode is sponsored in part by Catchpoint. Look, 80 percent of performance and availability issues don't occur within your application code in your data center itself. They occur well outside those boundaries, so it's difficult to understand what's actually happening. What Catchpoint does is make it easier for enterprises to detect, identify, and of course, validate how reachable their application is, and of course, how happy their users are. It helps you get visibility into reachability, availability, performance, reliability, and of course, absorbency, because we'll throw that one in, too. And it's used by a bunch of interesting companies you may have heard of, like, you know, Google, Verizon, Oracle—but don't hold that against them—and many more. To learn more, visit www.catchpoint.com, and tell them Corey sent you; wait for the wince.

Pete: Hello, and welcome to the AWS Morning Brief: Whiteboard Confessional. You are not confused. This is definitely not Corey Quinn. This is Pete Cheslock. I was the recurring guest. I've pushed Corey away and just taken over his entire podcast. But don't worry, he'll be back soon enough. Until then, I'm joined by a very special guest, Jesse DeRose. Jesse, want to say hi?

Jesse: Howdy everybody.

Pete: Jesse and I are two of the cloud economists who work with Corey here at The Duckbill Group, and I convinced Jesse to come and join me today to talk about a new Amazon service that we had the pleasure—mm, you be the judge of that—of testing out recently, a service called Amazon Detective. This is a new service that I want to say was announced a couple of weeks ago, actually longer than that, because, as you'll learn, it took us a little while to get a fully up-and-running version of this going so we could do a full test on it. But as you can imagine, we get a chance to try out a lot of new Amazon services. And when we saw this service come out, we were pretty excited. Jesse, maybe you can chat a little bit about what piqued your interest when we first heard of Amazon Detective.

Jesse: So, we here do a lot of analysis work with VPC Flow Logs. There's so much interesting data to be discovered in your VPC Flow Logs, and I really enjoy getting information out of those logs. But ultimately, digging into those logs via AWS's existing services can be a bit frustrating; it can be a bit time-consuming to go through the administrative overhead to analyze those logs. So, for me, I was really excited about seeing how AWS Detective would automatically allow us to dig into some of that data, ideally more fluidly, or more organically, or naturally, to get at the same information with, ideally, less hassle.

Pete: Exactly. So, for those who have not heard of AWS Detective yet, I'm just going to read off a little bit of what we read in the Amazon documentation that actually got us so excited. They talked a lot about these different security services like Amazon GuardDuty, Macie, Security Hub, and all these partner products, but finding a central source for all of this data was challenging. And one of the things they actually called out, which got us really excited, is these few sentences.
They said, "Amazon Detective can analyze trillions of events from multiple data sources such as Virtual Private Cloud (VPC) Flow Logs, AWS CloudTrail, and Amazon GuardDuty, and automatically creates a unified, interactive view of your resources, users, and the interactions between them over time." It was actually this sentence that got us really excited because, as Jesse mentioned, we spend a lot of time trying to understand our clients' data transfer usage. What is talking to what? Why is there a charge for data transfer between certain services? Why is it so high? Why is it growing? And we spend, unfortunately, a lot of time digging around in the VPC Flow Logs. So, when we saw this, we got really excited because—well, Jesse, how do we do this today? How do we actually glean insight from Flow Logs?

Jesse: It's a frustrating process. I feel like there has got to be a better way for us to get this information from a lot of our clients. Every single time we have to ask our clients to send over or share these VPC Flow Logs, there's that little wince of the implied, "I'm so sorry that we have to ask you to do it this way," because it's doable, but it requires syncing data between S3 buckets and creating and running Athena queries; there are lots of little pieces required to build up to the actual analysis itself. There are no first-class citizens when it comes to analyzing these logs.

Pete: It's really true. And Athena, the Data Factory—the Data Glue—what is it? Glue. You have to create a Glue Catalog. It's just a lot of work when we're really just trying to understand who and what are the top producers and consumers of data that is likely impacting spend for a client. So, we saw this and we thought to ourselves, "Wow, that one sentence they put in the list said, 'the interactions between all of these resources and users over time.'" We got really excited for this. We also got excited because, of course, we love understanding how much things cost, and the pricing for Detective didn't seem that crazy. I mean, it's not great, but it's all based on ingested logs, which they don't really describe. So, our assumption is that if you send it your VPC Flow Logs, or CloudTrail logs, or whatever, you're going to pay for those on top of probably already paying for them today. So, that could be a deal-breaker for some clients out there.

Jesse: That's the thing that was super frustrating, or super interesting, for me: AWS Detective, in terms of pricing and in terms of technology and capability, doesn't replace any of these other components. It is additive, which, generally speaking, I think is great, but when you start looking at it from a price perspective, that means you're going to pay for CloudTrail logs, and VPC Flow Logs, and GuardDuty, and Macie, and all of these other services, and now you're going to pay for AWS Detective on top of that. So, it feels like you're paying twice for a lot of these services, when you could do a lot of the same analysis work yourself. It's probably not going to be as clean to do it yourself, in terms of building out the Glue Catalogs and Athena tables and queries we talked about, but ultimately it may be less expensive because you're not paying for all these additive services on top of each other.

Pete: Exactly. I think we're definitely not being fair to the Amazon Detective product teams, because we're trying to use this service, hoping it solves a really specific, painful use case for us.
And really, it's just based on what we found in their public-facing marketing.

So, how does this actually work? Well, we found some really great information online via Amazon. They did a great job documenting how this all works. Essentially, you enable Amazon Detective, and you enable CloudTrail, VPC Flow Logs, and GuardDuty; you have to enable it in multiple accounts, and Jesse can talk a little bit more about some of the caveats we ran into just setting it...
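For context, the do-it-yourself path Pete and Jesse describe (flow logs landed in S3, a Glue table over them, Athena on top) looks roughly like the sketch below. The "vpc_logs" database, "flow_logs" table, and results bucket are invented names, and the table is assumed to expose the standard version-2 flow log fields (srcaddr, dstaddr, bytes); this is a sketch of the manual workflow they're describing, not anything Detective itself exposes.

```python
# Rough sketch of the DIY analysis: ask Athena (over a Glue-cataloged VPC Flow Logs
# table) which source/destination pairs are moving the most bytes.
# Database, table, and bucket names are illustrative placeholders.
import time
import boto3

athena = boto3.client("athena")

QUERY = """
SELECT srcaddr, dstaddr, SUM(bytes) AS total_bytes
FROM flow_logs
GROUP BY srcaddr, dstaddr
ORDER BY total_bytes DESC
LIMIT 25
"""

execution = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "vpc_logs"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes; real tooling would back off and surface failures properly.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows[1:]:  # the first row is the column header
        src, dst, total = (col.get("VarCharValue", "") for col in row["Data"])
        print(f"{src} -> {dst}: {total} bytes")
```

Which is exactly the "sync to S3, build a Glue Catalog, write Athena queries" overhead the hosts are lamenting; Detective's pitch is that it builds that linked view for you, and the open question from the episode is whether the convenience is worth effectively paying for the same logs twice.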

Sep 23, 2020 • 13min
Reader Mailbag: Billing (AMB Extras)
Links Mentioned
Want to give your ears a break and read this as an article? You're looking for this link: https://www.lastweekinaws.com/blog/reader-mailbag-billing/

Sponsors
A Cloud Guru: https://acloudguru.com
New Relic: https://newrelic.com

Never miss an episode
Join the Last Week in AWS newsletter
Subscribe wherever you get your podcasts

Help the show
Leave a review
Share your feedback
Subscribe wherever you get your podcasts

What's Corey up to?
Follow Corey on Twitter (@quinnypig)
See our recent work at the Duckbill Group
Apply to work with Corey and the Duckbill Group to help lower your AWS bill

Sep 21, 2020 • 9min
EC2 Gets t4gging Support
AWS Morning Brief for the week of September 21, 2020.

Sep 18, 2020 • 23min
Chef Gets Gobbled Up (Whiteboard Confessional)
Transcript
Corey: This episode is sponsored in part by Catchpoint. Look, 80 percent of performance and availability issues don't occur within your application code in your data center itself. They occur well outside those boundaries, so it's difficult to understand what's actually happening. What Catchpoint does is make it easier for enterprises to detect, identify, and of course, validate how reachable their application is, and of course, how happy their users are. It helps you get visibility into reachability, availability, performance, reliability, and of course, absorbency, because we'll throw that one in, too. And it's used by a bunch of interesting companies you may have heard of, like, you know, Google, Verizon, Oracle—but don't hold that against them—and many more. To learn more, visit www.catchpoint.com, and tell them Corey sent you; wait for the wince.

Welcome to the AWS Morning Brief: Whiteboard Confessional, now with recurring perpetual guest, Pete Cheslock. Pete, how are you?

Pete: I'm back again.

Corey: So, today I want to talk about something that really struck an awful lot of nerves across, well, the greater internet. You know, the mountains of thought leadership, otherwise known as Twitter. Specifically, Chef has gotten itself acquired.

Pete: Yeah, I saw some, I guess you would call them, sub-tweets from some Chef employees before it was announced, which is kind of common, where responses ranged from, "Oh, that's something new," to, "Welp." And I thought, "Wow, that's interesting." Of course, then I started looking for news of what happened, which we all found out not long after.

Corey: Before we go into it, let's set the stage here, because it turns out not everyone went through the battles of configuration management circa 2012 to 2015 or so—at least in my experience. What did Chef do? What was the product that Chef offered? Who the heck are they?

Pete: So, Chef, they were kind of a fast follower in the configuration management space to another very popular tool that I'm sure people have used out there called Puppet. Actually, interestingly enough, the founders of Chef ran a consulting company that was doing Puppet consulting; they were helping companies use Puppet. And both of those tools really came from yet another tool called CFEngine, which in many ways—depending on who you ask—is kind of considered the original configuration management tool, the one that had probably the earliest, largest usage. But it was very difficult to use. CFEngine was not something that was easy; it had a really high barrier to entry. And tools like Puppet and Chef, which came out around the, let's say, 2007 to 2010 timeframe, were written in Ruby, which was a little bit easier of a programming language to get up and running with. And this solved a problem for a lot of companies who needed to configure and manage lots of servers easily.

Corey: And there are basically four companies in here that really nailed it for this era: you had Puppet, Chef, Salt, and Ansible. And in the interest of full disclosure, I was a very early developer behind SaltStack, and I was a traveling contract trainer for Puppet for a while. I never got deep into Chef myself for a variety of reasons.
First and foremost was that its configuration language was fundamentally Ruby, and my approach back then—because I wasn't anything approaching a developer—was that if I needed to learn a full-featured programming language at some point, well, why wouldn't I just pivot to becoming, instead, a developer in that language and not have to worry about infrastructure? Instead, go and build applications, work nine to five, and not get woken up in the middle of the night when something broke. That may have been the wrong direction, but that was where I sat at the time.

Pete: Yeah, I came at it from a different world. I had worked for a startup that no one has probably ever heard of, unless you have met me before and know who I am: a company called Sonian, which was very early in the cloud space. It was email archiving, so it wasn't anything particularly mind-blowingly interesting, because it's compliant email archiving, but what was interesting is that we were really early in the cloud space, and a lot of the tools that people use today just didn't exist for managing cloud servers. It was 2008, 2009: the pretty early EC2 timeframe. How would you provision your EC2 instances back then? Maybe you used CFEngine, maybe you used Puppet. And actually, interestingly enough, that company, Sonian, was originally a Puppet shop, because Chef didn't exist yet. And there were a series of issues we ran into, technical capabilities that Puppet just couldn't deliver for us at the time. And again, that time being 2009, 2010, a lot of the very early Chef team, the founding team and early engineers, were working with us very closely to bootstrap our business on Chef, writing a lot of those original cookbooks that became community cookbooks. And so, my intro into Chef and the Chef community is a lot earlier than most, and I went a lot deeper with it just by nature of being so early into that space.

Corey: One of the things that struck me, despite not being a Chef aficionado myself, was, first, just how many people in the DevOps sphere were deeply tied into that entire ecosystem. And two, love or hate whatever the product, or company, et cetera, did, some of the most empathetic people I've ever met were closely tied to Chef's orbit. So, I have not publicly commented until now on Chef getting acquired, just because I'm trying to give the people who are in that space time to, I guess, I don't know if grieve is the right word, but it's important to me that I don't have a whole lot to say there, and it's very easy for me to say something that comes across as crass, or not well thought out, or unintentionally insulting to a lot of very good people. So, I'm sitting here on the sidelines watching it and more or less sitting this one out, but it's deeply affected enough people that I wanted to talk about it here.
It had a really, kind of, Vim versus Emacs feel to it, where you either were all in on it or not.

But the thing that I think Chef really brought me is not only leveling up my career, in a way that I would not be where I'm at today if it wasn't for that tool and that community, but just how genuine everyone was within that community, and the interactions that we had at conferences, at Chef conferences, DevOps conferences, and things of that nature, and even continuing the conversations online back before Slack, which is hard to even remember: when we were all on IRC, and we were in the Chef IRC channel, and it was a fantastic channel with a ton of people who would dive in and help you out on your Chef problems....


