

The Bike Shed
thoughtbot
On The Bike Shed, hosts Joël Quenneville and Stephanie Minn discuss development experiences and challenges at thoughtbot with Ruby, Rails, JavaScript, and whatever else is drawing their attention, admiration, or ire this week.
Episodes

Jul 12, 2022 • 49min
345: Fire Drill
Chris is getting ready to travel, and of course, Sagewell started the day with an incident, a situation, if you will...
Steph talks about books that are perfect for vacations and feels sufficiently scarred by the ongoing work of moving fixtures over to FactoryBot.
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Back to Basics: Boolean Expressions
Sarah Drasner tweet
Become a Sponsor of The Bike Shed!
Transcript:
STEPH: All right, I am now officially recording as well. Let me make sure my microphone is in front of my face.
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. So, hey, Chris, what's new in your world?
CHRIS: What's new in my world? Today is an interesting day. We are recording on a Friday, which is not normal for us, was normal for a long time and then stopped, but now it's back to being normal. But it's the morning, which is confusing. Also, I am traveling this evening. I leave on a flight going to Europe. So I'm going to do a red-eye, that whole thing. So I got a lot to pack into today, literally packing being one of those things.
And then this morning, because obviously, this is the way the world should play out, we started the day with an incident at Sagewell, a situation. Some code had gotten out there that was doing some stuff that we didn't want it to do. And so we had to sort of call in the dev team. And we all huddled together and tried to figure it out. Thankfully, it was a series of edge cases. It was sort of one of those perfect storms. So when this edge case happens in this context, then a bad thing could happen. Luckily, we were able to review the logs; nothing bad happened.
While I'm unhappy that we had this situation play out... basically, it was a caching thing, just to throw that out there. Caching turns out to be very hard. And the particular way it played out could have manifested in behavior that would have been not good in our system, or an admin would have inadvertently done something that would have been incorrect.
But on the positive side, we have an incident review process that we've been slowly incubating within the team. One of our team members introduced it to us, and then we've been using it on a few different cases. And it's really great to just have a structured process. I think it's one of those things that will grow over time. It's very simple: what's the timeline of what happened? What's the story as to why it happened and why it wasn't caught earlier? What are the actions that we're going to take? And then what's the appendix? What's the data that we have around it? And so it's really great to just have that structure to work within.
And then similarly, as far as I can tell, the first even observable instance of this behavior in our system was yesterday morning. We saw it, started to respond to it, saw one more. We were able to chase it down in the logs. Overall, it was the combination of the alerting that we have in Sentry and the way in which we respond to the alerting in Sentry, which I think is probably the most critical part. Datadog is our log metrics tool right now. So we're able to go through Datadog, and we have Lograge configured to add more detail to our log lines.
And so we're able to see a very robust story of exactly what happened and ask the question, did anything actually bad happen? Or was it just possible that something bad could happen? And it turns out just possible. Nothing actually happened. We were able to determine that. We were even able to get a more detailed picture of who were all the users who potentially could have been impacted. Again, I don't think there was any impact.
But all total, it was both a very stressful process, especially as I'm about to go on vacation. It's like, oh cool, start to the day where I'm trying to wrap up things, and instead, we're going to spend a couple of hours chasing down an incident. But that said, these things will happen. The way in which we were able to respond, the alerting and observability that we had in place make me feel good.
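A minimal sketch of the kind of Lograge configuration described above: structured, per-request log lines with extra detail that a log tool like Datadog can index. The initializer path and the custom fields are illustrative assumptions, not Sagewell's actual setup.

```ruby
# config/initializers/lograge.rb
# Illustrative only; the custom field names here are assumptions.
Rails.application.configure do
  config.lograge.enabled = true

  # Emit single-line JSON events so each field can be indexed by the log tool.
  config.lograge.formatter = Lograge::Formatters::Json.new

  # Append extra detail to every request's log line.
  config.lograge.custom_options = lambda do |event|
    {
      time: Time.now.utc.iso8601,
      params: event.payload[:params].except("controller", "action"),
      # user_id would be set via append_info_to_payload in ApplicationController.
      user_id: event.payload[:user_id]
    }
  end
end
```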
STEPH: I like the incident structure that you just laid out. That sounds really nice in clarifying what happened when it happened in the logs. And the fact that you're able to go through and confirm if anything really bad happened or not is really nice. And I was also just debating this is one of those things, right? Right when you're about to go on vacation, that's when something's going to break. And that's like, is that good or bad? Is it good that I was here to take care of it right before, or is it bad? Because I'd really like to not be here to take care of it. [laughs] You may have mixed feelings. I have mixed feelings.
CHRIS: I think I'm happy. Unsurprisingly, this exists in one of the most complex parts of our codebase. And it involves caching. And I remember when we introduced the caching, I looked at it, and I was like, hmm, we have a performance hotspot that involves us making a lot of requests to an external system. And so we thought about it a little bit, and we were like, well, if we do a little bit of caching here, we can actually reduce that down from seven calls down to one over external HTTP. And so okay, that seems to make sense.
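As a rough illustration of that pattern, not the actual Sagewell code, a read-through cache around an external HTTP call can look like this; the class name, cache key, and expiry below are hypothetical.

```ruby
# Hypothetical client wrapping an external API call in a read-through cache.
class ExternalAccountsClient
  CACHE_TTL = 5.minutes

  def accounts_for(user)
    # Repeated lookups within the TTL hit the cache instead of making another
    # HTTP round trip, collapsing several calls per request into one.
    Rails.cache.fetch(["external-accounts", user.id], expires_in: CACHE_TTL) do
      fetch_accounts_over_http(user)
    end
  end

  private

  def fetch_accounts_over_http(user)
    # The real implementation would issue the HTTP request and parse the response.
  end
end
```

The trade-off Chris goes on to describe is exactly the risk with this kind of cache: the cached value can drift from reality in ways that only surface under specific timing.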
We had a pull request. We did a formal review. And even I looked at the pull request where this was introduced initially, and my comments on it were like, yep, this all looks good. Makes sense to me. But it's caching-related. So let's be very careful and look very closely at it and determine if there's anything, but it's so hard to know. And in fact, the code that actually was at play here was introduced a month ago.
And interestingly, the observable side effect only occurred in the past two days, which we find very surprising. But again, it's this weird like, if A happens and then within a short period after that B happens...and so it's not quite a race condition. But it was something where a lot of stuff had to happen in a short span of time for this to actually manifest. And so, again, we were able to look through the logs and see all of the instances where it could have happened and then what did happen. Everything was fine, but yeah, it was interesting.
I feel actually good to have seen it. And I think we've cleared everything up related to it and been very proactive in our response to it so that all feels good. And also, this is the sort of thing we've done this a few times now where we've had what I would call lesser incidents. There was no customer-facing impact to this. Similarly, previous incidents, we've had no or very minimal customer-facing impact.
So at one point, we had a situation where we weren't processing our background jobs for a little while. So we eventually caught up and did everything we needed to. It just meant that something may not have happened in as timely a fashion as necessary. But there were no deep ramifications to that.
But in each of those cases, we've pushed ourselves to go through the incident process to make sure that we're building the muscle as a team to like, actually, when the bad one comes, we want to be ready. We want to have done a couple of fire drills first. And so partly, I viewed this as that because again, there was smoke, but no fire is how we would describe it.
STEPH: Nice. And that also makes sense to me how you were saying y'all introduced this about a month ago, but you were just now seeing that observable side effect. I feel like that's also how it goes. Like, you implement, especially with caching, some performance improvement, and then you immediately see that. And it's like, yay, this is wonderful.
And then it's not 'til some time passes that then you get that perfect storm of user interactions that then trigger some flow that you didn't consider or realize that could create an issue with that caching behavior. So yeah, that resonates. That seems right. All caching problems usually take about a month or two when you've just forgotten about what you've done. And then you have to go back in.
CHRIS: Yep. Yep, yep, yep. So now we've done the obvious thing, which is we've removed every cache from the system whatsoever. There are no caches anymore because it turns out we just can't be trusted with caches in any form whatsoever. ActiveRecord, we turned off caching, Redis we threw it out. No, I'm kidding. We still have lots of caching in the app. But, man, caching is so hard.
STEPH: I would love if that's in the project README where it says, "We can't be trusted with caches. No caches allowed." [laughs]
CHRIS: Yeah, we have not gone all the way to forbid caching within the application. It's a trade-off. But this does have that you get those scars over time. You have that incident that happens, and then forever you're like, no, no, no, we can't do X. And I feel like I'm just a collection of those. Again, I think we've talked about this in previous episodes. But consulting for as long as I did, I saw a lot of stuff. And a lot of it was not great. And so I basically just look at everything, and I'm like, urgh, no, this will be hard to maintain. This is going to go wrong. That's going to blow up someday.
And so, I'm having to work on trying to be a little more positive in my development work. But I do like that I have that inclination to be very cautious, be very pessimistic, assume the worst. I think it leads to safer code in general. There was actually a tweet by Sarah Drasner that was really wonderful. And it's basically a conversation between her and another developer. It's a pretend conversation. But it's like, "But why don't you like higher-order components?" And then it's [squints] "Well, in the summer of 2018, something bad happened, [takes a long drag of a cigarette] something very bad."
It's just written so well and captures the ethos just perfectly. Like, sit down. Let me tell you a tale of the time in 2018. [laughs] So I'll include a link to that in the show notes because she actually wrote it so well too. It's got like scene direction within a tweet and really fantastic stuff. But yeah, we'll allow some caching to continue within the app.
STEPH: That's amazing. So I was just thinking where you're talking about being more pessimistic versus optimistic. And there's an interesting nuance there for me because there's a difference in like if someone's pessimistic where if someone just brings up an idea and someone's like, "Nope, like, that's just not going to work," and they just always shoot it down, that level of being pessimistic is too much. And it's just going to prevent the team from having a collaborative and experimental environment.
But always asking the question of like, well, what's the worst that could happen? And what are the things that we should mitigate for? And what are the things that are probably so unlikely that we should just wait and see if that happens and then address it? That feels like a really nice balance. So it's not just leaning into saying no to everything. But sure, let's consider all the really bad things that could happen, make a plan for those, but still move forward with trying things out.
And I realized I do this in my own life, like when someone asks me a question around if there's something that we want to do that's a bit kind of risky. And the first thing I always think of is like, well, what's the worst that could happen? And I think that has confused people that I immediately go there because they think that I'm immediately saying no to the idea.
And so I have to explain like, no, no, no. I'm very intrigued, very interested. I just have to think through what's the worst that can happen. And if I'm okay with that, then I feel better about accepting it. But my emotional state, I have to think through what's the worst and then go from there.
CHRIS: Wow, it's a very bottom-up approach for your life planning there. [chuckles]
STEPH: Yep, I think that's, you know, it's from being a developer for so long. It has impacted now how I make other decisions. Good or bad? Who knows? Yeah, it turns out being a developer has leaked into my personal life. I've got leaky abstractions over here. So, good or bad? Who knows?
CHRIS: Leaky abstractions all the way down. Yeah, circling back to, like, I don't think I'm pessimistic per se. The way that I see this playing out often is there will be a discussion of an architectural approach, or there's a PR that goes up. And my reaction isn't no, or this has a known failure point; it is more of uh, this makes me uncomfortable. And it's that, like, I can't even say exactly why, and that's what makes it so difficult.
And I think this is a place that can be really complicated for communication, particularly between developers who have been around for a little bit longer and have done this sort of thing and have gathered these battle scars and developers who are a bit newer. Having that conversation and being like, um, I can't say exactly why. I can tell you some weird stories. I might not even remember the stories. Some of it just feeds into just like, does this code make me uncomfortable? Or does this code make me happy? And I tend towards wildly explicit code for these reasons.
I want to make it as clear as possible and match as close as possible to the words that we're saying because I know that the bugs hide in the weird corners of our code. So I try and have as few corners. Make very rounded rooms of code is a weird analogy that doesn't play, but here we go. That's what I do on this show is I make weird analogies.
Actually, we were working on some code that was dealing with branching conditional things. So we had a record which has a boolean value on it. So we've got true or false, and then we've got two states, and then we've got an enum with three states. So all total, we have six possible states. But as we were going through this conversation, I was pairing with another developer on the team. And I was like, something feels weird here. And I actually invoked the name of Joël Quenneville because much of the data structure thought that I had here I associate with work that Joël has done around Maybe and things like that.
And then also, my suggestion was let's build a truth table because that seems like a fun way to manage this and look at it and see what's true. Because I know that there are spots on this two-by-three grid that should never happen. So let's name that and then put that in the code. We couldn't quite get it to map into the data type, like into that Boolean in the enum. Because it's possible to get into those states, but we never should. And therefore, we should alert and handle that and understand, like, how did this even happen? This should never happen.
And so we ended up taking what was a larger method body with some of the logic in it and collapsing it down to very explicitly enumerate the branches of the conditional and then feed out to a method. Like, call a method that had a very explicit name to say, okay, if it's true and we're in this enum state, then it's bad, alert bad. And then the other case like, handle the good case.
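A sketch of what that refactor can look like, assuming Ruby 3's pattern matching. The model, enum values, and handler names are hypothetical; the point is that all six boolean-times-enum combinations are enumerated explicitly, and the cells of the truth table that should never occur alert loudly instead of disappearing into an else branch.

```ruby
# Hypothetical model and method names; a sketch of enumerating the full
# boolean x enum truth table rather than hiding branches in nested ifs.
class SubscriptionStateHandler
  def call(record)
    # record.active? is the boolean; billing_status is an enum of
    # "pending" / "paid" / "failed", so there are six combinations in total.
    case [record.active?, record.billing_status]
    in [true, "pending"] then handle_awaiting_first_payment(record)
    in [true, "paid"]    then handle_in_good_standing(record)
    in [true, "failed"]  then handle_retry_payment(record)
    in [false, "failed"] then handle_lapsed(record)
    in [false, "pending"] | [false, "paid"]
      # These cells of the truth table should be unreachable; say so loudly.
      alert_impossible_state(record)
    end
  end

  private

  def handle_awaiting_first_payment(record); end # each handler does the real work
  def handle_in_good_standing(record); end
  def handle_retry_payment(record); end
  def handle_lapsed(record); end

  def alert_impossible_state(record)
    Sentry.capture_message("Impossible subscription state for record ##{record.id}")
  end
end
```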
And I was very happy with what we refactored down to because this is another one of those very complex parts of our code. Critical infrastructure-y is how I would describe it. And so, in my mind, it was worth the I'm going to go with pathological refactoring that we got to there. But yes, I was channeling Joël in that moment. I'm very happy to have had many conversations with him that help me think through these things.
STEPH: That's awesome. Yeah, those truth tables can be so helpful. There's a particular article that, of course, Joël has written that then describes how a truth table works and ways that you can implement it into your habits. It's called Back to Basics: Boolean Expressions. I will be sure to include a link in the show notes.
CHRIS: But yeah, I think that summarizes my day and probably the next couple of days as I prepare for an adventure over to Europe and chat about developer spidey sense. But yeah, what's new in your world?
STEPH: Yeah, that's a big day. There's a lot going on. Well, I actually want to circle back because you mentioned that you're packing and you're going on this trip. And I'm curious, do you have any books queued up for vacation?
CHRIS: I do, yeah. I'm currently reading Elantris by Brandon Sanderson. Folks might be aware of his work from the highest-funded Kickstarter of all time, which was absurd. Did you see this happen?
STEPH: I don't think so, uh-uh.
CHRIS: He did this fun, cheeky little Kickstarter. The video was sort of a fake around...oh, it almost sounded like he might be retiring or something like that. And then he's like, JK, I wrote five new books. And so the Kickstarter was for those books with different tiered packages and whatnot. I think he got just the right viral coefficient going on. And apologies for the spoiler if anyone's not seen the video, but it's been out there for a while.
So he wrote some books, and that's what the Kickstarter is for. You get some books. You sort of join a book club, and you'll get one a quarter. A million dollars seems like that will be a bunch for that. That'd be great. If he raised a million dollars, that'd be amazing. $40 million. Four-zero million dollars. [laughs] I'm just watching it play out in real-time as well. It just skyrocketed up. The video, I think, was structured just right. He got it onto the...it was on Reddit and Twitter and just bouncing around, and people were sharing it. And just everything about it seemed to go perfectly. And yes, the highest-funded Kickstarter of all time, I believe, certainly within the publishing world.
But yeah, Brandon Sanderson, prolific author, and his stuff ends up just being kind of light and fun. And so I was reading Elantris for that. It's been a little bit slower to pick up than I would like. So I'm now in the latter half. I'm hoping it'll go a little bit more quickly and be...I'm just kind of looking for a fun read, some fantasy thing to go on an adventure.
But as the next book, I downloaded a second one just to make sure I'm covered. I have a book by John Scalzi, who's a sci-fi, fantasy, more on the sci-fi end of the spectrum. And I've read some of his other stuff and enjoyed it. And this particular book has a very consistent set of reviews. I've read the reviews a few times. And everybody who reviews it is just like, "This isn't the greatest book I've ever read, but man was it a fun ride." Or "Yeah, no, best book? No. Fun book? Yes." And just like, "This book was a fun ride. This was great." And I was like, perfect. That is exactly what I'm looking for on a European vacation.
The book is called The Kaiju Preservation Society, which also plays on monsters, Pacific Rim Godzilla. Kaiju, I think, is the word for that category of giant dinosaur-like monster. And so it's the Kaiju Preservation Society, which, I don't know, means some stuff, and I'm going to go on a fun adventure. So yeah, those are my books.
STEPH: Nice. I've got one that I'm reading right now. It's called Clementine: The Life of Mrs. Winston Churchill, written by Sonia Purnell. And Sonia Purnell tends to focus on female historical figures. And so it's historical fiction, which is a sweet spot for me. The only thing I'm debating on is because I'm realizing as I'm reading through it, I'm questioning, okay, well, what's real and what's not? Because I don't want to be that person that's like, did you know? And then, I quote this fictional fact about somebody that was made up for the novel. [laughs]
So I'm realizing that maybe historical fiction is fun, but then I'm having to fact-check all the things because then I'm just curious. I'm like, oh, did this really happen, or how did it go down? So it's been pretty good so far. But then it makes me wish that historical fiction novels had at the back of them they're like, these are all the events that were real versus some of the stuff that we fictionalized or added a little flair to. I'm in that interesting space.
I also like how you highlighted that you chose a fun book. I was having a conversation with a colleague recently about downtime. And like, do you consume more tech during downtime? Like, are you actively looking for technical blog posts or technical books to read or podcasts, things like that? And I was like, I don't. My downtime is for fun. Like, I want it to be all the things that are not tech.
Maybe some tech sneaks in there here and there, but for the most part, I definitely prioritize stuff that's fun over more technical content in my spare time, which has taken me a little while to not feel guilty about. Earlier in my career, I definitely felt like I should be crunching technical content all the time. And now I'm just like, nope, this is a job. I'm very thankful that I really enjoy my job, but it's still a job.
CHRIS: It is an interesting aspect of the world that we work in where that's even a question. In my previous life as a mechanical engineer, the idea that I would go home and read about mechanical engineering...I could attend a conference, but I would do that for very particular reasons and not because, like, oh, it's fun. I'll go meet my friends. For me, this was a big reason that I moved into tech because I am one of those folks who will, like, I will probably watch a video about Remix in particular because that's my new thing that I like to play around with and think about.
But it needs to be a particular shape of thing I've found. It needs to be exploratory, puzzle-y. Fun code, reading, learning work that I do needs to be separated from my work-work in a certain way. Otherwise, then it feels like work, then it is sort of a drudgery. But yeah, my brain just seems to really like the puzzle of programming and trying to build things. And being able to come into a world where people share as much as they do blogs and conference talks and all of that is utterly fantastic.
But it is a double-edged sword because I 100% agree that the ability to disconnect to, like, work a nine-to-five and then go home at the end of the day. Yeah, go home, you know, because you remember when we went to an office and then we would go home afterwards? I have to commute every once in a while into the city and --
STEPH: You mean go downstairs or go to another room? That's what you mean? [laughs]
CHRIS: I used to commute every day, and it took a lot of time. And now when I do it, I feel that so viscerally because I'm like, it's just a lot easier to just walk to my office in my house. But yes, I 100% I'm aligned to that like, yeah, no, you're done with work for the day, walk away. That's that.
And learning a new technology or things like that, that's part of the job. There shouldn't be the expectation that that just happens. There's continuing education in every other field. It's like, oh, we'll pay for your master's degree so you can go learn a thing. That's the norm in every other...not in every other industry but many, many, many industries.
And yet the nature of our world, the accessibility of it, is one of the most wonderful things about it. But it can be a double-edged sword in that if there are the expectations that, oh yeah, and then, of course, you're going to go home and have side projects and be learning things. Like, no, that is an unreasonable expectation, and we got to cut that off. But then again, I do do that. So I'm saying two things at the same time, and that's always complicated.
STEPH: But I agree with what you're saying because you're basically respecting both sides. If people enjoy this as a hobby, more power to you; that's great. This is what you enjoy doing. If you don't want to do this as a hobby and instead respect it as a job, then that's great too. There can be both sides, and no side should feel guilty or judged for whichever path they pursue.
And I absolutely agree, if there are new skills that you need to learn for a job, then there should be time that's carved out during your work hours that then you get to focus on those new skills. It shouldn't be an expectation that then you're going to work all day and then spend your evening hours learning something else. And same for interviews; there shouldn't be a field that says, "Hey, what are your side projects?" Or at least that should not be an important part of the interview.
There should be an alternative to be like, "Or what work code do you want to talk about?" Or something else that's more in that nine-to-five window that you want to talk about. That way, there's a balance between like, sure, if you have something that you want to talk about on the side, great, but if not, then let's focus on something that you've done during your actual work hours because that's more realistic.
CHRIS: I do think there's an interesting aspect at play because the world of development moves so rapidly and because it's constantly changing. And to frame it differently, I don't think we've got this thing figured out. And so many people lament how quickly it changes and that there's a new framework every other week. And there's a bit of churn that is perhaps unnecessary.
But at the same time, I do not feel like as a community, as a working population, that we're like, yeah, got it, crushed it. We know how to make great software, no question about it. It's going to be awesome. We're going to be able to maintain it for forever, don't even worry about it. New feature? We can get that in there. They're actually still pretty rare.
So we need to be learning, and evolving, and exploring new techniques. I think the amount of thinking is probably good mostly in the development world. But organizations have to make space for that with their teams. And thoughtbot obviously does that with investment days. That's just such a wonderful structure that embraces that reality and also brings happiness, and it's just a pleasant way to work. And frankly, my team does not have that right now.
We do the crispy Brussels snack hour, which also now has a corresponding crispy Brussels work lunch, which is one week we think about it, and the next week we do the thing. We're trying to make space for that. But even that is still more intentional and purposeful and less exploratory and learning. And so it's an interesting trade-off. I deeply believe in this thing, and also, the team that I'm leading isn't doing it right now. Granted, we're an early-stage startup. We got to build a bunch of stuff. I think that's fine for right now. But it is a thing that...again, I'm saying two things at the same time, always fun.
STEPH: Well, and there might be a nice incremental approach to this as well. So thoughtbot has the entire day, and maybe it's less than a full day. So perhaps it's just there's an hour or two hours or something like that where you start to introduce some of that self-improvement time and then blossom out from there. Because yeah, I understand that not all teams may feel like they have the space for that.
But then I agree with everything else you said that it really does improve team morale and gives people a space to then be able to get to explore some of those questions that they had earlier. So then they don't feel like they have to dedicate some weekend time or off-hours time to then look into a question.
And I admit, I'm totally guilty too. I am that person that then I've worked extra hours, but it's because, like you said, if there's a puzzle that my brain is stuck on and I just feel the need to get through it. But then I look at that as am I doing this because I want to? Yes. Okay, then as long as I'm happy and I don't feel like this is increasing any concern around burnout, then I don't worry about it.
MIDROLL AD:
Debugging errors can be a developer’s worst nightmare… but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers, that can actually help you cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM enables developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps and includes modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
STEPH: Circling back to your original question about what's going on in my world, you mentioned scarring earlier. I feel sufficiently scarred [laughs] in regards to still working on moving fixtures over to FactoryBot. This week has really confirmed that fixtures don't trigger a lot of the callbacks, the model callbacks that exist. And so this really means that you can just create bad data that your application wouldn't actually allow you to create.
So there are tests that are exercising behavior that should never exist. And then porting that over to FactoryBot highlights that, because as soon as I move that record over and try to create it or do something with it, the app and the tests do the right thing and let me know, saying no, no, no, we've added validations; you can't do that anymore. That has been grinding my gears in terms of trying to translate it. Because then I have to really dive into the code to understand it. And the goal here is to stay as high level as possible and not have to dive in too much. But then that means that I do have to dive in and understand more.
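A minimal, hypothetical example of the mismatch being described here: fixtures are inserted straight into the database, skipping validations and callbacks, while FactoryBot's create goes through the normal persistence path and refuses to build the impossible record.

```ruby
# Hypothetical model, fixture, and factory for illustration only.

# app/models/user.rb
class User < ApplicationRecord
  validates :email, presence: true
  before_create :assign_default_role

  private

  def assign_default_role
    self.role ||= "member"
  end
end

# test/fixtures/users.yml loads happily even though it violates the validation
# and never runs the callback (email is missing and role stays nil):
#
#   orphaned_user:
#     name: No Email

# spec/factories/users.rb
FactoryBot.define do
  factory :user do
    name { "No Email" }
  end
end

# Porting the same setup to FactoryBot surfaces the problem immediately:
#
#   FactoryBot.create(:user)
#   # => ActiveRecord::RecordInvalid: Validation failed: Email can't be blank
```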
So this has frankly just been one of those times in my career where you just kind of have to slog through the work. It's important work to be done. It'll be great once it's done. But it's a painful process. And the best way that I've found to make it more enjoyable is to be in heavy communication with Joël, who's on the project with me, just so if we get stuck on something, then we can chat with each other.
And then also there's one file that's particularly gnarly. And so we moved over one test. We were successful, which felt great because then we could at least document like, okay, when we come back to this, at least we have one example that highlights the wonkiness that we ran into. But we've decided, okay, we're done with that file. We're going to take a break. There's a lot there, but we're going to move on and give ourselves a break and do some of the easier ones, and then we'll circle back to the harder one.
Which was, I think, just a bit of bad luck in terms of, like, as we're going down the list, that happened to be like the gnarliest one, and it was like the first one that Joël picked up. And so I'm going through a couple of files, and Joël is like, "What? [laughs] How are you making progress?" And we realize it's just because that file, in particular, is very hard to find all the mystery guests and then to move everything over.
Finding a positive note through all of the cruft, I will say this is helping with some of my code sleuthing skills. So as I am running into these problems and then looking for mystery guests, I'm noticing ways that I can then, as quickly as possible, try to triage and identify as to why one test doesn't match another test. Some of it is more specific to the application setup, so it won't be as applicable to future projects. But then some other areas have been really helpful.
Like, I'm using caller a lot more to understand, like, I know this is getting called, but I don't know who's calling you. So I can put in a line that basically outputs like, show me your stack traces to how you got here. So that's been really nice as well. So it has improved some of my code sleuthing skills and also my spidey sense in terms of it's typically mystery guests.
Like when a test isn't passing, it's because fixtures are creating extra data that are getting pulled in when there are queries that are being run. But they're not explicitly referenced in the test setup itself. So that's typically then where I start is looking for what record looks relevant to this test that I haven't pulled over to my test setup.
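The caller trick is as small as it sounds; a hypothetical example is dropping a line like this into the method you know is being hit, so the test output shows every path that reaches it.

```ruby
# Temporary debugging only; the method name is a placeholder.
def apply_discount(order)
  # Print the first few frames of the call stack so the test output shows
  # exactly who invoked this method and from where.
  puts caller.first(10)

  # ...real method body continues...
end
```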
CHRIS: I appreciate you finding the silver lining, the positive bit of this. Because as you're describing, the work that you're doing sounds like I think you use the word slog, which seems like a very accurate term. But sometimes we have to do that sometimes for a variety of reasons. We end up either having to introduce new code or fix old code, but this is sometimes the work.
And this is something that I think you and I share about this show is we get to show all sides of the work. And the work can be glamorous and new. And oh, I've got this greenfield app that I'm building, and it's wonderful. Look at the architecture. And I know in the moment that I'm building someone else's legacy code three years from now. [laughs]
And so telling the other side of the story and providing that rounded point of view, because like, yeah, this is all part of it. Again, I don't believe that this is a solved problem, building robust software that we can maintain. And so yeah, you're doing the good work in there. And I thank you for sharing it with us.
STEPH: Thanks. Just don't use fixtures in your tests, I beg of you. Please don't do that to the legacy code that you're writing for future developers. [laughs] That is my one request.
CHRIS: And I will maybe add on to that, sparingly use callbacks. Maybe don't use them at all, and certainly don't use the combination because, my goodness, that'll lead you into some fun times. But yeah, just two small recommendations there.
STEPH: Oh, there's something else I wanted to share. I saw that Slack added a new audio feature that allows you to record the pronunciation of your name, which is the feature that I was so excited about when we added it to our internal tool called Hub at thoughtbot. And now Slack has it on their profile so that way you can upload the pronunciation. And then anyone looking at your profile can then listen to how to pronounce your name.
There are a couple of other features that they released, I think just in June, so about a month ago from the recording of today. [laughs] That's weird to say, but here we are. So I'll include a link in the show notes so folks can see that feature in addition to others, but I'm super excited.
CHRIS: Oh, that is nice. I also like all right, so Slack now has it. Hub now has it. But I don't have access to Hub anymore. And I don't have access to every Slack in the world yet. But here's my suggestion. All right, everybody, stick with me here. I want you to own a domain. I want you to have a personal site on it. And I want the personal site to include the pronunciation of your name.
I get that that's a big ask. And I get that there are other platforms that are calling to you, and you may be writing on those. But you know what? Just stand up a little site, just a little place on the internet that you own. And if it includes the pronunciation of your name, I will be forever grateful.
STEPH: I like this idea. I initially was taking your idea and immediately running with it as you were speaking it because then I wondered if everyone had their own YouTube channel. But I don't know how hard it is to create a YouTube channel. I am not a YouTube channeler, so I don't know what that looks like. [laughs] But not everybody will know how to purchase a domain. So that might be another approach.
CHRIS: I think it's pretty easy to do a YouTube channel. I'm conflating a couple of things. This is my basket of beliefs about people on the internet, but I kind of think everybody should own their own little slice of the internet. And so totally, YouTube is a place where the people make some stuff, make videos, put them on YouTube, absolutely. But ideally, you own something. I see a lot of people that are on YouTube, and that's it, and so their entire audience lives on YouTube. And if YouTube someday decides to change or remove them or say Medium as an example, Medium actually, I think, does a more interesting version of this where your identity kind of gets subsumed into Medium.
And I really think everybody should just have their own little, tiny slice of the internet that's there. It has their name that they own that no platform can decide; hey, we've shifted, and now your stuff is gone. Cool URIs don't change as they say, and that's what I want. And then yeah, if you can have the pronunciation of your name on there, that's extra nice.
Although I say that, and I don't know that I would do it because my name feels very obvious. One day someone was like, "Oh, how do you pronounce your last name?" I forget if I actually replied with the pronunciation. Or if I was like, "I need to know what options you're considering. I'm so interested because I've really only got the one." Maybe I'm anchored. Maybe I'm biased. [chuckles] I've been doing this for a while. But I really cannot think of another pronunciation of my name.
STEPH: You might hear another one that you really like, and you need to pivot.
CHRIS: Oh gosh.
STEPH: That's the point where you start pronouncing your name differently.
CHRIS: Wow, that would be a lot. And then, I could have a change log on my personal site where people can see this is the pronunciation, and this is what the pronunciation used to be.
STEPH: [laughs] I like this idea. I also like this idea that everybody has their own slice of internet land. I like this encouragement that you're providing for everyone.
On a slightly different note, there's a blog post that I'm really excited to talk about. It's written by Eric Bailey, who's a former thoughtboter. It's called The Optics of Pair Programming. And given how much pair programming that I'm doing, especially with Joël on the current project, it was a really wonderful read.
And it also helped me think about pairing from a different perspective because we do have a very strong pairing culture at thoughtbot. So there's a lot of nuance, especially social nuances that can go along with when you invite someone to pair with you that I had not considered until I read this wonderful post by Eric. And we'll be sure to include a link in the show notes.
But to provide an overview, essentially, Eric shares that given coming from thoughtbot where we do have a very open approach to pairing where pairing sessions are voluntary and then also last as long as the problem will last...but then when you're at a new company, you could experience pushback if you're inviting someone to pair and then to consider why that pushback may exist. And some of the high-level areas that Eric highlighted are power dynamics, assessment, privacy, and learning styles.
So to dive into each of some of those, there's a power dynamics of it's important to consider who's offering to pair. So if I've joined a team as a consultant, there may be a power dynamic there that someone is feeling where their team is paying for my time. So they may feel like they can't say no if I offered to pair. They feel like they need to say yes to the invitation, even if they don't really want to.
Or probably a more classic example would be like, what if your boss wants to pair or someone that's just more senior than you? Then it could leave you feeling like, well, I can't say no to this person, can I? Which yes, you totally can say no to that person, but it may leave you in a place where you feel like you can't. And so, it puts you in this sort of uncomfortable and powerless position.
The other one is assessment, so offering to pair with someone could feel like you are implying that you want to assess their skills or that you're implying that they're not up to the task and therefore they need your help. So then that could also place someone in an uncomfortable position. There's also privacy. So someone who isn't confident may not want someone to observe their behavior or observe how they're working. It could make them feel really anxious, which then I love that Eric points this out.
Ironically, pairing is really good at addressing that lack of confidence because then you get to see how other people work through their problems or how they think, or they may also have some anxiety. Or it just helps you become more comfortable in talking and thinking through with other people. So that one is a tough one where it's hard to get over that initial hurdle. But actually, the more you pair, then the less anxious you'll feel when you pair.
And then there's also learning styles, because pairing really involves a lot of deep thinking but also communicating in real time. And it can be hard to balance both of those, and it's just not as effective for some people. So I know that even as much as I really enjoy pairing, I just need to sit with code on my own sometimes. I need to think about it. I need to run it; I need to look at it.
So it's really nice to talk with someone. But then I also need that alone time to then just think through it on my own because I can't have that same deep focus if I'm also worried about how the other person is experiencing that session because then my mental energy is going towards them.
So that covers a number of the social nuances that can be included or running through someone's mind when you extend an invitation to them to pair. And it really resonated with me the areas that Eric highlights in this blog post. He also talks about a couple of strategies, which I'd love to dive into as well. But I'm going to pause here and see what thoughts you have.
CHRIS: Yeah, I love this post. And it got me thinking about pairing and the broader human backdrop of all of the processes and workflows that we have. Everything he highlighted about pairing feels true. Although similar to you and to Eric, I've worked in a context where pairing was a very natural, very regular part of the work and sort of from the very top-down. And so everyone pairing between developers of any different level or developers and designers or really anyone in the...it was just such a part of how we worked that no one really questioned it or at least not after the first couple of weeks.
I imagine joining thoughtbot those first weeks; you're like, oh God. As I shared, I think in the previous episode that we recorded, my pairing interview was with Joe Ferris, the CTO of thoughtbot, [laughs] writing a book about good and bad code. And I was like, I don't know what anything is here but very quickly getting over that hurdle. And having that normalizing experience was actually really great, and then have been comfortable with it since. But the idea that there are so many different social dynamics at play feels true.
And then as I think about other things, like stand-up is one that I think of as this very simple this is a way to communicate where we're at. And where necessary or where useful, allow people to interject or step in to say, "Oh, let me help you get unblocked there or whatever it is." But so often, I see stand-up being a ritual about demonstrating that you are, in fact, doing work, which is like, here's what I did yesterday. I don't know if it's useful. Then mention that you're working on this project. But the enumeration of look, obviously, work was done by me. You can see it; here are the receipts. It's very much this social dynamic at play.
And retro is another one where like, if retro is very much owned by one voice and not a place that change actually happens where people feel safe airing their opinions or their concerns, then it's going to be a terrible experience. But if you can structure it and enforce that it is a space that we can have a conversation, that everyone's voice is welcome and that real change happens as a result of, then it's a magical tool for making sure we're doing the right things. But always behind these are the people, and feelings, and the psychology at play. And so this was just such an interesting post to read and ruminate on that a little bit more.
STEPH: Yeah, I agree, especially with a comment that you made about those daily syncs where I really just want to focus on today and what you have that you're blocked on. So it's a really nice update in case there are any cross-collaboration opportunities. That's really what I'm looking for in a daily update. And so I appreciate when people don't go through a laundry list of what they did yesterday because it's like, that's great. But then, like you said, it's just like you're trying to prove here's what I've done, and I trust you; you're working. So just let me know what you're doing today, friend.
So Eric does a wonderful job of also including some strategies for ways that then you can address some of these concerns and then how there may be some extra anxiety that's increased when you're inviting somebody to pair. There are some wonderful strategies. I'll let folks read through the blog post itself.
There are a couple in particular that came to mind for me because I was then self-assessing how do I tend to approach pairing with someone? And some ways that I want them to feel very comfortable with that experience. And there's a couple. There's one where I recognize that I need to build trust with each person. I can't just go on to a team and expect everyone to know that I have good intentions and that I'm going to do my best to be a fun, helpful pairing partner, and that it's not a zone of judgment. And that has to be cultivated with each person.
Because especially as a consultant, if I'm joining a team, the people who hired me are not necessarily the people that I'm working with. It's someone that's probably in leadership or management that has then brought on thoughtbot. And so then the people that I'm working with they don't know me, and they don't know what my pairing style is going to be. So looking for ways to build trust with each person and then also inviting them or asking for help myself.
So there's a bit of vulnerability that has to be shown to build trust with someone to say," Hey, I'm stuck on a problem. I would love a second set of eyes. Would you be willing to help me out with this?" So then that way, they're coming in to help me initially versus I'm going in and saying, "Hey, can I help you?" I have found that to be an effective strategy.
And there's one that I do really want to talk about, and that's not everyone is going to pair well together. Like, you may find someone who always leaves you feeling just stressed or demoralized. And while it's important to consider your role and why that's true, that does not mean it's your fault and necessarily your problem to fix.
So similar to having to manage up, you may need to coach the person that you're pairing with in ways that help you feel comfortable pairing. But if they don't listen to your requests and implement any of that feedback, then just don't pair with that person. That is a very fine option to recognize people that are not receptive to your needs and, therefore, not someone that you need to then force into being a great pairing buddy.
And I emphasize that last one because it took me a little while to become comfortable with that and accepting that it wasn't my fault that I wasn't having a great pairing session with people. Similar to when I'm learning from someone that if someone is explaining something to me and they're making me feel inadequate while they're explaining it to me, that's not necessarily my fault. Like, I used to internalize that as like, oh, I just can't get this.
But I am now a very staunch believer in if you can't explain it to me in a way that I understand, then that's probably more on you than on me. And that has also taken me time to just really accept and embrace. But once you do, it is so freeing to realize that if someone's explaining a concept and you're still not getting it, it's like, hey, how can we try harder together versus you just making me try harder?
CHRIS: I like that right there of like, if I don't understand this, it may actually be you, not me, or something to that effect. Let's get that on a bumper sticker and put that in The Bike Shed store so that everybody can buy it and put it on their cars or at least just us. But yeah, that starting from the bottom sometimes it's just not going to work great. There are even...I think what you're describing sounds a little more complicated, individuals who are personally not great at communicating or pairing or things like that. And that's going to happen.
We're going to run into folks that...pairing is communication. That's just the core of it, and some folks, that may not be their strongest suit. But I think there's another category of just like different working styles. And whereas I might...judge is such a heavy word, but I'm going to use it. I might judge someone who is not doing a great job at communicating to someone else, or understanding their point of view, or striving to do that, or taking feedback. Like, those are not great things.
Whereas there may just be two different development styles or backgrounds, or there are other reasons that they may actually not be an ideal fit. That said, I have definitely seen almost every variation of pairing work at some point. Like, when I was very early on in my career pairing with folks that are very senior, I didn't get most of it, but I got some stuff. And then folks that are very much on the same level, or folks that have a deep knowledge in a framework, codebase, language, whatever, and folks that are new to it but have a different set of experiences.
Basically, every version of that, I found that pairing is actually an incredibly powerful technique for knowledge sharing, for collaboration, for all of that. So although there are rare cases where there might be some misalignment, in general, I think pairing can work.
I do think you hit on something earlier of there are certain folks that are more private thinkers, is how I would describe it, where thinking out loud is complicated for them. I'm very much someone who talks. That's how I figure out what I think is I say stuff. And I'm like, oh, I agree with what I just said. That's good. But I find I actually struggle. There's something I think of...maybe I'm just a loudmouth is what I'm hearing as I say it, but that is how I process things. Other folks, that is not true.
Other folks, it's quite internal, and actually trying to vocalize that or trying to share the thought process as they're going may be uncomfortable. And I think that's perfectly reasonable and something that we should recognize and make space for. And so pairing should not be forced upon a team or an individual because there are just different mindsets, different ways of thinking that we need to account for.
But again, the vast majority of cases...I've seen plenty of cases where it's someone's like, "I don't like to pair. That's not my thing." And it's actually that they've had bad experiences. And then when they find a space that feels safe or they see the pattern demonstrated in a way that is collegial, and useful, and friendly, then they're like, oh, actually, I thought I didn't like pairing. I thought I didn't like retro. I thought I didn't like stand-up. But actually, all of these things can be good.
STEPH: Yeah, absolutely. It's a skill like anything else. You need to see value in it. And if you haven't seen value in it yet or if it's always made you anxious and uncomfortable, then it's something that you're going to avoid as much as possible until someone can provide a valuable, positive experience around how it can go.
I'm going to pull back the curtains just a little bit on our recording and share because you've mentioned that you are very much you think out loud, and that's how you decide that you agree with yourself. And I think already at least twice while we've been recording this episode, I have started to say something, and I'm like, no, wait, I don't agree with that and have backed myself up.
CHRIS: [laughs]
STEPH: And I'm like, no, I just thought through it; I'm going to cancel it out, [laughs] and then moved in a different direction. So I, too, seem to be someone that I start to say things, and I'm like, oh, wait, I don't actually agree with what I just said [laughs], so let's remove that.
CHRIS: Yep. You've described it as Michael Scott-ing on a handful of different episodes or maybe things that were cut from episodes. But where you start a sentence and then you're like, I don't know where I was going to end up there. I hoped I'd figure it out by the end, but then I did not get there. And yeah, I think we've all experienced that at various times.
STEPH: That’s some of my favorite advice from you is where you've been like, just lean into it, just see where it goes. Finish it out. We can always take it out later. [laughs] Because I stop myself because I immediately start editing what I'm trying to say and you're like, "No, no, just finish it, and then we'll see what happens." That's been fun.
CHRIS: This is how you find out what you think. You say it out loud, and then you're like, never mind. That was ridic –
STEPH: [laughs]
CHRIS: I do. Actually, now I'm thinking back, and I have plenty of those where I'll say a thing, and I'm like, nope, never mind, send that one back. [chuckles] As an aside, so we do this thing where we host a podcast, and we get to talk. But we're both now describing the pattern where we'll start to say something, and we’ll be like no, no, no, actually, not that. And I think, dear listeners out there, you probably don't hear any of this, the vast majority of it, because we have wonderful editors behind the scenes, Thom Obarski for many years, and now Mandy Moore, who's been with us for a while.
And so once again, thank you so much to the editor team that allows us to, I think, again, feel safe in this conversation that we can say whatever feels true and then know that we'll be able to switch that around. So thank you so much to the editors who help us out and make us sound better than we are.
STEPH: Yeah, that has made a big difference in my capabilities to podcast. If we were doing this live, ooh goodness, this might be a whole different, weird show. [laughs]
CHRIS: I mean, the same is true for code, right? I deeply value the ability to make an absolute mess in my local editor and have nine different commits that eventually I throw two out. And then I revert that file, and then eventually, the PR that I put up that's my Instagram selfie. That's like, I carefully curated this, but what's behind the scenes it's just a pile of trash. So yeah, the ability to separate the creation and the editing that's a meaningful thing to have in life.
STEPH: Oh, I can't unsee that now. [laughs] A pull request is now the equivalent of that curated Instagram selfie. That is beautiful. [laughs]
CHRIS: To be clear, I don't think I've ever taken an Instagram selfie. But I get the idea, and I felt like it was an analogy that would work. Again, I try out analogies on this show, and many of them do not stick. But I think that one is all right.
STEPH: It might even go back to pairing because then you've got help in taking that picture. So hey, you're making a mess with somebody until you get that right perfect thing, and then you push it up for the world to see. So safe spaces for all the activities, I think that's the takeaway. On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at hosts@bikeshed.fm via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

Sponsored By: Airbrake: Deploy fearlessly and fix bugs faster with Airbrake Error & Performance Monitoring. Airbrake notifiers are available for all major programming languages and frameworks, and install in minutes, with an open-source SDK-based install and near-zero technical debt. Spend less time tracking down bugs and more time developing. Visit Frictionless error monitoring and performance insight for your app stack.

Support The Bike Shed

Jun 28, 2022 • 39min
344: Spinner Armageddon
Steph has an update and a question wrapped into one about the work that is being done to migrate the Test::Unit tests over to RSpec.
Chris got to do something exciting this week using dry-monads. Success or failure?
This episode is brought to you by BuildPulse. Start your 14-day free trial of BuildPulse today.
Bartender
dry-rb - dry-monads v1.0 - Pattern matching
alfred-workflows
Raycast
ruby-science
Inertia.js
Remix
Become a Sponsor of The Bike Shed!
Transcript:
AD: Flaky tests take the joy out of programming. You push up some code, wait for the tests to run, and the build fails because of a test that has nothing to do with your change. So you click rebuild, and you wait. Again. And you hope you're lucky enough to get a passing build this time.
Flaky tests slow everyone down, break your flow, and make things downright miserable.
In a perfect world, tests would only break if there's a legitimate problem that would impact production. They'd fail immediately and consistently, not intermittently. But the world's not perfect, and flaky tests will happen, and you don't have time to fix all of them today. So how do you know where to start?
BuildPulse automatically detects and tracks your team's flaky tests. Better still, it pinpoints the ones that are disrupting your team the most. With this list of top offenders, you'll know exactly where to focus your effort for maximum impact on making your builds more stable. In fact, the team at Codecademy was able to identify their flakiest tests with BuildPulse in just a few days. By focusing on those tests first, they reduced their flaky builds by more than 68% in less than a month!
And you can do the same because BuildPulse integrates with the tools you're already using. It supports all of the major CI systems, including CircleCI, GitHub Actions, Jenkins, and others. And it analyzes test results for all popular test frameworks and programming languages, like RSpec, Jest, Go, pytest, PHPUnit, and more.
So stop letting flaky tests slow you down. Start your 14-day free trial of BuildPulse today. To learn more, visit buildpulse.io/bikeshed. That's buildpulse.io/bikeshed.
STEPH: What type of bird is the strongest bird?
CHRIS: I don't know.
STEPH: A crane.
[laughter]
STEPH: You're welcome. And on that note, shall we wrap up?
CHRIS: Let's wrap up.
[laughter]
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world?
STEPH: Hey, Chris, I saw a good movie I'd like to tell you about. It was just over the weekend. It's called The Duke, and it's based on a real story. I should ask, have you seen it? Have you heard of this movie called The Duke?
CHRIS: I don't think so.
STEPH: Okay, cool. It's a true story, and it's based on an individual named Kempton Bunton who then stole a particular portrait, a Goya portrait; if you know your artists, I do not. But he stole a Goya portrait and then essentially held it for ransom because he was a big advocate that the BBC News channel should be free for people that are living on a pension or that are war veterans because then they're not able to afford that fee. But then, if you take the BBC channel away from them, it disconnects them from society. And it's a very good movie. I highly recommend it. So I really enjoyed watching that over the weekend.
CHRIS: All right. Excellent recommendation. We will, of course, add that to the show notes mostly so that I can find it again later.
STEPH: On a more technical note, I have a small update, or it's more of a question. It's an update and a question wrapped into one about the work that is being done to migrate the Test::Unit tests over to RSpec. This has been quite a journey that Joël and I have been on for a while now. And we're making progress, but we're realizing that we're spending like 95% of our time in the test setup and porting that over, specifically because we're mapping fixture data over to FactoryBot, and we're just realizing that's really painful. It's taking up a lot of time to do that.
And initially, when I realized we were just doing that, we hadn't even really talked about it, but we were moving it over to FactoryBot. I was like, oh, cool. We'll get to delete all these fixtures because there are around 208 files of them. And so that felt like a really good additional accomplishment to migrating the test over.
But now that we realize how much time we're spending migrating the data over for that test setup, we've reevaluated, and I shared with Joël in the Slack channel. I was like, crap. I was like, I have a bad idea, and I can't not say it now because it's crossed my mind. And my bad idea was what if we stopped porting over fixtures to FactoryBot and instead just added the fixtures to a directory that RSpec would look at, so then we can rely on those fixtures? And then that way, we're literally then ideally just copying over from Test::Unit over to RSpec.
But it does mean a couple of things. Well, one, it means that we're now running those fixtures at the beginning of the RSpec tests. We're introducing another pattern where these tests are already using FactoryBot, but now they have fixtures at the top, and then we won't get to delete the fixtures. So we had a conversation around how to manage and mitigate some of those concerns. And we're still in that exploratory phase. We're going to test it out and see if this really speeds us up referencing the fixtures.
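For readers following along, a minimal sketch of what that setup might look like with rspec-rails; the directory, fixture sets, and model names here are illustrative assumptions, not the project's actual code:

```ruby
# spec/rails_helper.rb
RSpec.configure do |config|
  # Point rspec-rails at the fixture files ported over from Test::Unit.
  config.fixture_path = "#{Rails.root}/spec/fixtures"
  config.use_transactional_fixtures = true
end

# spec/models/order_spec.rb (hypothetical example spec)
RSpec.describe Order do
  # Load only the fixture sets this legacy test relied on.
  fixtures :orders, :users

  it "preserves the behavior the Test::Unit test covered" do
    expect(orders(:pending_order).user).to eq(users(:admin))
  end
end
```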
The question that's wrapped up in this is there's something different between how fixtures generate data and how factories generate data. So I've run into this a couple of times now where I moved data over to just call a factory. But then I was hitting these callbacks or after-save hooks or weird things that were then preventing me from creating the record, even though fixtures were creating them just fine. And then Joël pointed out today that he was running into something similar where there were private methods that were getting called. And there was all sorts of additional code getting run with factories versus fixtures. And I don't have an answer.
Like, I haven't looked into this. And it's frankly intentional because I was trying hard to not dive into understanding the mechanics. We really want to get through this. But now I'm starting to ponder a little more as to what is different between fixtures and factories. And I like that factories run these callbacks; that feels correct. But I'm surprised that fixtures don't, or at least that's the experience that I'm having.
So there's some funkiness there that I'd like to explore. I'll be honest; I don't know if I'm going to. But if anybody happens to know what that funkiness is or why fixtures and factories are different in that regard, I would be very intrigued because, at some point, I might look into it just because I would like to know.
CHRIS: Oh, that is interesting. I have not really worked with fixtures much at all. I've lived a factory life myself, and thus that's where almost all of my experience is. I'm not super surprised if this ends up being the case, like, the idea that fixtures are just some data that gets shoveled into the database directly as opposed to FactoryBot going through the model layer. And so it's sort of like that difference. But I don't know that for certain. That sounds like what this is and makes sense conceptually.
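As a rough illustration of that difference (the model, callback, and mailer here are hypothetical): Rails fixtures are written to the database more or less directly, so model callbacks and validations never run, while FactoryBot builds records through ActiveRecord:

```ruby
class User < ApplicationRecord
  validates :email, presence: true
  after_create :send_welcome_email # hypothetical callback

  private

  def send_welcome_email
    WelcomeMailer.welcome(self).deliver_later # hypothetical mailer
  end
end

# spec/fixtures/users.yml -- rows are inserted directly, so the
# after_create callback never fires and validations are skipped:
#
#   admin:
#     email: admin@example.com

# FactoryBot goes through the model layer, so callbacks and
# validations run just as they would in application code:
FactoryBot.create(:user, email: "admin@example.com") # triggers send_welcome_email
```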
But I think this is what you were saying like, that also kind of pushes me more in the direction of factories because it's like, oh, they're now representative. They're using our model layer, where we're defining certain truths. And I don't love callbacks as a mechanism. But if your app has them, then getting data that is representative is useful in tests. Like one of the things I add whenever I'm working with FactoryBot is the FactoryBot lint rake task RSpec thing that basically just says, "Are your factories valid?" which I think is a great baseline to have.
Because you may add a migration that adds a default constraint or something like that to the database that suddenly all your factories are invalid, and it's breaking tests, but you don't know it. Like subtly, you change it, and it doesn't actually break a test, but then it's harder later. So that idea of just having more correctness baked in is always nice, especially when it can be automated like that, so definitely a fan of that. But yeah, interested if you do figure out the distinction.
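The check Chris is referring to is FactoryBot.lint; a rake task along these lines (adapted from the factory_bot documentation, with the task name as an assumption) builds every factory and raises if any of them produce an invalid record:

```ruby
# lib/tasks/factory_bot.rake
namespace :factory_bot do
  desc "Verify that every FactoryBot factory builds a valid record"
  task lint: :environment do
    if Rails.env.test?
      ActiveRecord::Base.transaction do
        # Raises FactoryBot::InvalidFactoryError if any factory (or trait) is invalid.
        FactoryBot.lint traits: true
        raise ActiveRecord::Rollback # leave the test database clean
      end
    else
      # Re-run the task in the test environment so linting never touches dev data.
      system("bundle exec rake factory_bot:lint RAILS_ENV=test") || exit(1)
    end
  end
end
```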
I do like your take, though, of like, but also, maybe I just won't figure this out. Maybe this isn't worth figuring it out. Although you were in the interesting spot of, you could just port the fixtures over and then be done and call the larger body of work done. But it's done in sort of a half-complete way, so it's an interesting trade-off space. I'm also interested to hear where you end up on that.
STEPH: Yeah, it's a tough trade-off. It's one that we don't feel great about. But then it's also recognizing what's the true value of what we're trying to deliver? And it also comes down to the idea of churn versus complexity. And I feel like we are porting over existing complexity and even adding a smidge, not actual complexity but adding a smidge of indirection in terms that when someone sees this file, they're going to see a mixed-use of fixtures and factories, and that doesn't feel good.
And so we've already talked about adding a giant comment above fixtures that just is very honest and says, "Hey, these were ported over. Please don't mimic this. But this is some legacy tests that we have brought over. And we haven't migrated the fixtures over to use factories." And then, in regards to the churn versus complexity, this code isn't likely to get touched like these tests. We really just need them to keep running and keep validating scenarios. But it's not likely that someone's going to come in here and really need to manage these anytime soon. At least, this is what I'm telling myself to make me feel better about it.
So there's also that idea of yes, we are porting this over. This is also how they already exist. So if someone did need to manage these tests, then going to Test::Unit, they would have the same experience that they're going to have in RSpec. So that's really the crux of it is that we're not improving that experience. We're just moving it over and then trying to communicate that; yes, we have muddied the waters a little bit by introducing this other pattern. So we're going to find a way to communicate why we've introduced this other pattern, but that way, we can stay focused on actually porting things over to RSpec.
As for the factories versus fixtures, I feel like you're onto something in terms of it's just skipping that model layer. And that's why a lot of that functionality isn't getting run. And I do appreciate the accuracy of factories. I'd much rather know is my data representative of real data that can get created in the world? And right now, it feels like some of the fixtures aren't.
Like, how they're getting created seemed to bypass really important checks and validations, and that is wrong. That's not what we want to have in our test is, where we're creating data that then the rest of the application can't truly create. But that's another problem for another day. So that's an update on a trade-off that we have made in regards to the testing journey that we are on. What's going on in your world?
CHRIS: Well, we got to do something exciting this week. I was working on some code. This is using dry-monads, the dry-rb space. So we have these result objects that we use pretty pervasively throughout the app, and often, we're in a controller. We run one of these command objects. So it's create user, and create user actually encompasses a ton of logic in our app, and that object returns a result.
So it's either a success or a failure. And if it's a success, it'll be a success with that new user wrapped up inside of it, or if it's a failure, it's a specific error message. Actually, different structured error messages in different ways, some that would be pushed to the form, some that would be a flash message. There are actually fun, different things that we do there.
But in the controller, when we interact with those result objects, typically what we'll do is we'll say result equals create user dot run (result = CreateUser.run), and then pass it whatever data it needs. And then on the next line, we'll say result dot either (result.either), which is a method on these result objects. It's on both the success and failure so you can treat them the same. And then you pass what ends up being a lambda or a stabby proc, or I forget what they are. But one of those sort of inline function type things in Ruby that always feel kind of weird.
But you pass one of those, and you actually pass two of them, one for the success case and one for the failure case. And so in the success case, we redirect back with a notice of congratulations, your user was created. Or, in the failure case, we potentially do a flash message of an alert, or we send the errors down, or whatever it ends up being. But it allows us to handle both of those cases. But it's always been syntactically terrible, is how I would describe it. It's, yeah, I'm just going to leave it at that. We are now living in a wonderful, new world.
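A sketch of the controller shape being described here; CreateUser, the routes, and the flash copy are placeholder names rather than the real application code:

```ruby
class UsersController < ApplicationController
  def create
    # The command object returns a dry-monads Result: Success(user) or Failure(error).
    result = CreateUser.run(user_params)

    result.either(
      ->(user)  { redirect_to user_path(user), notice: "Congratulations, your user was created." },
      ->(error) { flash.now[:alert] = error; render :new, status: :unprocessable_entity }
    )
  end

  private

  def user_params
    params.require(:user).permit(:email, :name) # illustrative attributes
  end
end
```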
This has been something that I've wanted to try for a while. But I finally realized we're actually on Ruby 2.7, and so thus, we have access to pattern matching in Ruby. So I get to take it for a spin for the first time, realizing that we were already on the correct version. And in particular, dry-monads has a page in their docs specific to how we can take advantage of pattern matching with the result objects that they provide us. There's nothing specific in the library as far as I understand it. This is just them showing a bunch of examples of how one might want to do it if they're working with these result objects.
But it's really great because it gives the ability to interact with, you know, success is typically going to be a singular case. There's one success branch to this whole logic, but there are like seven different ways it can fail. And that's the whole idea as to why we use these command objects and the whole Railway Oriented Programming and that whole thing which I have...what is this word? [laughs] I feel like I should know it. It's a positive rant. I have raved; that is how our users kindly pointed that out to us. I have raved about the Railway Oriented Programming that allows us to do.
But it's that idea that they're actually, you know, there's one happy path, and there are seven distinct failure modes, seven unhappy paths. And now, using pattern matching, we actually get a really expressive, readable, useful way to destructure each of those distinct failures to work with the particular bits of data that we need. So it was a very happy day, and I got to explore it. This is, again, a feature of Ruby, not a feature of dry-monads. But dry-monads just happens to embrace it and work really well with it. So that was awesome.
STEPH: That is awesome. I've seen one or two; I don't know, I've seen a couple of tweets where people are like, yeah, Ruby pattern matching. I haven't found a way to use it. So I'm excited that you just shared a way that you found to use it. I'm also worried what it says about our developer culture that we know the word rant so well, but rave, we always have to reach back into our memory to be like, what's that positive word or something that we like? [laughs]
CHRIS: And especially here on The Bike Shed, where we try to gravitate towards the positive. But yeah, it's an interesting point that you make.
STEPH: We're a bunch of ranters. It's what we do, pranting ranters. I don't know why we're pranting. [laughs]
CHRIS: Because it's that exciting. That's what it is. Actually, there was an interesting thing as we were playing around with the pattern matching code, just poking around in the console session with it, and it prints out a deprecation warning. It's like, warning: this is an experimental feature. Do not use it, be careful. But in the back of my head, I was like, I actually know how this whole thing plays out, Ruby 2.7, and I assure you, it's going to be fine. I have been to the future, at least I'm pretty sure.
I think the version that is in Ruby 2.7 did end up getting adopted basically as it stands. And so, I think there is also a setting to turn off that deprecation warning. I haven't done it yet, but I mostly just enjoyed the conversation that I had with this deprecation message of like, listen, I've been to the future, and it's great. Well, it's complicated, but specific to this pattern matching [laughs] in Ruby 3+ versions, it went awesome. And I'm really excited about that future that we now live in.
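For reference, that setting does exist in Ruby 2.7: the experimental-feature warning category can be silenced either in code or from the command line.

```ruby
# e.g. in config/boot.rb, silences "warning: Pattern matching is experimental..."
Warning[:experimental] = false

# Or when invoking Ruby directly:
#   ruby -W:no-experimental app.rb
```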
STEPH: I wish we had that for so many more things in our life [laughs] of like, here's a warning, and it's like, no, no, I've seen the future. It's all right. Or you're totally right; I should avoid and back out of this now.
CHRIS: If only we could know how the things would play out, you know. But yeah, so pattern matching, very cool. I'll include a link in the show notes to the particular page in the dry-monads docs. But there are also other cool things on the internet. In an unrelated but also cool thing that I found this week, we use Tuple a lot within our organization for pair programming. For anyone who's not familiar with it, it's a really wonderful piece of technology that allows you to pair program pretty seamlessly, better video quality, all of those nice things that we want.
But I found there was just the tiniest bit of friction in starting a Tuple call. I know I want to pair with this person. And I have to go up and click on the little menu bar, and then I have to find their name, then I have to click a button. That's just too much. That's not how...I want to live my life at the keyboard. I have a thing called Bartender, which is a little menu bar manager utility app that will collapse down and hide the icons. But it's also got a nice, little hotkey accessible pop-up window that allows me to filter down and open one of the menu bar pop-out menus.
But unfortunately, when that happens, the Tuple window isn't interactive at that point. I can't use the arrow keys to go up and down. And so I was like, oh, man, I wonder if there's like an Alfred workflow for this. And it turns out indeed there is actually managed by the kind folks at Tuple themselves. So I was able to find that, install it; it's great. I have it now. I can use that.
So that was a nice little upgrade to my workflow. I can just type like TC space and then start typing out the person's name, and then hit enter, and it will start a call immediately. And it doesn't actually make me more productive, but it makes me happier. And some days, that's what matters.
STEPH: That's always so impressive to me when that happens where you're like, oh, I need a thing. And then you went through the saga that you just went through. And then the people who manage the application have already gotten there ahead of you, and they're like, don't worry, we've created this for you. That's one of those just beautiful moments of like, wow, y'all have really thought this through on a bunch of different levels and got there before me.
CHRIS: It's somewhat unsurprising in this case because it's a very developer-centric organization, and Ben's background being a thoughtbot developer and Alfred user, I'm almost certain. Although I've seen folks talking about Raycast, which is the new hotness on the quick launcher world. I started eons ago in Quicksilver, and then I moved to Alfred, I don't know, ten years ago. I don't know what time it is anymore.
But I've been in Alfred land for a while, but Raycast seems very cool. Just as an aside, I have not allowed myself... [laughs] this is another one of those like; I do not have permission to go explore this new tool yet because I don't think it will actually make me more productive, although it could make me happier. So...
STEPH: I haven't heard of that one, Raycast. I'm literally adding it to the show notes right now as a way so you can find The Duke later, and I can find Raycast later [chuckles] and take a look at it and check it out. Although I really haven't embraced the whole Alfred workflow. I've seen people really enjoy it and just rave about it and how wonderful it is. But I haven't really leaned into that part of the world; I don't know why. I haven't set any hard and fast rules for myself where I can't play around with these technologies, but I haven't taken the time to do it either.
CHRIS: You've also not found yourself writing thousands of lines of Vimscript because you thought that was a good idea. So you don't need as many guardrails it would seem. That's my guess.
STEPH: This is true.
CHRIS: Whereas I need to be intentional [laughs] with how I structure my interaction with my dev tools.
STEPH: Instead, I'm just porting over fixtures from one place to another. [laughs] That's the weird space that I'm living in instead. [laughs]
CHRIS: But you're getting paid for that. No one paid me for the Vimscript I wrote.
[laughter]
STEPH: That's fair. Speaking of process-y things, there's something that's been on my mind that Valeria, another thoughtboter, suggested around how we structure our meetings and the default timing that we have for meetings. So Thursdays are my team-focused day. And it's the day where I have a lot of one on ones. And I realized that I've scheduled them back to back, which is problematic because then I have zero break in between them. I'm less concerned about that part because I can go for an hour or something and not have a break, so I'm not worried about that.
But it does mean that if one of those discussions happens to go over just even for like two or three minutes, then it means that someone else is waiting for me in those two to three minutes. And that feels unacceptable to me. So Valeria brought up a really good idea where I think it's only with the Google Meet paid version. I could be wrong there. But I think with the paid version of it that then you can set the new default for how long a meeting is going to last.
So instead of having it default to 30 minutes, have it default to 25 minutes. So then, that way, you do have that five-minute buffer. So if you do go over just like two or three minutes with someone, you've still got like two minutes to then hop to the next call, and nobody's waiting for you. Or if you want those five minutes to then grab some water or something like that.
So we haven't implemented it just yet because then there's discussion around is this a new practice that we want everybody to move to? Because I mean, if just one person does it, it doesn't work. You really need everybody to buy into the concept of we're now defaulting to 25 versus 30-minute meetings. So I'll have to let you know how that goes. But I'm intrigued to try it out because I think that would be very helpful for me.
Although there's a part of me that then feels bad because it's like, well, if I have 30 minutes to chat with somebody, but now I'm reducing it to 25 minutes each time, I didn't love that I'm taking time away from our discussion. But that still feels like a better outcome than making somebody wait for three to five minutes if something else goes over. So have you ever run into something like that? How do you manage back-to-back meetings? Do you intentionally schedule a break in between or?
CHRIS: I do try to give myself some buffer time. I stack meetings but not so much so that they're just back to back. So I'll stack them like Wednesdays are a meeting-heavy day for me. That's intentional just to be like, all right, I know that my day is going to get chopped up. So let's just really lean into that, chop the heck out of Wednesday afternoons, and then the rest of the week can hopefully have slightly longer deep work-type sessions. And, yeah, in general, I try and have like a little gap in between them.
But often what I'll do for that is I'll stagger the start of the next meeting to be rather than on the hour or the half-hour, I start it on the 15th minute. And so then it's sort of I now have these little 15-minute gaps in my workflow, which is enough time to do one or two small things or to go get a drink or whatever it is or if things do run over. Like, again, I feel what you're saying of like, I don't necessarily want to constrain a meeting. Or I also don't necessarily want to go into the habit of often over-running.
I think it's good to be intentional. Start meetings on time, end meetings on time. If there's a great conversation that's happening, maybe there's another follow-up meeting that should happen or something like that. But for as nonsensical of a human as I believe myself to be, I am rather rigid about meetings. I try very hard to be on time. I try very hard to wrap them up on time to make sure I go to the next one. And so with that, the 15-minute staggering is what I've found works for me.
STEPH: Yeah, that makes sense. One-on-ones feels special to me because I wholeheartedly agree with being very diligent about like, hey, this is our meeting time. Let's do a time check. Someone says that at the end, and then that way, everybody can move on. But one on ones are, there's more open discussion space, and I hate cutting people off, especially because it might not be until the last 15 minutes that you really got into the meat of the conversation.
Or you really got somewhere that's a little bit more personal or things that you want to talk about. So if someone's like, "Yeah, let me tell you about my life goals," and you're like, "Oh, no, wait, sorry. We're out of time." That feels terrible and tragic to do. So I struggle with that part of it.
CHRIS: I will say actually, on that note, I'm now thinking through, but I believe this to be true. Everyone that reports to me I have a 45-minute one-on-one with, and then my CEO I set up the one-on-one. So I also made that one a 45-minute one-on-one. And that has worked out really well.
Typically, I try and structure it and reiterate this from time to time of, like, hey, this is your space, not mine. So let's have whatever conversation fits in here. And it's fine if we don't need to use the whole time, but I want to make sure that we have it and that we protect it. Because I often find much like retro, I don't know; I think everything's fine. And then suddenly the conversation starts, and you're like, you know what? Actually, I'm really concerned now that you mentioned it. And you need that sort of empty space that then the reality sort of pop up into.
And so with one on one, I try and make sure that there is that space, but I'm fine with being like, we can cut this short. We can move on from one-on-one topics to more of status updates; let's talk about the work. But I want to make sure that we lead with is there anything deeper, any concerns, anything you want to talk through? And sort of having the space and time for that.
STEPH: I like that. And I also think it speaks more directly to the problem I'm having because I'm saying that we keep running over a couple of minutes, and so someone else is waiting. So rather than shorten it, which is where I'm already feeling some pain...although I still think that's a good idea to have a default of 25-minute meetings so then that way, there is a break versus the full 30. So if people want to have back-to-back meetings, they still have a little bit of time in between.
But for one on ones specifically, upping it to 45 minutes feels nice because then you've got that 15-minute buffer likely. I mean, maybe you schedule a meeting, but, I don't know, that's funky. But likely, you've got a 15-minute buffer until your next one. And then that's also an area that I feel comfortable in sharing with folks and saying, "Hey, I've booked this whole 45 minutes. But if we don't need the whole time, that's fine."
I'm comfortable saying, "Hey, we can end early, and you can get more of your time back to focus on some other areas." It's more the cutting someone off when they're talking because I have to hop to the next thing. I absolutely hate that feeling. So thanks, I think I'll give that a go. I think I'll try actually bumping it up to 45 minutes, presuming that other people like that strategy too, since they're opting in [laughs] to the 45 minutes structure. But that sounds like a nice solution.
CHRIS: Well yeah, happy to share it. Actually, one interesting thing that I'm realizing, having been a manager at thoughtbot and then now being a manager within Sagewell, the nature of the interactions are very different. With thoughtbot, I was often on other projects. I was not working with my team day to day in any real capacity. So it was once every two weeks, I would have this moment to reconnect with them. And there was some amount of just catching up. Ideally, not like status update, low-level sort of thing, but sort of just like hey, what have you been working on? What have you been struggling with? What have you been enjoying?
There was more like I needed bigger space, I would say for that, or it's not surprising to me that you're bumping into 30 minutes not being quite long enough. Whereas regularly, in the one on ones that I have now, we end up cutting them short or shifting out of true one-on-one mode into more general conversation and chatting about Raycast or other tools or whatever it is because we are working together daily. And we're pairing very regularly, and we're all on the same project and all sorts of in sync and know what's going on. And we're having retro together. We have plenty of places to have the conversation.
So the one-on-one again, still, I keep the same cadence and the same time structure just because I want to make sure we have the space for any day that we really need that. But in general, we don't. Whereas when I was at thoughtbot, it was all the more necessary. And I think for folks listening; I could imagine if you're in a team lead position and if you're working very closely with folks, then you may be on the one side of things versus if you're a little bit more at a distance from the work that they're doing day to day. That's probably an interesting question to ask, and think about how you want to structure it.
STEPH: Yeah, I think that's an excellent point. Because you're right; I don't see these individuals. We may not have really gotten to interact, except for our daily syncs outside of that. So then yeah, there's always like a good first 10 minutes of where we're just chatting about life and catching up on how things are going before then we dive into some other things. So I think that's a really good point. Cool, solving management problems on the mic. I dig it.
In slightly different news, I've joined a book club, which I'm excited about. This book club is about Ruby. It's specifically reading the book Ruby Science, which is a book that was written and published by thoughtbot. And it requires zero homework, which is my favorite type of book club. Because I have found I always want to be part of book clubs. I'm always interested in them, but then I'm not great at budgeting the time to make sure I read everything I'm supposed to read. And so then it comes time for folks to get together. And I'm like, well, I didn't do my homework, so I can't join it.
But for this one, it's being led by Joël, and the goal is that you don't have to do the homework. And they're just really short sections. So whoever's in charge of leading that particular session of the book club they're going to provide an overview of what's covered in whatever the reading material that we're supposed to read, whatever topic we're covering that day. They're going to provide an overview of it, an example of it, so then we can all talk about it together. So if you read it, that's wonderful. You're a bit ahead and could even join the meeting like five minutes late. Or, if you haven't read it, then you could join and then get that update. So I'm very excited about it.
And this was one of those books that I'd forgotten that thoughtbot had written, and it's one that I've never read. And it's public for anybody that's interested in it. So to cover a little bit of details about it, so it talks about code smells, ways to refactor code, and then also common patterns that you can use to solve some issues. So there's a lot of really just great content that's in it. And I'll be sure to include a link in the show notes for anyone else that's interested.
CHRIS: And again, to reiterate, this book is free at this point. Previously, it was available for purchase. But at one point a number of years ago, thoughtbot set all of the books free. And so now that, along with a handful of other books like...what's Edward's DNS book? Domain Name Sanity, I believe, is the book that Edward Loveall wrote when he was not a thoughtboter, [laughs] and then he later joined as a thoughtboter, and then we made the book free.
But on the specific topic of Ruby Science, that is a book that I will never forget. And the reason I will never forget it is that book was written by the one and only CTO Joe Ferris, who is an incredibly talented developer. And when I was interviewing with thoughtbot, I got down to the final day, which is a pairing session. You do a morning pairing session with one thoughtbot developer, and you do an afternoon pairing session with another thoughtbot developer.
So in the morning, I was working with someone on actually a patch to Rails which was pretty cool. I'd never really done that, so that was exciting. And that went fine with the exception that I kept turning on Caps Lock on their keyboard because I was used to Caps Lock being CTRL, and then Vim was going real weird for me. But otherwise, that went really well. But then, in the afternoon, I was paired with the one and only CTO Joe Ferris, who was writing the book Ruby Science at that time.
And the nature of the book is like, here's a code sample, and then here's that code sample improved, just a lot of sort of side-by-side comparisons of code. And I forget the exact way that this went, but I just remember being terrified because Joe would put some code up on the screen and be like, "What do you think?" And I was like, oh, is this the good code or the bad code? I feel like I should know. I do not know. I'm not sure. It worked out fine, I guess. I made it through. But I just remember being so terrified at that point. I was just like, oh no, this is how it ends for me. It's been a good run.
STEPH: [laughs]
CHRIS: I made it this far. I would have loved to work for this nice thoughtbot company, but here we are. But yeah, I made it through. [laughs]
STEPH: There are so many layers to that too where it's like, well if I say it's terrible, are you going to be offended? Like, how's this going to go for me if I speak my truths? Or what am I going to miss? Yeah, that seems very interesting (I kind of like it.) but also a terrifying pairing session.
CHRIS: I think it went well because I think the code...I'd been following thoughtbot's work, and I knew who Joe was and had heard him on podcasts and things. And I kind of knew roughly where things were, and I was like, that code looks messy. And so I think I mostly got it right, but just the openness of the question of like, what do you think? I was like, oh God. [laughs] So yeah, that book will always be in my memories, is how I would describe it.
STEPH: Well, I'm glad it worked out so we could be here today recording a podcast together. [laughs]
CHRIS: Recording a podcast together. Now that I say all that, though, it's been a long time since I've read the book. So maybe I'll take a revisit. And definitely interested to hear more about your book club and how that goes.
But shifting ever so slightly (I don't have a lot to say on this topic), there's a new framework technology thing out there that has caught my attention. And this hasn't happened for a while, so it's kind of novel for me. I tend to try and keep my eye on where the general trend of web development is going. And I found Inertia a while ago, and I've been very, very happy with that as sort of the default answer as to how I build websites.
To be clear, Inertia is still the answer as to how I build websites. I love Inertia. I love what it represents. But I'm seeing some stuff that's really interesting that is different. Specifically, Remix.run is the thing that I'm seeing. I mentioned it, I think, in the last episode talking about there was some stuff that they were doing with data loading and async versus synchronous, and do you wait on it or? They had built some really nice levers and trade-offs into the framework. And there's a really great talk that Ryan Florence, one of the creators of Remix.run, gave about that and showed what they were building.
I've been exploring it a little bit more in-depth now. And there is some really, really interesting stuff in Remix. In particular, it's a meta-framework, I think, is the nonsense phrase that we use to describe it. But it's built on top of React. That won't be true for forever. I think it's actually they would say it's more built on top of React Router. But it is very similar to Next.js for folks that have seen that. But it's got a little bit more thought around data loading. How do we change data? How do we revalidate data after?
There's a ton of stuff that, having worked in many React client-side API-heavy apps that there's so much pain, cache invalidation. How do you think about the cache? When do you fetch from the network? How do you avoid showing 19 different loading spinners on the page? And Remix as a framework has some really, I think, robust and well-thought-out answers to a lot of that. So I am super-duper intrigued by what they're doing over there. There's a particular video that I think shows off what Remix represents really well. It's Ryan Florence, that same individual, the creator of Remix, building just a newsletter signup page.
But he goes through like, let's start from the bare bones, simplest thing. It's just an input, and a form submits to the server. That's it. And so we're starting from web 2.0, long, long ago, sort of ideas, and then he gradually enhances it with animations and transitions and error states. And even at the end, goes through an accessibility audit using the screen reader to say, "Look, Remix helps you get really close because you're just using web fundamentals."
But then goes a couple of steps further and actually makes it work really, really well for a screen reader. And, yeah, overall, I'm just super impressed by the project, really, really intrigued by the work that they're doing. And frankly, I see a couple of different projects that are sort of in this space. So yeah, again, very early but excited.
STEPH: On their website...I'm checking it out as you're walking me through it, and on their website, they have "Say goodbye to Spinnageddon." And that's very cute. [laughs]
CHRIS: There's some fundamental stuff that I think we've just kind of as a web community, we made some trade-offs that I personally really don't like. And that idea of just spinners everywhere just sending down a ball of application logic and a giant JavaScript file turning it on on someone's computer. And then immediately, it has to fetch back to the server. There are just trade-offs there that are not great. I love that Remix is sort of flipping that around.
I will say, just to sort of couch the excitement that I'm expressing right now, that Remix exists in a certain place. It helps with building complex UIs. But it doesn't have anything in the data layer. So you have to bring your own data layer and figure out what that means. We have ActiveRecord within Rails, and it's deeply integrated. And so you would need to bring a Prisma or some other database connection or whatever it is. And it also doesn't have more sort of full-featured framework things. Like with Rails, it's very easy to get started with a background job system. Remix has no answer to that because they're like, no, no, this is what we're doing over here.
But similarly, security is probably the one that concerns me the most. There's an open conversation in their discussion portal about CSRF protection and a back and forth of whether or not Remix should have that out of the box or not. And there are trade-offs because there are different adapters that you can use for auth. And each would require their own CSRF mitigation. But to me, that is the sort of thing that I would want a framework to have.
Or I'd be interested in a framework that continues to build on top of Remix that adds in background jobs and databases and all that kind of stuff as a complete solution, something more akin to a Rails or a Laravel where it's like, here we go. This is everything. But again, having some of these more advanced concepts and patterns to build really, really delightful UIs without having to change out the fundamental way that you're building things.
STEPH: Interesting. Yeah, I think you've answered a couple of questions that I had about it. I am curious as to how it fits into your current tech stack. So you've mentioned that you're excited and that it's helpful. But given that you already have Rails, and Inertia, and Svelte, does it plug and play with the other libraries or the other frameworks that you have? Are you going to have to replace something to then take advantage of Remix? What does that roadmap look like?
CHRIS: Oh yeah, I don't expect to be using Remix anytime soon. I'm just keeping an eye on it. I think it would be a pretty fundamental shift because it ends up being the server layer. So it would replace Rails. It would replace the Inertia within the stack that I'm using. This is why as I started, I was like, Inertia is still my answer. Because Inertia integrates really well with Rails and allows me to do the sort of it's not progressive enhancement, but it's like, I want fancy UI, and I don't want to give up on Rails. And so, Inertia is a great answer for that. Remix does not quite fit in the same way. Remix will own all of the request-response lifecycle.
And so, if I were to use it, I would need to build out the rest of that myself. So I would need to figure out the data layer. I would need to figure out other things. I wouldn't be using Rails. I'm sure there's a way to shoehorn the technologies together, but I think it sort of architecturally would be misaligned. And so my sense is that folks out there are building...they're sort of piecing together parts of the stack to fill out the rest.
And Remix is a really fantastic routing, controller, and view experience from there on down. So it's routing, controller, and view that I would say Remix has a really great answer to, but it doesn't have as much of the other stuff. Whereas in my case, Inertia and Rails come together and give me a great answer to the whole story.
STEPH: Got it. Okay, that's super helpful.
CHRIS: But yeah, again, I'm in very much the exploratory phase. I'm super intrigued by a lot of what I've seen of it and also just sort of the mindset, the ethos of the project as it were. That sounds fancy as I say it, but it's what I mean. I think they want to build from web fundamentals and then enhance the experience on top of that, and I think that's a really great way to go. It means that links will work. It means that routing and URLs will work by default.
It means that you won't have loading spinner Armageddon, and these are core fundamentals that I believe make for good websites and web applications. So super interested to see where they go with it. But again, for me, I'm still very much in the Rails Inertia camp. Certainly, I mean, I've built Sagewell on top of it, so I'm going to be hanging out with it for a while, but also, it would still be my answer if I were starting something new right now.
I'm just really intrigued by there's a new example out there in the world, this Remix thing that's pushing the envelope in a way that I think is really great. But with that, my now…what was that? My second or my third rave? Also called the positive rant, as we call it. But yeah, I think on that note, what do you think? Should we wrap up?
STEPH: Let's wrap up.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at hosts@bikeshed.fm via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

Sponsored By: BuildPulse. Start your 14-day free trial of BuildPulse today. To learn more, visit buildpulse.io/bikeshed.

Support The Bike Shed

Jun 21, 2022 • 31min
343: Opt-In To Oversharing
Chris is weathering a slight lull, a holding period, where his team waits for new features to become available with some of the platforms they integrate with and thinks out new facets of the platform they're building.
Steph has been thinking recently about working in isolation. It's a topic that Joël Quenneville pointed out to her: can engineers work in isolation and be successful?
Become a Sponsor of The Bike Shed!
Transcript:
CHRIS: Always be singing.
STEPH: I can't remember if I've shared the story with you. But I had a beautiful little human moment with someone at airport security. Because when I travel with my mic, I always get stopped because there's the middle long, thin piece that looks like what you would screw on to a gun like for a silencer. And so [laughs] as he was going through, the person was looking at it, and then he called over a buddy. And then they called over another buddy, and there's like three TSA agents all looking at the X-ray screen. And finally, they're like, "Yeah, we need to flag it." So they moved it over.
And then he was digging through, and he pulled out the big metal piece. And I said, "It's for a microphone." And he's like, "Okay," and he kept looking, and then he finally found the microphone. And he lit up because I guess he wasn't really sure to believe me at first when I said it. But he lit up, and he was like, "Karaoke?" [laughs] I was like, "No, it's for podcasting."
CHRIS: But not 100% no because we do sing plenty on this show, so...
STEPH: I think that's what made me think of it. It was your singing. [laughs]
CHRIS: Yep. My wonderful, wonderful singing.
STEPH: Hello and welcome to another episode of The Bike Shed, a weekly Podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. So, hey, Chris? What's new in your world?
CHRIS: What's new in my world? We are in sort of a...what's the word? There's a bit of a lull right now, not like a big lull, but we had a bunch of clear work that came into the team, did a bunch of iterations, some testing, built some new features, et cetera. And now there's a small holding period basically where we wait for some new features to become available with some of the platforms that we integrate with and also as we think out some new facets of the platform that we're building.
So we've got this little bit of time here where we're not necessarily building out as many new novel features. But instead, as a dev team, we're taking this moment to be like, oh, cool, let's tie down. I want to make a sailing analogy here, but I don't know sailing. It's like tie down the somethings and batten the hatches, maybe. That sounds like a thing. [chuckles] But so we have a couple of projects going right now. We want to really accept the truth and lean into Sidekiq. So right now, we have a mix of ActiveJob and Sidekiq jobs. And they're confusing, and et cetera, et cetera.
So we want to kind of lean into that, upgrade dependencies, that sort of thing. We are, again, doing a little bit of work on the observability foundation of our system, so how do we know what's going on at runtime? Also, working on just some core features and functionality. We have done a little bit of an exploration into the event processing stuff, some of that that I've been talking about. It's actually been very interesting. So we're working with Customer.io as a platform, which is an omnichannel, behavior-based messaging sort of thing. So when a user does X, send them an email and then wait three days. And if they haven't responded, then do this other thing.
And I think I've said this in previous episodes; I'm so wildly impressed with that platform. They have done such a good job. And I know that good software doesn't happen in a vacuum. In fact, if we're being honest, a lot of the software out there is not very good. And not only do they do a good job, but it's across...there's a ton of functionality in Customer.io.
And it's interesting because we're finding ourselves leaning into it even more because it is such a solid platform and because it connects into our event system. Like, it's a segment destination, so all of our analytics events get piped into Customer.io, and then we can action on any of them. And the actions can be quite complicated.
And this is where we're getting into good idea, terrible idea space. And to be clear, this is still just an exploration. But we basically wanted a way to do more. There are a bunch of different actions that you can take so, like send an email, send an SMS, or there are a couple of other slightly fancier ones. You can trigger an event within the Customer.io system. You can actually do an arbitrary HTTP POST, PUT, PATCH, whatever, any web requests you want to make. So if you want to integrate with essentially anything else out there, you can do that. You can send some structured data over the wire.
And so we've now been like, okay, what if, and stay with me here, what if we use our analytic system and we send events whenever a user does something, and then that event eventually trickles down to Customer.io? Within that, we allow ourselves to respond to that event by emitting a different event within the system, within Customer.io. And then, via the webhook functionality, we fire that back to the Rails application. And then there we can do whatever we want.
And in a way, that sounds absurd because we're starting from our app, and then we're sending some events down, processing them in certain ways, sending it back to the app, and then maybe doing something. In particular, one of the things we want to do is richly formatted Slack alerts. And Customer.io has a Slack alert functionality, but they can't have any of the fancy stuff. They can't link to our customer in the admin dashboard. So we found that that functionality is particularly useful for our admin team. And so we're like, ah, this feels weird.
But if we were to do this loop out and back, then ideally, we get the power of Customer.io for non-technical users or non-engineering team users to configure workflows and to say, "When a user does this, I actually want to alert the admin team via Slack." And we want it to be rich and have buttons that you can click and all that kind of stuff. And although the thing that I just described seems complicated, is a word that I'll use for it, confusing at times, it isn't...like, I don't want to do all of that in the app.
I don't want the app to have to think about how do I wait three days? We technically can do that with Sidekiq, but it gets us in trouble and whatnot, whereas Customer.io that's a core concept for them. And so, again, very much exploration. This will probably be a future good idea, terrible idea segment. But that's been an interesting one to explore.
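A very rough sketch of the loop-back being explored here; every name in it (the route, the event shape, the SlackClient wrapper, the admin URL helper) is hypothetical, and a real version would also verify the webhook signature before trusting the payload:

```ruby
# config/routes.rb (hypothetical)
#   post "/webhooks/customer_io", to: "customer_io_webhooks#create"

class CustomerIoWebhooksController < ApplicationController
  # Webhook POSTs don't carry a CSRF token; signature verification is omitted in this sketch.
  skip_before_action :verify_authenticity_token

  def create
    payload = JSON.parse(request.body.read)

    case payload["event"]
    when "admin_alert.interest_form_submitted" # event name emitted from a Customer.io workflow
      user = User.find(payload.dig("data", "user_id"))
      notify_admins(user)
    end

    head :ok
  end

  private

  def notify_admins(user)
    # Richly formatted Slack message with a link back to the admin dashboard,
    # which the built-in Customer.io Slack action can't produce.
    SlackClient.post_message(
      channel: "#ops-alerts",
      blocks: [
        {
          type: "section",
          text: {
            type: "mrkdwn",
            text: "New interest form from *#{user.email}* <#{admin_user_url(user)}|open in admin>"
          }
        }
      ]
    )
  end
end
```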
STEPH: You have quite a talent for you preface something as a bad idea, and you do a very good job of making it sound reasonable and good. [laughs] So it's interesting to be on that side of like, good idea, bad idea. It's like, I'm looking for the bad. And I have questions, but overall, [chuckles] you do a very good job of being very thoughtful and walking through why it makes sense or what are the benefits of it. So you answered some of my questions around why still send it to Customer.io versus just having it all in-house. So the fact that the admin team has access to it makes a lot of sense.
I want to clarify one point. So when you send it to Customer.io, Customer.io then needs to send a message back to your application. And then that's when you customize the Slack message. Do you need Customer.io to send that message, or could you just fire off an event to Customer.io to say, "Hey, capture this, but don't do anything with this. And then we're going to send the Slack message because we want to customize it."?
CHRIS: I think the key is that we want to leverage the fact that Customer.io is the platform that our operations team really is now becoming comfortable with and using for this behavioral automation workflow type logic. So that idea of when this event, you know, when this triggering event happens, if this condition is true, then respond in this way.
And so because Customer.io is the platform that A, is quite good at that and B, is where our admin team is now thinking about doing that, one thing that we might do let's say a user completes some action within the application. So they fill out a form to submit their interest in some new platform feature. Initially, what we might want to do there is alert ourselves to say, "Hey, this happened. Take some action." And then eventually, we may want to instead switch that over and send an email to the customer with the next steps that they need to do.
And the ability to gradually transition across that spectrum is really interesting to me, and again, Customer.io being the platform, sort of the hub for how we respond to these events. At the same time, I know that this feels like a generic message processing system that might be a Kafka queue somewhere else. And so I've got that in the back of my head of like, is this weird? I think it's a little weird. But it also, thus far as we're exploring it, is very approachable for the admin team, very familiar for them, and reasonably powerful.
And also, there's a drag and drop editor for the events and the payloads. And it knows for this event, here's the stuff that's available to you. And so the ability for our admin team to interact with that interface is really great. And we don't have to build it. We don't have to think about it. But I will say I've worked at so many different companies that have their ad hoc system that makes it easy to do generic X, Y, and Z. And it's bad, and it falls down. And it's impossible to know when anything happens.
And so, I've got a lot of concerns in the back of my head, which I will want to at least think through and understand the trade-offs that we're making if we pursue this path, but it is very interesting to me. So right now, a lot of this logic does live in the app. But it means that it requires a code change for anything that we want to do like this. We want to have a Slack alert whenever X happens. Now, the developers are in the loop for all of that. And really, it's the operations team that owns the decisioning on that.
And so if they can also self-serve and instrument the action, the alert, the follow-up, the whatever it is, if we can give them those primitives in a platform that they already understand, that sounds nice. I'm intrigued, is what I'll say. So anyway, while we're in this lull period, we are trying out some fun stuff like that and exploring those sorts of things.
STEPH: I like that perspective that you're putting on it, or at least the one that's standing out to me, which is the concept of ownership: who gets to own these actions. But then beyond that, that's the part where I feel a little squirmy: so we are using this third-party tool because it makes life easier. But then, at what point do we start building software around this third-party tool to then customize it back on our own side? Then if someone is in Customer.io, so if an admin user is in there and then they trigger an event, is there going to be confusion as to what's going to happen? And can they retry an event?
Because I'm realizing my initial suggestion where it was like, hey, notify Customer.io that this is there but then also manage sending the Slack message that would prevent them from being able to have that retry capability. And that may be very much worth preserving. So then it's understood that hey, if you want to manage this, we are giving you full access to manage this work. We may customize it, but this is still the interface in which you go through to have three tries or to manage that workflow or these actions that get sent to users.
CHRIS: Yeah. I think you've perfectly highlighted the why this might not be a great idea or at least the concerns to explore before adopting this more thoroughly. And even just the idea of adopting it more thoroughly, like, how tied into the system are we? How business-critical does this new external piece of software become? Because I've seen that to be really problematic where there are organizations that I've worked with that are like, "Oh God, we would love to move off of system X. But unfortunately, it's basically the one thing holding this business up." And I'm like, yeah, I get that. And that happens. So yeah, being really intentional with that.
And that's why we're very much in an exploration place. But we have a bunch of stuff that we've done that required engineering work. And we're now seeing like, actually, could we map this into this other tool? And can we build the set of primitives in that space that now this team can own that whole experience? And then critically, can they debug it? Will we know when something goes wrong, et cetera? Those are always parts. At this point, I don't think I can just imagine a happy path. And I hope this isn't true for the rest of my life.
But the work as a software developer, especially after having done a couple of rounds of it and as a consultant, I just imagine failure modes. It's all I do. I'll be like, okay, we just need to wire X up to Z, and then we need to fire off a request. And then, once we get the message back, then we can process them. I'm like, right. You just described 13 things that can go wrong. Now let's imagine each of the different failure states because that's all I'm going to do.
Who cares about the happy path? Those are easy. Those write themselves. It's all of the failure modes that I need to think about. And someday, when I retire, and I go to a log cabin in the woods, and I don't talk to people for a while, maybe I'll go back to a place of only happy paths. But that is not my truth right now.
STEPH: I can't tell you how many people in my personal life I have annoyed so much [laughs] because all I see are failure modes. And one, that's a delightful t-shirt. [laughs] I'd love to have that. And then yeah, I feel you because there are so many times where someone is...like, I'm with someone who's like a big idea person. And so they're just launching into what-ifs, and we did this. And I can't help it, and I have learned to help it.
But it has been a struggle with some strong feedback from family and friends to reel it in. Because then I will start to think through okay, well, what's the details? And I have some questions. What happens when this happens? And yeah, all I see are failure modes. [laughs] It is very true for me too, and not always...not so great. So I, too, shall get a log cabin one day and try to forget all of that.
CHRIS: I will say I painted that as a particularly glib version of myself. But some of what I'm doing right now, particularly joining an early-stage startup and taking the role of CTO, was very much to try and intentionally resist that. Because right now, I have to be really careful with how many of the potential edge cases and whatnot I'm considering. Exactly how robust of a platform are we building? Very is the answer. But what about extremely? Because extremely is an option, but extremely costs four times as much, mostly in time being the critical element there.
And so part of the work that I'm doing now is just trying to push on those edges, push on those boundaries, find the places where we can move quickly, and still build a robust platform because frankly, we're building...Sagewell is a financial platform under the hood, and I can't be flippant with that. We as a team have to be really careful with the thing that we're building. But we also have to move quickly.
We have to be able to iterate. We have to be able to build something and try it out and see if it works. And then, if it doesn't, maybe shelve it and pull it out of the codebase. And it has been a real challenge, but it was the challenge that I wanted here. And so I've been enjoying that work, but it has been a stretch, a growth moment, let's call it.
STEPH: I don't know if you've shared that particular goal with me in transitioning to a CTO role, but I really, really like it. One, it's very aligned with who you are. You're very thoughtful, and you look for areas to push and ways to do that. And then I also struggle in those areas, and thoughtbot specifically and consulting has helped push me in directions, push me out of my comfort zones but still in a safe space where I have other people to talk to as I'm making those decisions and pushing past the comfort areas that I have.
But one of them is that I will initially think things have to be perfect or really planned. And I had a really nice conversation with Chad Pytel, who is one of the Founders of thoughtbot and also COO and host of the Giant Robots Smashing Into Other Giant Robots Podcast. And we were chatting about a new offering that thoughtbot is bringing to the market. And it's one that I've been involved with. And I started getting really in the weeds of like, but we really have to plan out how this is going to look and all the actions that need to take place before then we can really sell this type of engagement to a new client.
And as I was going through this list of worries, when I was done, he mentioned he's like, "All of those are valid and something to consider." He's like, "But we don't have any customers yet." So the first part is we feel that we are in a space where we have enough information to get started. And it's something that we've done before. And then, we'd like to see where customers align with us on this need because we're going to end up shaping this work in response to what their needs are. And so, we can't really begin that shaping until we understand more of what people are looking for.
I was like, oh yeah, that's such a nice point. It just reminded me in regard to pushing those boundaries of yes, we need planning upfront, and we look for failure modes. But then there's also an important aspect of then finding ways to keep moving forward and getting more feedback and then balancing those two.
CHRIS: Yeah, I think that's definitely right: as always, anchoring it to the customer. What is it that they need? How do we connect with them and hear from them? And ideally, keep those feedback loops as short as possible. That's the game, and everything else fits around that. But yeah, so we're trying some stuff. We'll see how it goes. I will certainly report back, depending on how it plays out. But that's a little bit of what's up in my world. What's up in your world?
STEPH: I have been thinking recently about working in isolation. It's a topic that Joël Quenneville, who's another thoughtboter and has been on the show a number of times, had actually pointed out to me and mentioned. And so, I wanted to bring that here and share it with you because I'd love to get some of your thoughts on this as well. But I've typically had the viewpoint that when developers are sent off to work on a large, nebulous task, it's a recipe for disaster, and almost everyone's going to lose in that scenario. And it tends to be a combination of isolation, very distant due dates, and loosely defined scope that leads to those really poor results.
However, as developers, it's not inconceivable for us to land in that position. And it's very similar to my current project, which I'm working on with Joël, where we were given a very fuzzy project with some really aggressive goals, and the engagement is going really well. So that led Joël and I to wonder: why is this working? This is the thing that we said that people should never do, but it's actually going quite well for us.
So reflecting upon some of the things that are working well for us, even though we are in more of an isolated state than we would typically work, some of the things that I've been reflecting on or some of the strategies I should say that we've applied to this situation is number one, we did work hard to plug into an existing team. So when we joined, we joined more of an ad hoc volunteer team.
And in everybody's spare time, those individuals were then contributing to the CI process in terms of trying to speed things up and improve things for the rest of the team. But otherwise, there wasn't really a team. There wasn't much structure to it. So it felt like everybody was very much off in their own world doing their own thing, occasionally putting up some code changes for review. And then you had to gain a lot of context to understand what it was that they were doing.
So one of the things that I advocated for early on, which I thought was more of just my personal preference but I think has actually worked well in regards to the success of the project as well, is to plug into an existing team. So even if you are not working with that team on their day-to-day tasks, you want to have more people to interact with and more people to share your context with. So you are essentially reducing the isolation: you're no longer these two people who are off in a corner working on something, and nobody has any idea what you're doing, and only one person is getting a status update.
There is now a whole channel or team of people that have some insight as to what's going on. And they can also really unblock you for when you get stuck because then if you do have a question, but there's that one person who has been like your go-to person for this whole project, if they're out on vacation, or if they leave, or just something happens, you're suddenly blocked.
And you don't know who to go to because you've been part of this larger company, but you haven't interacted with anybody outside of that one person. So at least if you're plugged into another team, you've immediately got some friends or some other people to go to and say, "Hey, I'm not sure who can help me with this, but I have this problem." And then, from there, you can get more help.
CHRIS: This is super interesting. To start, I really like that you're framing this in terms of this is a thing that we often recommend against or see as an anti-pattern, and yet in this particular case, it's working. Let's look at that. Because I think the things that you're like, huh, that's interesting. That phrase "Huh, that's interesting" is very interesting. It often highlights like oh, something is behaving counter to how we would expect it to, so let's dig in and explore that. And so I love that that was the reaction and then sort of the conversation that spilled out of that.
I'm also not super surprised that the combination of you and Joël were able to find a way to make this successful because you are two of the most capable developers that I've worked with but also particularly excellent communicators and advocates for the work that you're doing and the way that one should do the work. So the idea that there's a situation that may not be the ideal mode of working and that you're able to take that and say, "What if we shift it just a little bit and make it a little bit more manageable and whatnot?" So unsurprised, frankly, that you found a way collectively to make this a little bit better.
And then I think yeah, it sounds like you're doing the things...so just like, we're in isolation, hmm, that doesn't seem great. Let's unisolate and connect to some people, and that just feels so true. I'm very interested to hear, though. I'm guessing there's more to this story or other things that you've done. Are there other tactics or ways that you've shifted this around?
STEPH: Yeah, there's a couple more. So this is one that (And thank you for the kind words.) this was one that I think Joël is really exceptional at. So Joël is really good at building diagrams and graphs and then sharing that with the team as sort of like we've spent a couple of days understanding this big, messy concept. Here's a nice condensed graph that shows how we went about understanding this. And then here's the big overall picture of what we've learned from this, which has been wonderful for so many reasons.
And every time that we share something with the team, one, it just helps build camaraderie, especially in remote days, it just builds camaraderie on hey, we're all online. And we're working. And here's the thing that I'm working through or struggling with or something that I learned. I often do that, especially when I get frustrated and something goes wrong. I love to share the I did this today. It went terribly. [laughs] Let me tell you about it, so you're aware of it in case it helps you.
And specifically, the diagrams are really nice because then other people can just see and appreciate it, or they can point something out that we didn't know. Or they'll see a different angle because they're more familiar with the system. So they can say, "Oh yeah, that totally makes sense," or "I had no idea that was happening." So that's been a really nice way to engage with the team.
And so, essentially, the little title for that strategy is just overshare. Just share all the things that you're doing and find ways to make it digestible for the team so then they can go along on this big, nebulous journey with you. And you can also put it in threads so that way, you're not flooding a channel, but then people can opt-in to that oversharing if they would like more insight into the work that you're doing.
CHRIS: Opt-in to that oversharing. [laughs]
STEPH: Exactly. I mean, it's not forced oversharing; it's just it's here if people would like it. That was a really nice compliment that some other thoughtboters received from their client team is someone had mentioned that there's so much information that's getting shared from the thoughtboters that they had trouble keeping up. And they really liked that. They really appreciated that they could then go check out this channel or these threads and see exactly the type of work that was happening and the outcomes of it. And then they could just check it maybe beginning of the day, end of the day and get that knowledge dump.
Some of the other strategies that we've used are giving ourselves mini-goals to accomplish as part of the larger, more nebulous task. So as we have this very large goal in mind, it's like, where's the small piece? Where's an entry point? What's a task or a goal that we can define? And then we want to break that down into what questions do we need to ask? How can we start moving in this direction? And we want to find something that has an answer.
So each time that we start researching once we've gotten to that point...and this is hard. I feel like people may know that, but I should just say that this is hard to take something nebulous and then find the entry point and break down some goals. And that has been one of the wonderful parts of then having a buddy for this type of project because then we can bounce ideas off of each other.
And we can also help the other person not go too deep into an area. Because I have definitely had moments where I've been very passionate about like, "We need to do this," and Joël is just like, do we? And I'm like, "Yeah." And he's like, "Do we though?" And I'm like, "I guess not. I just really, really want to." [laughs] It's been very helpful to have a partner balance some of those feelings.
And once you can break down some of that amorphous problem into those smaller goals, then you can also create tickets, which is also a really nice way to then surface the work that you're doing. You can document how you're researching, document the question. And then once you have that question of what you're in search of, it's so nice because then once you find the answer, that's immediately a good moment to pause and reflect.
So I think in a recent episode, we were chatting about this where Joël and I were trying to understand why the tests weren't being balanced properly across each process that was available. And we found the answer, and we started immediately digging into fixing it or finding solutions. And then it took us a moment to go back and say, "Actually, this ticket is really just about understanding the problem, not fixing the problem." And so that was nice; now that we understand the problem, let's go back high-level to define our next goal from this big, nebulous task because maybe fixing that balancing is the right thing to do, but maybe not, and we just need to reconsider.
So for that portion of breaking down a big, nebulous task and then identifying smaller tasks that you can achieve, time-boxing has been huge for us in regard to what's something that we can accomplish this week, or what's something we can accomplish today that will then move us forward? And then making sure that we are setting deadlines for ourselves. So normally, this is another area where it's like, huh, that's interesting. I'm not a big believer in deadlines. But I do think self-imposed deadlines are really helpful.
CHRIS: I'm intrigued to hear you say that you're not a big fan of deadlines because I assume we're actually more aligned on this. But deadlines that are arbitrary and also come with fixed scope and other immovable things, yes, those are the worst in the world. But deadlines that we set for ourselves, and then we use that as a mechanism to hone and refine the scope that we're going to get out the door by that deadline, I find those incredibly useful. And that sounds like that's the same sort of thing you have going on here is like by saying we're willing to expend this much to get a result, that defines the work going into it.
STEPH: Yeah, that's fair. Everything that you said is true, too; in regards to, I'm realizing I default that when I hear the word deadline, I'm so used to teams having deadlines that are defined by other individuals that are not part of the work. And as you said, the scope has already been defined, and it can't be changed. And it's all of the bad things that then go with it.
So when I think of deadlines, I immediately think of that type of deadline versus the more self-imposed, yes, we can revisit, yes, the team has bought in and understands why this is important. Those types of deadlines are very helpful. It's that first part that I default to that I think of immediately, and I need some reassurance that that is not the type of deadline that I'm looking at or being forced to meet.
I have a very similar feeling for estimates. Like, those both fall in the same category for me is; as soon as I hear estimation and deadline, I get nervous. And then I just need to understand the purpose of both and who is setting both of those and the communication around them. And then what does that failure mode look like, the one that we're always looking for? So yeah, deadlines and estimations fit into that. Initially, I'm very hesitant and cautious, but I think they're both very good tools.
CHRIS: Yeah, I feel like those are very closely related. And they're definitely tools that can be used for great good or for great evil. And so, ideally, we advocate for the great good usage. But more generally, I love, again, the sharing around the process and what's worked for you in this less typical or often somewhat problematic workflow. I will say, again, so I gave you the series of compliments earlier, and I stand by those compliments for you and Joël.
But I think also the sort of related aspect is that you two are both quite senior, very capable, very comfortable suggesting changes, suggesting workflows. So I think the potential dangers of isolation are still very much there. And the fact that the two of you have been able to find a way to work more effectively and perhaps change the terms of things just a little bit to make this effective is A, unsurprising but B, not something that I would expect of every team.
I think you've described a wonderful list of the specifics as to how you did that. And ideally, if folks that are perhaps a little earlier on in their career are sent out for a month with a wild project, and they're sent to do it in isolation, hopefully, they can borrow from that list. But again, I do think this is a thing that, from an organizational perspective, we should be very careful with when we're imposing this isolation on it because it takes two fantastic folks like you and Joël to break out of the shackles of it.
STEPH: The more we're talking about this, the more apparent it's also becoming that I started with this; how do you manage isolation? And my answer is you get out of it. [laughs] Get out of isolation as quickly as possible. Someone thought it was a good idea to put you there or a good idea to structure it that way. Or maybe they didn't mean it intentionally, but that's how things then shook out. So that's really what a lot of those strategies are about is, then how do I get myself out of this corner that you put me in? Because nobody put Stephanie in a corner.
So it's essentially that's all the strategies are looking for ways to say, hey, I'm isolated, but I really don't want to be, and it's dangerous for me to be isolated in this way. Even as a more senior capable developer, it's more likely that things could go wrong, and miscommunications, misaligned expectations. So I need to find ways to then bring the work that I'm doing to make it more relevant to other people on the team. So then we can have more overlap, or at least I can share a lot of the work that's being done.
CHRIS: Yeah, absolutely. I think with that wonderful summary and, frankly, utterly fantastic movie reference, what do you think? Should we wrap up?
STEPH: Let's do it. Let's wrap up.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at hosts@bikeshed.fm via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeeee!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

Jun 14, 2022 • 44min
342: Sky Icing
Another toaster strudel debate?! Plus, the results are in for the most listened-to podcast in the RoR community! :: drum roll ::
Steph has a "Dear Gerrit" message to share. Chris has a follow-up on mobile app strategy.
The Bike Shed: 328: Terrible Simplicity
When To Fetch: Remixing React Router - Ryan Florence
Virtual Event - Save Time & Money with Discovery Sprints
Become a Sponsor of The Bike Shed!
Transcript:
STEPH: thoughtbot's next virtual event "Save Time & Money with Discovery Sprints" is coming up on June 17th, from 2 - 3 PM Eastern. It's a discussion with team members from product management, design, and development. From a developer perspective, topics will include how to plan a product's architecture, both the MVP and future version, how to lead tech spikes into integrations, and how to conduct build vs. buy reviews of third-party providers. Head to thoughtbot.com/events to register; the event is June 17th, 2 - 3 PM ET. Even if you can't make it, registering will get you on the list for the recording.
CHRIS: We're the second-best. We're the second-best. Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world?
STEPH: I'm very happy to report that I picked up a treat from the store recently. So while I was in Boston and we were hanging out in person, we talked about Pop-Tarts because that always comes up as a debate, as it should. And then also Toaster Strudels came up, so I now have a package of Toaster Strudels, and those are legit. Pop-Tart or Toaster Strudel, I am team Toaster Strudel, which I know you're going to ask me about icing and if I put it on there, so go ahead. I'm going to pause. [laughs]
CHRIS: It sounds like I don't even need to say anything. But yes, inquiring minds want to know.
STEPH: I think that's also my very defensive response because yes, I put icing on my Toaster Strudel.
CHRIS: How interesting. [laughs]
STEPH: But it feels like a whole different class of pastry. So I'm very defensive about my stance on Pop-Tarts with no icing but Strudel with icing.
CHRIS: A whole different class of pastry. Got it. Noted. Understood. So did you travel? Like, were these in your luggage that you flew back with?
STEPH: [laughs] Oh no. They would be all gooey and melty. No, we bought them when we got back to North Carolina. Oh, that'd be a pro move; just pack little individual Strudels as your airplane snack. Ooh, I might start doing that now. That sounds like a great airplane snack.
CHRIS: You got to be careful though if the icing, you know, if it's pressurized from ground level and then you get up there, and it explodes. And you gotta be careful. Or is it the reverse? It's lower pressure up in the plane. So it might explode.
STEPH: [laughs] Either way, it might explode.
CHRIS: Well, yeah. If you somehow buy a packet of icing that is sky icing that is at that pressure, and you bring it down, then...but if you take it up and down, I think it's fine. If you open it at the top, you might be in danger. If you open icing under the ocean, I think nothing's going to happen. So these are the ranges that we're playing with.
STEPH: I will be very careful sky icing and probably pack two so that way I have a backup just in case. So if one explodes, we'll be like, all right, now I know what I'm working with and be more prepared for the next one.
CHRIS: That's just smart.
STEPH: I try to make smart travel decisions, Toaster Strudels on the go. Aside from travel treats and sky icing, I have some news regarding Planet Argon, which is a Ruby on Rails consultancy that has published this year's Ruby on Rails community survey results. And so they list a lot of fabulous different topics in there. And one of them includes a learning section that highlights the most listened-to podcasts in the Ruby on Rails community as well as blogs and some other resources. And Bike Shed is listed as the second most listened-to podcast in the Ruby on Rails community, so whoo, golf clap.
CHRIS: Fantastic.
STEPH: And in addition to that, the thoughtbot blog got a really nice shout-out. So the thoughtbot blog is in the number two spot for the most visited blogs in the community. In the first spot is Ruby Weekly, which is like, you know, okay, that feels fair, that feels good. So it's really exciting for the thoughtbot blog because a lot of people work really hard on curating and creating that content. So that's wonderful that so many people are enjoying it.
And then I should also highlight that for the podcast in first place is Remote Ruby, so congrats to Chris, Jason, and Andrew for grabbing that number one spot. And Brittany Martin, host of the Ruby on Rails Podcast, along with Brian Mariani, Jemma Issroff, and Nick Schwaderer, are in the number three spot. And some people say that Ruby is losing steam but look at all that content and all those highly ranked podcasts. I mean, we like Ruby so much we're spending time recording ourselves talking about it. So I say long live Ruby, long live Rails.
CHRIS: Yes. Long live Ruby indeed. And yeah, it's definitely an honor to be on the list and to be amongst such other wonderful shows. Certainly big fans of the work of those other podcasts. We even did a joint adventure with them at one point, and that was a really wonderful experience, so yeah, honored to be on the list alongside them. And to have folks out there in the world listening to our tech talk and nonsense always nice to hear.
STEPH: Yeah. You and I show up and say lots of silly things and technical things into the podcast. The true heroes are the ones that went and voted. So thank you to everybody who voted. That's greatly appreciated. It's really nice feedback. Because we get listener responses and questions, and those are wonderful because it lets us know that people are listening. But I have to say that having the survey results is also really nice. It lets us know people like the show.
Oh, but I did go back and look at some of the previous stats because then I was like, huh, so I'm paying attention. I looked at this year's, and I was like, I wonder what last year’s was or the year before that. And I think this survey comes out every two years because I didn't see one for 2021. But I did find the survey results for 2020, which we were in the number one spot for 2020, and Remote Ruby was in the second spot.
So I feel like now we've got a really nice, healthy podcasting war situation going on to see who can grab the first spot. We've got two years, everybody, to see who [laughs] grabs the number one spot. That's a lot of prep time for a competition.
CHRIS: Yeah, I feel like we should be like, I don't know, planning elaborate pranks on them or something like that now. Is that where this is at? It's something like that, I think.
STEPH: I think so. I think this is where you put like sky frosting inside someone's suitcase, and that's the type of prank that you play. [laughs]
CHRIS: The best of pranks.
STEPH: We'll definitely put together a little task force. And we'll start thinking of pranks that we all need to start playing on each other for the podcasting wars that we're entering for the next few years. But anywho, what's going on in your world?
CHRIS: Let's see, what's going on in my world? A fun thing happened recently. I had a chance to reflect back on some architectural choices that we've made in the Sagewell platform. And one of those specific choices is how we've approached building our native mobile apps. We made what some listeners may remember as an interesting set of choices. In particular, in Episode 328, which we'll include a link to in the show notes, I shared with you the approach that we're taking, which is basically like, Inertia is great, the web is great. We like the web as a platform. What if we were to wrap it in a native shell and find this interesting and somewhat unique hybrid trade-off point?
And so, at that point, we were building it. We had most of it built out, and things were going quite well. I think we maybe had the iOS app in the store and the Android app approaching the store or something like that. At this point, both apps have been released to the store, so they are live. Production users are signing in. It's wonderful. But I had a moment in the past couple of weeks to reassess or look at that set of choices and evaluate it. And thankfully, I'm happy with the choices that we've made. So that's good.
But to get into the specifics, there were two things that happened that really, really framed the choice that we made, so one was we introduced a major new feature. We basically overhauled the first-run experience, the onboarding that users experience, and added a new, pretty fundamental facet to the platform. It's a bunch of new screens, and flows, and error states, and all of this complexity. And in the process, we iterated on it a bunch. Like, first, it looked like this, and then we changed the order of the screens and switched out the error messages, and et cetera, et cetera.
And I'll be honest, we never even thought about the mobile apps. It just wasn't even a consideration. And interestingly, we did as a final check before going fully live and releasing this out to the full production audience; we did spot check it in the mobile apps, and it didn't work. But it didn't work for a very specific, boring, technical reason that we were able to resolve. It has to do with iframes and WebViews and embedded something, something. And we had to set a flag. Thankfully, it was solvable without a deploy of the native mobile apps. And otherwise, we never thought about the native apps.
Specifically, we were able to add this fundamental set of features to our platform. And they just worked in native mobile. And they were the same as they roughly are if you're on a mobile WebView or if you're on a desktop web, you know, slightly different in terms of form factor. But the functionality was all the same. And critically, the error states and the edge cases and the flow, there's so much to think about when you're adding a nontrivial feature to an app. And the fact that we didn't have to consider it really spoke to the choice that we made here.
And again, to name it, the choice that we made is we're basically just reusing the same WebViews, the same Rails controllers, and the same what are Svelte components under the hood but the same essentially view layer as well. And we are wrapping that in a native iOS. It's a Swift application shell, and on Android, it's a Kotlin application shell. But under the hood, it's the same web stuff. And that was really great.
We just got these new features. And you know what? If we have to rip that whole set of functionality out, again, we won't need to deploy. We won't need to rethink it. Or, if we want to subtly tweak it, we can do that. If we want to think about feature flags or analytics, or error states or error reporting, all of this just naturally falls out of the approach that we took. And that was really wonderful.
STEPH: That's super nice. I also love this saga of like, you made a choice, and then you're coming back to revisit and share how it's going. So as someone who's never done this before, in regards of wrapping an application in the manner that you have and then publishing it and distributing it that way, what does that process look like? Is this one of those like you run a command, and literally, it's going to wrap the application and then make it hostable on the different mobile app stores? Or what's that? Am I oversimplifying the process? What does that look like?
CHRIS: I think there are a lot of platforms or frameworks I think would probably be the better word like Capacitor is something that comes to mind or Ionic or Expo. There are a handful of them that are a little more fully featured in what they provide. So you just point us at your React Views and whatnot, and we'll wrap that up, and it'll be great. But those are for, I may be overgeneralizing here, but my understanding is those are for more heavy client-side bundles that are talking to a common API. And so you're basically taking your same rich client-side application and bundling that up for reuse on the native app, the native app platforms.
And so I think those do have some release to the store sort of thing. In our case, we went a little bit further with that integration wrapper thing that we built. So that is a thing that we maintain. We have a Sagewell iOS repo and a Sagewell Android repo. There's a bunch of Swift and Kotlin code, respectively, in each of them, and we deploy to the stores manually. We're doing that whole process. But critically, the code that is in each of those repositories is just the bridge glue code that says, oh, when this Inertia navigation event happens, I'm going to push a WebView to the navigation stack. And that's what that is.
I'm going to render the tab bar of buttons at the bottom with the navigation elements that I get from the server. But it's very much server-driven UI, is the way that I would describe it. And it's wrapping WebViews versus actually having the whole client bundle wrapped up in the thing. It's unfortunately subtle to try and talk through on the radio, but yeah. [laughs]
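To give a feel for what I mean by server-driven UI there, a sketch of the idea in Rails might look like the following. None of these endpoint names, fields, or icon identifiers are our real bridge contract; it's just the shape of the thing: the server owns the list of tabs, and the native shells render whatever comes down.

  # Hypothetical endpoint the native shells (or the Inertia page props) could
  # read to know which tabs to draw; every name here is invented for illustration.
  class Native::NavigationsController < ApplicationController
    def show
      render json: {
        tabs: [
          { id: "home",     title: "Home",     icon: "house",    path: "/" },
          { id: "accounts", title: "Accounts", icon: "banknote", path: "/accounts" },
          { id: "help",     title: "Help",     icon: "question", path: "/help" }
        ]
      }
    end
  end

The nice part of keeping that on the server is that reordering a tab or renaming one is just a web deploy, not an app store release.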
STEPH: You're doing great; this is helping. So if there's a change that you want to make, you go to the Rails application, and you make that change. And then do you need to update anything on that iOS repo? It sounds like you don't, which then you don't have to push a new update to the store.
CHRIS: Correct. For the vast majority of things, we do not need to make any changes. It's very rare for us to deploy the iOS or the Android app or, a different way to put it, to push new releases to the store. It happens that we may want to add a new feature to the sort of bridge layer that we built, but increasingly, those are rare. And now it's basically like, yeah, we're just wrapping those WebViews, and it's going great. And again, to name it, it's a trade-off. It's an intentional trade-off that we've made.
We're never going to have the richest, most deep platform integration, smooth experience. We are making a small trade-off on that front. But given where we're at as an organization, given how early we are, how much iteration and change, we chose an architecture that optimizes for that change. And so again, like what you just said, yeah, I can...you know how it's really nice to be able to deploy six times a day on a web app, and that's a very straightforward thing to do? It is not so straightforward in the native mobile world. And so, we now have afforded ourselves the ability to do that.
But critically, and this is the fun part in my mind, have the trade-offs in the controls. So if we were just like, it's just a WebView, and that's it, and we put it in the stores, and we're done, that is too far of an extreme in my mind. I think the performance trade-offs, the experience trade-offs, it wouldn't feel like a native app like in a deep way, in a problematic way. And so as an example, we have a navigation bar at the top of our app, particularly on iOS, that is native iOS navigation. And we have a tab bar at the bottom, which is native tab UI element. I forget actually what it's called, but it's those elements.
And we hide the web application navigation when we're in the mobile context. So we actually swap those out and say, like, let's actually promote these to formal native functionality. We also, within our UI on the web, have a persistent button in the top right corner of your screen that says, "Need help? Reach out to your retirement advocate." who is the person that you get to work with. You can send questions, et cetera, et cetera. It's this little help sidebar drawer thing that pops out. And we have that as a persistent HTML button in the top corner of the web frame.
But when we're on native, we push that up as a distinct element in the native UI section. And then again, the bridge that I'm talking about allows for bi-directional communication between the JavaScript side and the native side or the native side and the JavaScript side. And so it's those sorts of pieces that have now afforded us all of the freedom to tinker, and we don't need to re-release where we're like, oh, we want to add a new weird button that does a thing in the WebView when you click on a button outside the WebView. We now just have that built-in.
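And if you're wondering how the web side knows it's inside the native shell so it can hide its own navigation, one simple way to do it (and this is a sketch of the general technique with a made-up user agent marker, not necessarily exactly what we do) is something like:

  # Hypothetical helper: assume the native shells append a marker such as
  # "SagewellNative" to their WebView user agent; the exact mechanism is a guess.
  module NativeAppHelper
    def native_app_request?
      request.user_agent.to_s.include?("SagewellNative")
    end
  end

  # Then in a layout, skip the HTML chrome that the shell promotes to native UI:
  # <% unless native_app_request? %>
  #   <%= render "shared/navbar" %>
  #   <%= render "shared/help_button" %>
  # <% end %>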
STEPH: Yeah, I really like the flexibility that you're describing. When you promoted those elements to be more native-friendly so, like the navigation or the footer or the little get help chat, is that something that then your team implemented in like the iOS or the Kotlin repo? Okay, I see you nodding, but other people can't see that, so...[laughs]
CHRIS: Yeah. I was going to also say the words, but yes, those are now implemented as native parts. So the thing that we built isn't purely agnostic decoupled. It is Sagewell-specific; a lot of it is low-level. Like, let's say we want to wrap an Inertia app in a native mobile wrapper. Like, 90% of the code in it is that, but then there are little bits that are like, and put a button up there. And that button is the Sagewell button. And so it's not entirely decoupled from us. But it mostly is this agnostic bridge to connect things together.
STEPH: Yeah, the way you're describing it sounds really nice in terms of you're able to get the app out quickly and have a mobile app quickly that works on both platforms, and then you're still able to deploy changes without having to push that. That was always my biggest mental or emotional hurdle with the idea of mobile development: the concept that you really had to batch everything together and then submit it for review and approval and then get it released.
And then you got to hope people then upgrade and get the newest version. And it just felt like such a process, not that I ever did much of it. This was all just even watching like the mobile team and all the work that they had to do. And I had sympathy pains for them. But the fact that this approach allows you to avoid a lot of that but still have some nice, customized, more native elements. Yeah, I'm basically just recapping everything you said because I like all of it.
CHRIS: Well, thank you, friend. Like I said, I've really enjoyed it, and similar to you, I'm addicted to the feedback loop of the web. It's beautiful. I can deploy ten times or however many I want. Anytime I want, I can push out a new version. And that ability to iterate, to test, to explore, to tweak, to not have to do as much formal testing upfront because I'm terrified that if a bug sneaks out, then, it'll take me two weeks to address it; it just is so, so freeing. And so to give that up moving into a native context.
Perhaps I'm fighting too hard to hold on to my dream of the ability to rapidly iterate. But I really do believe in that and especially for where we're at as an organization right now. But, and a critical but here, again, it's a trade-off like anything else. And recently, I happened to be out about in the town, and I decided, oh, you know what? Let me open up the app. Let me see what it's like. And I wasn't on great internet. And so I open the app, and it loads because, you know, it's a native app, so it pops up.
But then the thing that actually happened is a loading spinner in the middle of the screen and sort of a gray nothing for a little while until the server request to fetch the necessary UI elements to render the login screen appeared. And that experience was not great. In particular, that experience is core to the experience of using the app every single time. Every time you use it, you're going to have a bad time because we're re-downloading that UI element. And there's caching, and there's things that could happen there to help with that. But fundamentally, that experience is going to be a pretty common one. It's the first thing that you experience when you're opening the app.
And so I noticed that and I chatted with the team, and I was like, hey, I feel like this is actually something that fixing this I think would really fundamentally move us along that spectrum of like, we've definitely made some trade-offs here. But overall, it feels snappy and like a native app. And so, we opted to prioritize work on a native login screen for both platforms. This also allows us to more deeply integrate. So particularly, we're going to get biometric logins like fingerprints or face scans, or whatever it is.
But critically, it's that experience of like, I open the Sagewell native app on my iOS phone, and then it loads immediately. And then I show it my face like we do these days, and then it opens up and shows me everything that I want to see inside of it. And it's that first-run experience that feels worth the extra effort and the constraints. Because now that it's native mobile, that means in order to change it, we have to do a deploy, not a deploy, release; that's what they call it in the native world. [laughs] You can tell I'm well-versed in this ecosystem.
But yeah, we're now choosing that trade-off. And what I really liked about this sort of set of things like the feature that we were able to just accidentally get for free on native because that's how this thing is built. And then likewise, the choice to opt into a fully native login screen like having that lever, having that control over I'm going to optimize for iteration generally, but where it's important, we want to optimize for performance and experience. And now we have this little slider that we can go back and forth.
And frankly, we could choose to screen by screen just slowly replace everything in the app with true native WebViews backed by APIs. And we could Ship of Theseus style replace every element of the app with true native mobile things until none of the old bridge code exists. And our users, in theory, would never know. Having that flexibility is really nice given the trade-off and the choice that we've made.
STEPH: You said a word there that I missed. You said ship something style.
CHRIS: Ship of Theseus.
STEPH: What is that?
CHRIS: It's like an old biblical story, I want to say, but it's basically the idea of, like, you have the ship. And then some boards start to rot out, so replace those boards. And then the mast breaks, you replace the mast. And slowly, you've replaced every element on the ship. Is it still the same ship at that point? And so it's sort of a philosophical question. So if we replace every single view in this app with a native view, is it still the same map? Philosophers will philosophize about it forever, but whatever. As long as we get to keep iterating and shipping software, then I'm happy.
STEPH: [laughs] Y’all philosophize. That's that word, right?
CHRIS: Yeah.
STEPH: And do your philosopher thing. We'll just keep building and shipping.
CHRIS: I don't know if I pronounced it right. It's like either Theseus or Theseus, and I'm sure I said the wrong one. And now that I've said the other, I'm sure both of them are wrong somehow. It's like a USB where there's up and down, and yet somehow it takes three tries. So anyway, I may have mispronounced it, and I may be misattributing it, but that's the idea I was going for.
STEPH: Well, given I wasn't even familiar with the word until just now, I'm going to give both pronunciations a thumbs up. I also really like how you decided that for the login screen, that's the area that you don't want people to wait because I agree if you're opening an application or opening...maybe it's the first time, maybe it's the 100th time. Who knows? But that feels important. Like, that needs to be snappy. I need to know it's responsive. And it builds trust from the minute that I clicked on that application. And if it takes a long time, I just immediately I'm like, what are y'all doing? Are y'all real? Do you know what you're doing over there?
So I like how you focused on that experience. But then once I log in, like if something is slow to log me in, I will make up excuses for the application all day where I'm like, well, you know, maybe it's my connection. It's fine. I can wait for the next screen to load. That feels more reasonable. And it doesn't undermine my trust nearly as much as when I first click on the app. So that feels like a really nice trade-off as well, or at least a nice area that you've improved while still having those other trade-offs and benefits that you mentioned.
CHRIS: To highlight it, you used a phrase there which I really liked. Like, it's building trust. If something's a little bit off in that first run experience every single time, then it kind of puts a question in the back of your head, maybe not even consciously. But you're just kind of looking at it, and you're like, what are you doing there? What are you up to, friend? Humans say to the apps they use on their phone. That's normal, right? When you talk...
But to name it, we've also done a round of performance work throughout the app. And so there are a couple of layers to it. But it was work that we had planned for a while, but we kept deferring. But now that we're seeing more usage of the native apps, the native apps experience the same surface area of performance stuff but all the more so because they may be on degraded network connections, et cetera.
And so this is another example where this whole thing kind of pays off. The performance work that we did affects everything. It affects the web. It's the same under the hood. It's let's reduce the network requests that we're making in the payloads that we're sending, particularly the network requests to upstream things, so like the banking partner that we're using and those APIs, like, collating all the data to then render the screen.
Because of Inertia, we only have a single sort of back and forth conversation via the API as opposed to I think it's pretty common to have like seven different APIs and four different spinners on the screen. We're not doing that, none of that on my watch. [chuckles] But we minimize the background calls to the other parties that we're integrating with. And then, we reduce the payload of data that we're sending on each request.
And each of those were like, we had to think about things and tweak and poke, but again it's uniform. So mobile web has that now, desktop web has that now. Android, iOS, they all just inherited it sort of that just happened one day without a deploy or release, without a release of either of the native mobile apps. We did deploy to the web to make that happen, but that's easy. I can do that a bunch of times a day.
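As a rough illustration of that payload work, and this is made up rather than lifted from our codebase, the shape of it is: collate the upstream calls once, cache the result briefly, and hand the screen only the fields it actually renders.

  class AccountSummaryQuery
    CACHE_TTL = 5.minutes

    def initialize(user)
      @user = user
    end

    # One collated call instead of several round trips to the banking partner,
    # cached briefly so repeat page loads don't hit the upstream API again.
    # BankingPartner::Client is a stand-in name, not a real gem.
    def call
      Rails.cache.fetch(["account-summary", @user.id], expires_in: CACHE_TTL) do
        accounts = BankingPartner::Client.new.accounts_for(@user)

        # Send down only what the screen renders, not the full upstream payload.
        accounts.map do |account|
          { id: account.id, name: account.name, balance_cents: account.balance_cents }
        end
      end
    end
  end

And because the Inertia controller is the single place assembling that response, trimming it there trims it everywhere: desktop web, mobile web, and both native apps at once.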
One last thing I want to share as we're on this topic of trade-offs and levers, there was a really great conference talk that I watched recently, which was Ryan Florence of remix.run also React Router fame if you're familiar with him from that. But he was talking about the most recent version of Remix, which is their meta framework on top of React.
But they've done some really interesting stuff around processing data, fetching data, when and how to sequence that. And again, that thing that I talked about of nine different loading spinners on the screen, Remix is taking a very different approach but is targeting that same thing of like, that's not great for user experience. Cumulative layout shift being the actual number that you can monitor for this.
But in that talk, there are features that they've added to Remix as a framework where you can just decide, like, do we wait for this or do we not? Do we make sure we have all of the data, or do we say, you know what? Actually, this is going to be below the fold. So it's okay to defer loading this until after we send down the first payload. And then we'll kick in, and we'll do it from the client-side. But it's this wonderful feature of the framework that they're adding in where there's basically just a keyword that you can add to sort of toggle that behavior.
And again, it's this idea of like trade-offs. Are we okay with more layout shift, or are we okay with more waiting? Which is it that we're going to optimize for? And I really love that idea of putting that power very simply in the hands of the developers to make those trade-off decisions and optimize over time for what's important. So we'll share a link to that talk in the show notes as well.
But it was very much in the same space of like, how do I have the power to decide and to change my mind over time? That's what I want. But yeah, with that, I think that's enough of me updating on the mobile app. I'll continue to share as new things happen. But again, I'm at this point very happy with where we're at. So yeah, it's been fun. But yeah, what else is up in your world?
STEPH: I have a dear Gerrit message that I wrote earlier, so I want to share that with you. Gerrit is the system that we're using for when we push up code changes; it then manages those reviews, very similar to and in the competitive space of GitHub, GitLab, and Bitbucket. And so the team that I'm working with, we are using Gerrit. And Gerrit and I, you know, we get along for the most part. We've managed to have a working relationship. [chuckles] But this week, I wrote my dear Gerrit letter, and it's that I really miss being able to tell a story with my commit messages. That is the biggest pain that I'm feeling right now.
So for anyone that's less familiar or if you already are familiar with Gerrit, each change that Gerrit shows represents a single commit that's under review. And each change is identified by a Change-Id. So the basic concept of Gerrit is that you only have one commit per review. So if you were to translate that to GitHub terminology, every pull request is only going to have one commit, and so you really can't push up multiple.
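For anyone who hasn't seen one, the Change-Id is just a trailer that Gerrit's commit-msg hook appends to your commit message, so a made-up commit would end up looking something like this:

  Remove unused CSV export helper

  This hasn't been referenced since the reporting rewrite.

  Change-Id: I8f3c2a915d4b7e6f0a1c9d8e7b6a5f4e3d2c1b0a

Gerrit keys the review off that trailer, which is why amending and re-pushing the same commit updates the existing review as a new patch set instead of opening a brand new one.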
And so, where that has been causing me the most pain is I miss being able to tell a story. So like even simple stories that are like, hey, I removed something that's not used. I love separating that type of stuff into its own commit just so then people can see that as they're going through review. Now, before I merge, I'm likely to squash, and that doesn't feel important that it needs to be its own commit. That's really just for the reviewer so they can follow along for the changes.
But the other one, I can slowly get over that one. Because essentially, the way I get around that is then when I do push up my code for review, is I then go through my change request, and then I just add comments. So I will highlight that line and say, "Hey, I'm removing this because it's not in use." And so, I found a workaround for that one.
But the one I haven't found a workaround for is that I don't push up my local work very often because I love having lots of local, tiny, green commits so that way I can know the progress that I'm at. I know where I'm headed. Also, I have a safe space to roll back to, but then that means that I may have five or six commits that I have locally, but I haven't pushed up somewhere. And that is bothering me more and more hour by hour the more I think about it that I can't push stuff up because it makes me nervous.
Because, I mean, usually, at least by the end of the day, I push everything up, so it's stored somewhere. And I don't have to worry about that work disappearing. Now I am working on a dev machine. So there is that aspect of it's technically...it's not even on my local machine. It is stored somewhere that I should still be able to access.
CHRIS: What's a dev machine? The way you're saying it, it sounds like it's a virtual machine, not like a laptop. But what's a dev machine?
STEPH: Good question. So the dev machine is a remote server or remote machine that then I am accessing, and then that's where I'm performing. That's where I'm writing all of my work. And then that's also kind of the benefit is everything is not local; it's controlled by the team. So then that also means that other teams, other individuals can help set up these environments for future developers.
So then you have that consistency across everyone's working with the same Rails version, or gems, or has access to the same tools. So in that sense, my work isn't just on my laptop because then that would really worry me because then I've got nowhere...it's not backed up anywhere. So at least it is somewhere it's being stored that then could be accessed by someone.
So actually, now, as I'm talking this through, that does help alleviate my concern about this a bit. [laughs] But I still miss it; I still miss being able to just push up my work and then have multiple commits. And I looked into it because I was like, well, maybe I'm misunderstanding something about Gerrit, and there's a way around this. And that's still always a chance. But from the research that I've done, it doesn't seem to be. And there are actually two very fiery takes that I saw that I have to share because they made me laugh.
When I was Googling, the question was like, "Can I push up multiple commits to one single Gerrit CR? Or is there just a way to, like, can I have this concept of like a branch and then I have many commits, but then I turn it into one CR?" Whatever the world would give me. What do they have? [laughs] I'm laughing just looking at this now. One of the responses was, have you tried squashing your commits into one commit? And I was like, [laughs] "Yeah, that's not what I had in mind, but sure."
And then the other one, this is the more fiery take. They were very defensive about Gerrit, and they wrote that "People who don't like Gerrit usually just hack shit together. They cut corners and love squashing commits or throwing away history. And those people hate Gerrit. Developers who care love it. It's definitely possible and easy to produce agile software." And I just...that made me laugh. I was like, cool, I'm a developer that cuts corners and loves squashing commits. [laughs]
CHRIS: So you don't care is what that take says.
STEPH: I'm a developer who does not care.
CHRIS: You know, Steph, I've worked with you for a while. And I've been looking for the opportunity to have this hard conversation with you. But I just wish you cared a little more about the software that you're writing, about the people that you're working with, about the commits that you're authoring. I just see it in every facet of your work. You just don't care. To be very clear for anyone listening at home, that is the deepest of sarcasm that I can make. Steph cares so very much. It's one of the things that I really enjoy about you.
STEPH: I mean, we had the episode about toxic traits. This would have been the perfect time to confront me about my lack of caring about software and the processes that we have. So winding down on that saga, it seems to be the answer is no, friend; I cannot push up multiple commits. Oh, I tried to hack it. I am someone that tries to hack shit together because I tried to get around it just to see what would happen. [laughs] Because the docs had suggested that each change is identified by a Change-Id. And I was like, hmm, so what if there were two commits that had the same Change-Id, would Gerrit treat those as patch sets?
Because right now, when you push up a change, you can see all the different patch sets, so that's nice. So that's a nice feature of Gerrit as you can see the history of, like, someone pushed up this change. They took in some feedback. They pushed up a new change. And so that history is there for each push that someone has provided. And I wondered maybe if they had the same Change-Id that then the patch sets would show the first commit and then the second commit. And so I manually altered the commits two of them to reference the same Change-Id.
And I have to say, Gerrit was on to me because they gave me a very nice error message that said, "Same Change-Id in multiple changes. Squash the commits with the same Change-Ids or ensure Change-Ids are unique for each commit." And I thought, dang, Gerrit, you saw me coming. [laughs] So that didn't work either. I'm still in a world of where I now wait. I wait until I'm ready for someone to review stuff, and I have to squash everything, and then I go comment on my CRs to help out reviewers.
CHRIS: I really like the emotional backdrop that you provided here where you're spending a minute; you're like, you know what? Maybe it's me. And there's the classic Seymour Skinner principle from The Simpsons. Am I out of touch? No, it's the children who are wrong. [laughs] And I liked that you took us on a whole tour of that. You're like, maybe it's me. I’ll maybe read up. Nope, nope. So yeah, that's rough.
There's a really interesting thing of tools constraining you. And then sometimes being like, I'm just going to yield control and back away and accept this thing that doesn't feel right to me. Like, Prettier does a bunch of stuff that I really don't like. It shapes code in a way, and I'm just like, no, that's not...nope, you know what? I've chosen to never care about this again. And there's so much utility in that choice. And so I've had that work out really well. Like with Prettier, that's a great example whereby yielding control over to this tool and just saying, you know what? Whatever you produce, that is our format; I don't care. And we're not going to talk about it, and that's that.
That's been really useful for myself and for the teams that I'm on to just all kind of adopt that mindset and be like, yeah, no, it may not be what I would choose but whatever. And then we have nice formatted code; it's great. It happens automatically, love it. But then there are those times where I'm like; I tried to do that because I've had success with that mindset of being like, I know my natural thing is to try and micromanage and control every little bit of this code.
But remember that time where it worked out really well for me to be like, I don't care, I'm just going to not care about this thing? And I try to not care about some stuff, which it sounds like that's what you're doing right here. [laughs] And you're like, I tried to not care, but I care. I care so much. And now you're in that [chuckles] complicated space. So I feel for you, Steph. I'm sorry you're in that complicated space of caring so much and not being able to turn that off [laughs] nor configure the software to do the thing you want.
STEPH: I appreciate it. I should also share that the team that I'm working with they also don't love this. Like, they don't love Gerrit. So when I shared in the Slack channel my dear Gerrit message, they're both like, "Yeah, we feel you. [laughs] Like, we're in the same spot," which was also helpful because I just wanted to validate like, this is the pain I'm feeling. Is someone else doing something clever or different that I just don't know about? And so that was very helpful for them to say, "Nope, we feel you. We're in the same spot. And this is just the state that we're in."
I think they have started transitioning some other repos over to GitLab and have several repos in GitLab, but this one is still currently using Gerrit. So they very much commiserate with some of the things that I'm feeling and understand. And this does feel like one of those areas where I do care deeply.
And frankly, this is one of those spaces that I do care about, but it's also like, I can work around it. There are some reasonable things that I can do, and it's fine as we just talked through. Like, the fact that my commits are not just locally on my machine already makes me feel better now that I've really processed that. So there are lower risks. It is more of just like a workflow. It's just, you know, it's crushing my work vibe.
CHRIS: Harshing your buzz.
STEPH: In the great words of Queen Elsa, I gotta let it go. This is the thing I'm letting go. So that's kind of what's going on in my world. What else is going on in your world?
CHRIS: Well, first and foremost, fantastic reference and segue. I really liked that. But yeah, let's see, [laughs] what else is going on in my world? We had an interesting thing happen last week. So we had an outage on the platform last week. And then we had an incident review today, so a formal sort of post-mortem incident review. There are a couple of different names that folks have given to these. But this is a practice that we want to build within our engineering culture is when stuff goes wrong, we want to make sure that we have meaningful conversations around to try to address the root causes.
Ideally, blameless is a word that gets used often in this context. And I've heard folks sort of take either side of that. Like, it's critical that it's blameless so that it doesn't feel like it's an attack. But also, like, I don't know, if one person did something, we should say that. So finding that gentle middle ground of having honest, real conversations but in a context of safety. Like, we're all going to make mistakes. We're all going to ship bugs; let's be clear about that. And so it's okay to sort of...anyway, that's about the process.
We had an outage. The specific outage was that we have introduced a new process. This is a Sidekiq process to work off a specific queue. So we wanted that to have discrete treatment. That had been running, and then it stopped running; we still don't know why. So we never got to the root-root cause. Well, we know what the mechanism was, which was the dyno count for that process was at zero. And so, eventually, we found a bunch of jobs backed up in the Sidekiq admin. We're like, that's weird.
And then, we went over to Heroku's configuration dashboard. And we saw, huh, that's weird. There are zero dynos processing this. That wasn't true yesterday. But unfortunately, Heroku doesn't log or have an audit trail around changes to those process counts. It's just not available. So that's unfortunate. And then the actual question of like, how did this happen? It probably had to be someone on the team. So there is like, someone did a thing. But that is almost immaterial because, again, people are going to do things, bugs will get shipped, et cetera.
So the conversation very quickly turned to observability and understanding. I think we've done a pretty good job of instrumenting error reporting and being quite responsive to that, making sure the signal-to-noise ratio is very actionable. So if we see a bug or a Sentry alert come through, we're able to triage that pretty quickly, act on it where it is a real bug, understand where it's a bit of noise in the system, that sort of thing. But in this case, there were no errors. There was no Sentry. There was nothing; there was the absence of something.
And so it was this really interesting case of that's where observability, I think, can really come in and help. So the idea of what can we do here? Well, we can monitor the count of jobs backed up in Sidekiq queue. That's one option. We could do some threshold alerting around the throughput of processed events coming from this other backend. There are a bunch of different ways, but it basically pushed us in the direction of doubling down and reinforcing the foundation of our observability within the platform.
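As a rough sketch of the kind of check being described here, assuming Sidekiq's built-in API and the dogstatsd-ruby client (the queue names and metric names are invented for illustration):

```ruby
# Run periodically (cron, clock process, etc.) so Datadog can alert when a
# queue quietly stops being worked off, even though nothing is erroring.
require "sidekiq/api"
require "datadog/statsd"

statsd = Datadog::Statsd.new("localhost", 8125)

%w[default payments].each do |name|
  queue = Sidekiq::Queue.new(name)
  statsd.gauge("sidekiq.queue.size", queue.size, tags: ["queue:#{name}"])
  statsd.gauge("sidekiq.queue.latency", queue.latency, tags: ["queue:#{name}"])
end
# Newer dogstatsd-ruby versions buffer metrics, so a short-lived script may
# need an explicit flush before exiting.
```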
So we're just kicking that mini-project off now, but it is something we're like, yeah, we feel like we could add some here. In particular, we recently added Datadog to the stack. So we now have Datadog to aggregate our logs and ideally do some metric analysis, those sort of things, build some dashboards, et cetera. I haven't explored Datadog much thus far. But my sense is they've got the whiz-bang things that we need here. But yeah, it was an interesting outage. That wasn't fun. The incident conversation was actually a good conversation as a team. And then the outcome of like, how do we double down on observability? I'm actually quite excited for.
STEPH: This is a fun moment for me because I have either joined teams that didn't have Datadog or have any of that sort of observability built into their system or that sort of dashboard that people go to. Or I've joined teams, and they already have it, and then nobody or people rarely look at it. And so I'm always intrigued between like what's that catalyst that then sparked a team to then go ahead and add this? And so I'm excited to hear you're in that moment of like, we need more observability. How do we go about this?
And as soon as you said Datadog, I was like, yeah, that sounds nice because then it sounds like a place that you can check on to make sure that everything is still running. But then there's still also that manual process where I'm presuming unless there's something else you have in mind. There's still that manual process of someone has to check the dashboard; someone then has to understand if there's no count, no squiggly lines, that's a bad thing and to raise a concern.
So I'm intrigued with my own initial reaction of, like, yeah, that sounds great. But now I'm also thinking about it still adds a lot of...the responsibility is still on a human to think of this thing and to go check it. Versus if there's something that gets sent to someone to alert you and say like, "Hey, this queue hasn't been processed in 48 hours. There may be a concern." That actually feels nicer. It feels safer.
CHRIS: Oh yeah, definitely. I think observability is this category of tools and workflows and whatnot. But I think what you're describing of proactive alerting that's the ideal. And so it would be wonderful if I never had to look at any of these tools ever. And I just knew if I got, let's say, it's PagerDuty connected up whatever, and I got a push notification from PagerDuty saying, "Hey, go look at this thing." That's all I ever need to think about. It’s like, well, I haven't gotten a PagerDuty in a while, so everything must be fine, and having a deep trust in that.
Similar to like, if we have a great test suite and it's green, I feel confident deploying the sort of absence of an alert being the thing that I can trust. But right now, we're early enough in this journey that I think what we need to do is stand up a bunch of these different graphs and charts and metric analysis and aggregations and whatnot, and then start to squint at it for a while and be like, which of these would I be really concerned if it started to wibble?
And then you can configure the alerting around said wibble rate. And that's the dream. That's where we want to get to, but I think we've got to crawl, walk, run on this. So it'll be an adventure. This is very much the like; we're starting a thing. I'll tell you about it more when we've done it. But what you're describing is exactly what we want to get to.
STEPH: I love wibble rate. That's my new measurement I'm going to start using for everything. It's funny, as you're bringing this up, it's making me think about the past week that Joël Quenneville and I have had with our client work. Because a somewhat similar situation came up in regards where something happened, and something was broken. And it seemed it was hard to define exactly what moment caused that to break and what was going on.
But it had a big impact on the team because it essentially meant none of the bills were going through. And so that's a big situation when you got 100-plus people that are pushing up code and expecting some of the build processes to run. But it was one of those that the more we dug into it, the more it seemed very rare that it would happen.
So, in this case, as a sort of a juxtaposition to your scenario, we actually took the opposite approach of where we're like; this is rare. But we did load up a lot of contexts. Actually, I was thinking back to the advice that you gave me in a previous episode where I was talking about at what point do you dig in versus try to stay at surface level? And this was one of those, like, we've spent a couple of days on getting context for this and understanding. So it felt really important and worthwhile to then invest a little bit more time to then document it.
But then we still went with the simplest approach of like, this is weird. It shouldn't happen again. We think we understand it but then let's add a little bit of documentation or wiki page around like, hey, if you do run into this, here are some steps that will fix everything. And then, if you need to use this, let somebody know because this is so odd it shouldn't happen. So we took that approach in this case where we didn't increase the observability. It was more like we provided a fire extinguisher very close to the location in case it happens. And so that way, it's there should the need arise, but we're hoping it just never gets used.
We're also in the process of changing how a lot of that logic works. So we didn't really want to optimize for observability into a system that is actively being changed because it should look very different in upcoming months. But overall, I love the conversations that you bring about observability, and I'm excited to hear about what wibble rates you decide to add to your Datadog dashboard.
CHRIS: There's a delicate art and science to the selection of the wibble rates. So I will certainly report back as we get into that work. But with that, shall we wrap up?
STEPH: Let's wrap up.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at hosts@bikeshed.fm via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success. Support The Bike Shed

Jun 7, 2022 • 35min
341: Fundamentals and Weird Stuff
Steph and Chris are recording together! Like, in the same room, physically together.
Chris talks about slowly evolving the architecture in an app they're working on and settling on directory structure. Steph's still working on migrating unit tests over to RSpec.
They answer a listener question: "As senior-level developers, how do you set goals to ensure that you keep growing?"
This episode is brought to you by BuildPulse. Start your 14-day free trial of BuildPulse today.
Faking External Services In Tests With Adapters
Testing Third-Party Interactions
Jen Dary - On Future Goals
Charity Majors - The Engineer/Manager Pendulum
Charity Majors Bike Shed Episode
Become a Sponsor of The Bike Shed!
Transcript:
STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. So, hey, Chris, what's new in your world?
CHRIS: What is new in my world? Actually, this episode feels different. There's something different about it. I can't quite put my finger on it. I think it may be that we're actually physically in the same room recording for the first time in two years and a little bit more, which is wild.
STEPH: I can't believe it's been that long. I feel like it wasn't that long ago that we were in The Bike Shed...oh, I said The Bike Shed studio. I'm being very biased. Our recording studio [laughs] is the more proper description for it. Yeah, two and a half years. And we tried to make this happen a couple of months ago when I was visiting Boston, and then it just didn't work out. But today, we made it.
CHRIS: Today, we made it. Here we are. So hopefully, the audio sounds great, and we get all that more richness in conversation because of the physical in-person manner. We're trying it out. It'll be fun. But let's see, to the normal tech talk and nonsense, what's new in my world? So we've been slowly evolving the architecture in the app that we're working on. And we settled on something that I kind of like, so I wanted to talk about it, directory structure, probably the most interesting topic in the world. I think there's some good stuff in here.
So we have the normal stuff. There are app models, app controllers, all those; those make sense. We have app jobs which right now, I would say, is in a state of flux. We're in the sad place where some things are ActiveJob jobs, and some things are Sidekiq workers. We have made the decision to consolidate everything onto Sidekiq workers, which is just strictly more powerful, so that's the direction we're going to go. But for right now, I'm not super happy with the state of app jobs, but whatever, we have that.
But the things that I like so we have app commands; I've talked about app commands before. Those are command objects. They use dry-rb do notation, and they allow us to sequence a bunch of things that all may fail, and we can process them all in a much more reasonable way. It's been really interesting exploring that, building on it, introducing it to new developers who haven't worked in that mode before. And everyone who's come into the project has both picked it up very quickly and enjoyed it, and found it to be a nice expressive mode. So app commands very happy with that.
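For listeners who haven't seen dry-rb's do notation, here is a hedged sketch of what a command object in that style can look like; the class, steps, and models are invented for illustration, not the actual app:

```ruby
# app/commands/activate_account.rb
require "dry/monads"

class ActivateAccount
  include Dry::Monads[:result, :do]

  def call(params)
    attrs   = yield validate(params) # each step returns Success(...) or Failure(...)
    account = yield persist(attrs)   # any Failure short-circuits the whole sequence
    yield notify(account)
    Success(account)
  end

  private

  def validate(params)
    params[:email] ? Success(params.slice(:email, :name)) : Failure(:missing_email)
  end

  def persist(attrs)
    Success(Account.create!(attrs))
  rescue ActiveRecord::RecordInvalid => e
    Failure(e.record.errors)
  end

  def notify(account)
    AccountMailer.welcome(account).deliver_later
    Success(account)
  end
end
```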
App queries is another one that we have. We've talked about this before, query objects. I know we're a big fan. [laughs] I got a golf clap across the room here, which I could see live in person. It was amazing. I could feel the wind wafting across the room from the golf clap. [chuckles] But yeah, query objects, they're fantastic. They take a relation, they return a relation, but they allow us to build more complex queries outside of our models.
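A minimal sketch of that relation-in, relation-out shape (the model, column, and scope names are placeholders):

```ruby
# app/queries/recently_active_users_query.rb
class RecentlyActiveUsersQuery
  def initialize(relation = User.all)
    @relation = relation
  end

  # Takes a relation and returns a relation, so it composes with other
  # scopes and query objects.
  def call(since: 30.days.ago)
    @relation.where("last_active_at >= ?", since).order(last_active_at: :desc)
  end
end

# RecentlyActiveUsersQuery.new(User.where(admin: true)).call(since: 7.days.ago)
```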
The new one, here we go. So this stuff would all normally fall into app services, which services don't mean anything. So we do not have an app services directory in our application. But the new one that we have is app clients. So these are all of our HTTP clients wrapping external third parties that we're interacting with. But with each of them, we've taken a particular structure, a particular approach. So for each of them, we're using the adapter pattern. There's a blog post on the Giant Robots blog that I can point to that sort of speaks to the adapter pattern that we're using here.
But basically, in production mode, there is an HTTP backend that actually makes the real requests and does all that stuff. And in test mode, there is a test backend for each of these clients that allows us to build up a pretty representative fake, and so we're faking it up before the HTTP layer. But we found that that's a good trade-off for us. And then we can say, like, if this fake backend gets a request to /users, then we can respond in whatever way that we want.
And overall, we found that pattern to be really fantastic. We've been very happy with it. So it's one more thing. All of them were just gathering in-app models. And so it was only very recently that we said no, no, these deserve their own name. They are a pattern. We've repeated this pattern a bunch. We like this pattern. We want to even embrace this pattern more, so long live app clients.
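To make that shape a bit more concrete, here is a hedged sketch of the adapter idea: the client delegates to an HTTP backend in production and to an in-memory fake in tests. The DirectoryClient name, the /users endpoint, and the use of Faraday are illustrative assumptions, not the actual implementation.

```ruby
# app/clients/directory_client.rb
class DirectoryClient
  class HTTPBackend
    def get(path)
      response = Faraday.get("https://directory.example.com#{path}")
      JSON.parse(response.body)
    end
  end

  class TestBackend
    def initialize
      @stubs = {}
    end

    def stub_get(path, body)
      @stubs[path] = body
    end

    def get(path)
      @stubs.fetch(path) { raise "No stub registered for GET #{path}" }
    end
  end

  def self.backend
    @backend ||= Rails.env.test? ? TestBackend.new : HTTPBackend.new
  end

  def users
    self.class.backend.get("/users")
  end
end

# In a spec: DirectoryClient.backend.stub_get("/users", [{ "id" => 1 }])
```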
STEPH: I love it. I love app clients. It's been a while since I've been on a project that had that directory. But there was a greenfield project that I was working on. I think it might have been I was working with Boston.rb and working on giving them a new site or something like that and introduced app clients. And what you just said is perfect in terms of you've identified a pattern, and then you captured that and gave it its own directory to say, "Hey, this is our pattern. We've established it, and we really like it." That sounds awesome.
It's also really nice as someone who's new to a codebase; if I jump in and if I look at app clients, I can immediately see what are the third parties that we're working with? And that feels really nice. So yeah, that sounds great. I'm into it.
CHRIS: Yeah, I think it really was the question of like, is this a pattern we want to embrace and highlight within the codebase, or is this sort of a duplication but irrelevant like not really that important? And we decided no, this is a thing that matters. We currently have 17 of these clients, so 17 different third-party external things that we're integrating with. So for someone who doesn't really like service-oriented architecture, I do seem to have found myself in a place. But here we are, you know, we do what we have to with what we're given. But yes, 17 and growing our app clients.
STEPH: That is a lot. [laughs] My eyes widened a bit when you said 17. I'm curious because you highlighted that app services that's not really a thing. Like, it doesn't mean anything. It doesn't have the same meaning of the app queries directory or app commands or app clients where it's like, this is a pattern we've identified, and named, and want to propagate.
For app services, I agree; it's that junk drawer. But I guess in some ways...well, I'm going to say something, and then I'm going to decide how I feel about it. That feels useful because then, if you have something but you haven't established a pattern for it, you need a place for it to go. It still needs to live somewhere. And you don't necessarily want to put it in app models. So I'm curious, where do you put stuff that doesn't have an established pattern yet?
CHRIS: It's a good question. I think it's probably app models is our current answer. Like, these are things that model stuff. And I'm a big believer in the idea that it doesn't need to be an application record-backed object to go in app models. But slowly, we've been taking stuff out. I think it'd be very common for what we talk about as query objects to just be methods in the respective application record. So the user record, as a great example, has all of these methods for doing any sort of query that you might want to do. And I'm a fan of extracting that out into this very specific place called app queries.
Commands are now another thing that I think very typically would fall into the app services place. Jobs, naming that is something different. Clients. We've got serializers as another one that we have at the top level, so those are four. We use Blueprinter within the app. And again, it's sort of weird. We don't really have an API. We're using Inertia. So we are still serializing to JSON across the boundary. And we found it was useful to encapsulate that. And so we have serializers as a directory, but they just do that. We do have policies. We're using Pundit for authorization, so that's another one that we have.
But yeah, I think the junk drawerness probably most goes to app models. But at this point, more and more, I feel like we have a place to put things. It's relatively clear should this be in a controller, or should this be in a query object, or should this be in a command? I think I'm finding a place of happiness that, frankly, I've been searching for for a long time. You could say my whole life I've been searching for this contented state of I think I know where stuff goes in the app, mostly, most of the time.
I'm just going to say this, and now that you've asked the excellent question of like, yeah, but no, where are you hiding some stuff? I'm going to open up models. Next week I’m going to be like, oh, I forgot about all of that nonsense. But the things that we have defined I'm very happy with.
STEPH: That feels really fair for app models. Because like you said, I agree that it doesn't need to be ActiveRecord-backed to go on app models. And so, if it needs to live somewhere, do you add a junk drawer, or do you just create app models and reuse that? And I think it makes a lot of sense to repurpose app models or to let things slide in there until you can extract them and let them live there until there's a pattern that you see.
CHRIS: We do. There's one more that I find hilarious, which is app lib, which my understanding...I remember at one point having one of those afternoons where I'm just like, I thought stuff works, but stuff doesn't seem to work. I thought lib was a directory in Rails apps. And it was like, oh no, now we autoload only under the app. So you should put lib under app. And I was just like, okay, whatever.
So we have app lib with very little in it. [laughs] But that isn't so much a junk drawer as it is stuff that's like, this doesn't feel specific to us. This goes somewhere else. This could be extracted from the app. But I just find it funny that we have an app lib. It just seems wrong.
STEPH: That feels like one of those directories that I've just accepted. Like, it's everywhere. It's like in all the apps that I work in. And so I've become very accustomed to it, and I haven't given it the same thoughtfulness that I think you have. I'm just like, yeah, it's another place to look. It's another place to go find some stuff. And then if I'm adding to it, yeah, I don't think I've been as thoughtful about it. But that makes sense that it's kind of silly that we have it, and that becomes like the junk drawer. If you're not careful with it, that's where you stick things.
CHRIS: I appreciate you're describing my point of view as thoughtfulness. I feel like I may actually be burdened with historical knowledge here because I worked on Rails apps long, long ago when lib didn't go in-app, and now it does. And I'm like, wait a minute, but like, no, no, it's fine. These are the libraries within your app. I can tell that story. So, again, thank you for saying that I was being thoughtful. I think I was just being persnickety, and get off my lawn is probably where I was at.
STEPH: Oh, full persnicketiness. Ooh, that's tough to say. [laughs]
CHRIS: But yeah, I just wanted to share that little summary, particularly the app clients is an interesting one. And again, I'll share the adapter pattern blog post because I think it's worked really well for us. And it's allowed us to slowly build up a more robust test suite. And so now our feature specs do a very good job of simulating the reality of the world while also dealing with the fact that we have these 17 external situations that we have to interact with.
And so, how do you balance that VCR versus other things? We've talked about this a bunch of times on different episodes. But app clients has worked great with the adapter pattern, so once more, rounding out our organizational approach. But yeah, that's what's up in my world. What's up in your world?
STEPH: So I have a small update to give. But before I do, you just made me think of something in regards to that article that talks about the adapter pattern. And there's also another article that's by Joël Quenneville that's testing third-party interactions. And he made me reflect on a time where I was giving the RSpec course, and we were talking about different ways to test third-party interactions. And there are a couple of different ways that are mentioned in this article. There are stub methods on adapter, stub HTTP request, stub request to fake adapter, and stub HTTP request to fake service. All that sounds like a lot. But if you read through the article, then it gives an example of each one.
But I've found it really helpful that if you're in a space that you still don't feel great about testing third-party interactions and you're not sure which approach to take; if you work with one API and apply all four different strategies, it really helps cement how to work through that process and the different benefits of each approach and the trade-offs. And we did that during the RSpec course, and I found it really helpful just from the teacher perspective to go through each one. And there were some great questions and discussions that came out of it.
So I wanted to put that plug out there in the world that if you're struggling testing third-party interactions, we'll include a link to this article. But I think that's a really solid way to build a great foundation of, like, I know how to test a third-party app. Let me choose which strategy that I want to use.
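As one concrete flavor of those strategies, the "stub methods on adapter" approach can look roughly like this; the client, command, and model names are invented for the example:

```ruby
RSpec.describe ChargeCustomer do
  it "records the charge returned by the payment provider" do
    # Stub at our own adapter boundary rather than at the HTTP layer.
    allow(PaymentsClient).to receive(:charge)
      .with(amount_cents: 10_00, customer_id: "cus_123")
      .and_return({ "id" => "ch_456", "status" => "succeeded" })

    ChargeCustomer.call(amount_cents: 10_00, customer_id: "cus_123")

    expect(Charge.last.external_id).to eq("ch_456")
  end
end
```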
Circling back to what's going on in my world, I am still working on migrating unit tests over to RSpec. It's a thing. It's part of the work that I do. [laughs] I can't say it's particularly enjoyable, but it will have a positive payoff. And along that journey, I've learned some things or rediscovered some things. One of them is read expectations very carefully.
So when I was migrating a test over to RSpec, I read it as where we expected a record to exist. The test was actually testing that a record did not exist. And so I probably spent an hour understanding, going through the code being like, why isn't this record getting created like I expect it to? And I finally went back, and I took a break, and I went back. And I was like, oh, crap, I read the expectation wrong. So read expectations very carefully.
The other one...this one's not learned, but it is reinforced. Mystery Guests are awful. So as I've been porting over the behavior over to RSpec, the other tests are using fixtures, and I'm moving that over to use factory_bot instead. And at first, I was trying to be minimal with the data that I was bringing over. That failed pretty spectacularly. So I have learned now that I have to go and copy everything that's in the fixtures, and then I move it over to factory_bot. And it's painful, but at least then I'm doing that thing that we talked about before where I'm trying to load as little context as possible for each test.
But then once I do have a green test, I'm going back through it, and I'm like, okay, we probably don't care about when you were created. We probably don't care when you're updated because every field is set for every single record. So I am going back and then playing a game of if I remove this line, does the test still pass? And if I remove that line, does the test still pass? And so far, that has been painful, but it does have the benefit of then I'm removing some of the setup. So Mystery Guests are very painful.
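In factory_bot terms, the end state being described is roughly this: the factory owns the baseline valid record, and each test only sets the attributes it actually asserts on (the model, attributes, and expired? predicate are invented for the example):

```ruby
FactoryBot.define do
  factory :redemption_code do
    code       { "SAVE10" }
    expires_at { 1.week.from_now }
    # created_at/updated_at deliberately left to the defaults; the old
    # fixtures set them explicitly, but no test actually cared.
  end
end

RSpec.describe RedemptionCode do
  it "is expired once its expiry date has passed" do
    code = FactoryBot.create(:redemption_code, expires_at: 1.day.ago)

    expect(code).to be_expired
  end
end
```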
I've also discovered that custom error messages can be tricky because I brought over some tests. And some of these, I'm realizing, are more user error than anything else. Anywho, I added one of the custom error messages that you can add, and I added it over to RSpec. But I had written an incorrect, invalid statement in RSpec where I was looking...I was expecting for a record to exist. But I was using find_by instead of where. So you can call exist on the ActiveRecord relation but not on the actual record that gets returned.
But I had the custom error message that was popping up and saying, "Oh, your record wasn't found," and I just kept getting that. So I was then diving through the code to understand again why my record wasn't found. And once I removed that custom error message, I realized that it was actually because of how I'd written the RSpec assertion that was wrong because then RSpec gave me a wonderful message that was like, hey, you're trying to call exist on this record, and you can't do that. Instead, you need to call it on a relation.
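Concretely, the distinction here is that RSpec's exist matcher calls exist?/exists? on whatever you hand it, which an ActiveRecord relation responds to but a single record (or nil) does not. A small sketch, with made-up model and attribute names:

```ruby
# Works: `exist` calls #exists? on the relation.
expect(User.where(email: "steph@example.com")).to exist

# Raises: `find_by` returns a single record (or nil), and neither responds
# to #exists?, so the matcher can't be used here.
expect(User.find_by(email: "steph@example.com")).to exist

# If you already have a single record in hand, assert on presence instead.
expect(User.find_by(email: "steph@example.com")).to be_present
```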
So I've also learned don't bring over custom error messages until you have a green test, and even then, consider if it's helpful because, frankly, the custom error message wasn't that helpful. It was very similar to what RSpec was going to tell us in general. So there was really no need to add that custom step to it.
For the final bit that I've learned or the hurdle that I've been facing is that migrating tests descriptions are hard unless they map over. So RSpec has the context and then a description for it that goes with the test. Test::Unit has methods like method names instead. So it may be something like test redemption codes, and then it runs through the code.
Well, as I'm trying to migrate that knowledge over to RSpec, it's not clear to me what we're testing. Okay, we're testing redemption codes. What about them? Should they pass? Should they fail? Should they change? What are they redeeming? There's very little context. So a lot of my tests are copying that method name, so I know which tests I'm focused on, and I'm bringing over.
And then in the description, it's like, Steph needs help adding a test description, and then I'm pushing that up and then going to the team for help. So they can help me look through to understand, like, what is it that this test is doing? What's important about this domain? What sort of terminology should I include? And that has been working, but I didn't see that coming as part of this whole migrating stuff over. I really thought this might be a little bit more of a copypasta job. And I have learned some trickery is afoot. And it's been more complicated than I thought it was going to be.
CHRIS: Well, at a minimum, I can say thank you for sharing all of your hard-learned lessons throughout this process. This does sound arduous, but hopefully, at the end of it, there will be a lot of value and a cleaned-up test suite and all of those sorts of things. But yeah, it's been an adventure you've been on. So on behalf of the people who you are sharing all of these things with, thank you.
STEPH: Well, thank you. Yeah, I'm hoping this is very niche knowledge that there aren't many people in the world that are doing this exact work that this happens to be what this team needs. So yeah, it's been an adventure. I've certainly learned some things from it, and I still have more to go. So not there yet, but I'm also excited for when we can actually then delete this portion of the build process.
And then also, I think, get rid of fixtures because I didn't think about that from the beginning either. But now that I'm realizing that's how those tests are working, I suspect we'll be able to delete those. And that'll be really nice because now we also have another single source of truth in factory_bot as to how valid records are being built.
Mid-Roll Ad:
Flaky tests take the joy out of programming. You push up some code, wait for the tests to run, and the build fails because of a test that has nothing to do with your change. So you click rebuild, and you wait. Again. And you hope you're lucky enough to get a passing build this time.
Flaky tests slow everyone down, break your flow and make things downright miserable. In a perfect world, tests would only break if there's a legitimate problem that would impact production. They'd fail immediately and consistently, not intermittently. But the world's not perfect, and flaky tests will happen, and you don't have time to fix them all today. So how do you know where to start?
BuildPulse automatically detects and tracks your team's flaky tests. Better still, it pinpoints the ones that are disrupting your team the most. With this list of top offenders, you'll know exactly where to focus your effort for maximum impact on making your builds more stable. In fact, the team at Codecademy was able to identify their flakiest tests with BuildPulse in just a few days. By focusing on those tests first, they reduced their flaky builds by more than 68% in less than a month!
And you can do the same because BuildPulse integrates with the tools you're already using. It supports all the major CI systems, including CircleCI, GitHub Actions, Jenkins, and others. And it analyzes test results for all popular test frameworks and programming languages, like RSpec, Jest, Go, pytest, PHPUnit, and more.
So stop letting flaky tests slow you down. Start your 14-day free trial of BuildPulse today. To learn more, visit buildpulse.io/bikeshed. That's buildpulse.io/bikeshed.
Pivoting just a bit, there's a listener question that I'm really excited for us to dive into. And this listener question comes from Joël Quenneville. Hey, Joël. All right, so Joël writes in, "As senior-level developers, how do you set goals to ensure that you keep growing? How do you know what are high-value areas for you to improve? How do you stay sharp? Do you just keep adding new languages to your tool belt? Do you pull back and try to dig into more theoretical concepts? Do you feel like you have enough tech skills and pivot to other things like communication or management skills?
At the start of a dev career, there's an overwhelming list of things that it feels like you need to know all at once. Eventually, there comes a point where you no longer feel like you're drowning under the list of things that you need to learn. You're at least moderately competent in all the core concepts. So what's next?" This is a big, fun, scary question. I really like this question. Thank you, Joël, for sending it in because I think there's so much here that can be discussed.
I can kick us off with a few thoughts. I want to first highlight one of the things that...or one of the things that resonates with me from this question is how Joël describes going through and reaching senior status how it can really feel like working through a backlog of features. So as a developer, I want to understand this particular framework, or as a developer, I want to be able to write clear and fast tests, or as a developer, I want to contribute to an open-source project. But now that that backlog is empty, you're wondering what's next on your roadmap, which is where I think that sort of big, fun scariness comes into play.
So the first idea is to take a moment and embrace that success. You have probably worked really hard to get where you're at in your career. And there's nothing wrong with taking a pause and enjoying the view and just being appreciative of the fact that you are able to get your work done quickly or that you feel very confident in the work that you're doing. Growth is often very important to our careers, but I also think it's important to recognize when you've achieved certain growth and then, if you want to, just enjoy that and pause. And you're not constantly pushing yourself to the next level. I think that is a totally reasonable and healthy thing to do.
The second thing that comes to mind is that you're on a Choose Your Own Adventure mode now, so you get to...I would encourage folks, once you've reached this stage, to reflect on where you're at and consider what is your dream? What are your aspirations? Maybe they're related to tech; maybe they're not. And consider where is it that you want to go next? And then, what are the concrete steps that will help you achieve those goals?
So there's a really great article by Jen Dary, who's a career coach and owner of Plucky Manager Training, that describes this process. And there's a really great blog post that I'll be sure to include a link to in the show notes. But she has a couple of great questions that will then help you identify, like, what are my goals? Some of those questions are, "If I could do anything and money wasn't an object, I'd spend my time doing dot dot dot." And that doesn't necessarily mean sitting on a beach with your toes in the sand all day. I mean, it could, but then that probably just means you need a vacation. So take the vacation.
And then, once you start to get bored, where does your mind start to wander? What are the things that you want to do? Where are you interested in spending your time? And then, once you have an idea of how you'd like to spend your time, you can consider what actions you could take next that will point you in that direction.
There's also the benefit that by this point, you probably have an idea of the type of things that you like to do and where you like to spend your time. And so you can figure out which areas of expertise you want to invest in. So do you like more greenfield projects? Do you like architecture discussions? Do you like giving talks? Do you like teaching? Or maybe you're interested in management. I think there's also a more concrete approach that you can take that.
You can just talk to your managers in your team and say, "Hey, what big, hard problems are you looking to solve?" And then, you can get some inspiration from them and see if their problems align with your interests. Maybe it's not even your own team, but you can talk to other companies and see what other problems industries are trying to solve. That might be an area that then spurs some curiosity or some interest.
And then, where do you feel underutilized? So with your current day-to-day, are there areas where you feel that you wish you had more responsibilities or more opportunities, but you feel like you don't have access to those opportunities? Maybe that's an area to explore as well.
This feels like a wonderful coaching question in terms of you have done it; congratulations. You've reached a really great spot in your career. And so now you're figuring out that big next step. This is going to be highly customized to each person to then figuring out what it is that's going to help you feel fulfilled over the next five years, ten years, however long you want to project out. Those are some of my thoughts. How about you? What do you think?
CHRIS: Well, first, those are some great thoughts. I appreciate that I get to follow them now. It's going to be a hard act to follow. But yeah, I think Joël has asked a fantastic question. And coming from Joël, I know how intentional and thoughtful a learner, and sharer, and teacher and all of these things are. So it's all the more sort of framed in that for me knowing Joël personally.
I think to start, the kernel of the question is as senior developers, that's the like...or senior level developers is the way Joël phrased it, but it treats it as sort of this discrete moment in time, which I think there's maybe even something to unpack. And I think we've probably talked about this in previous episodes, but like, what does that even mean?
And I think part of the story here is going from reactive where it's like, I don't know how anything works. I know a little bit. I can code some. And every day, I'm presented with new problems that I just don't understand. And I'm trying to build up that base of knowledge. Slowly, you know, you start, and it's like 95% of the time you feel like that. And slowly, the dial switches over, and maybe it's only 25% of the time you feel like that.
Somewhere along that spectrum is the line of senior developer. I don't actually know where it is, but it's somewhere in there. And so I think it's that space where you can move from reactive learning things as necessitated by the work that's coming at you to I want to proactively choose the things that I want to be learning to try and expand the stuff that I know, and the ways that I can think about the work without being in direct response to a piece of work coming at me.
So with that in mind, what do you do with this proactive space? And I think the way Joël frames the question, again, to what I know of Joël, he's such an intentional person. And I wouldn't be surprised if Joël is very purposeful and thinks about this and has approached it as a specific thing that he's doing. I have certainly been in more of “I'll figure it out when I get there.” I'll explore.
Or actually, probably the most pointed thing that I did was I joined a consultancy. And that was a very purposeful choice early on in my career because I'm like, I think I know a little bit. I don't think I know a lot. I would like to know a lot. That seems fun. So what do I think is the best way to do that? My guess, and it turned out to be very much true, is if I join a consultancy, I'm going to see a bunch of different projects, different types of technologies, organizations, communication structure, stuff that works, stuff that doesn't work.
And to be honest, I actually thought I would try out the consultancy thing for a little while, like a year or two, and then go on to my next adventure. Spoiler alert: I stayed for seven years. It was one of the best periods of my professional life. And I found it to be a much deeper well than I expected it to be. But for anyone that's looking for, like, how can I structure my career in a way that will just automatically provide the sort of novelty and space to grow? I would highly recommend a consultancy like thoughtbot. I wonder if they're hiring.
STEPH: Well, yes, we are hiring. That was a perfect plug that I wasn't expecting for that to come. But yes, thoughtbot is totally hiring. We'll include a link in the show notes to all the jobs. [laughs]
CHRIS: Sounds fantastic. But very sincerely, that was the best choice that I could have made and was a way to flip the situation around such that I don't have to be thinking about what I want to be learning. The learning will come to me. But even within that, I still tried to be intentional from time to time. And I would say, again, I don't have a holistic theory of how to improve. I just have some stuff that's kind of worked out well. One thing is focusing on fundamentals wherever I can, or a different way to put it is giving myself permission to spend a little bit more time whenever my work brushes up against what I would consider fundamentals.
So things that are in that space are like SQL. Every time I'm working on something, I'm like, ah, I could use like a CROSS JOIN here, but I don't know what that is. Maybe I'll spend an extra 30 minutes Googling around and trying to figure out what a CROSS JOIN is. Is that a thing? Is a CROSS JOIN a real thing? I may be making it up. [laughs] A window function, I know that those are real. Maybe I'll learn what a window function is. I think a CROSS JOIN is a real thing. A LEFT OUTER JOIN that's a cool thing.
And so, each time I've had that, SQL has been something that expanding my knowledge; I've continually felt like that was a good investment. Or fundamentals of HTTP, that's another one that really has served me well. With Ruby being the primary language that I program in, deeply understanding the language and the fundamentals and the semantics of it that's another place that has been a good investment. But by contrast, I would say I probably haven't gone as deep on the frameworks that I work with. So Rails is maybe a little bit different just because, like many people, I came to Ruby through Rails, and I've learned a lot of Rails.
But like in JavaScript, I've worked with many different JavaScript frameworks. And I have been a little more intentional with how much time I invest into furthering my skills in them because I've seen them change and evolve enough times. And if anything, I'm trying to look for ones that are like, what if it's less about the framework and more about JavaScript and web fundamentals underneath? Thus, I've found myself in Svelte land. But I think it's that choice of trying to anchor to fundamentals wherever possible.
And then I would say the other thing that's been really beneficial for me is what can I do that's wildly outside the stuff that I already know? And so probably the most pointed example I have of this is learning Elm. So I previously spent most of my career working in Ruby and JavaScript, so primarily object-oriented languages without a strong type system. And then, I was able to go over and experience this whole different paradigm way of working, way of structuring programs, feedback loop. There was so much about it that was really, really interesting.
And even though I don't get to work in Elm, frankly, as much as I would want at this point or really at all, it informs everything that I do moving forward. And I think that falls out of the fact that it was so different than what I was doing. So if I were to do that again, probably the next type of language I would learn is Lisp because those are like, well, that's a whole other category of thing that I've heard about. People say some fun stuff about them, but I don't really know. So it's that fundamentals and weird stuff is how I would describe it. And by weird, I mean outside of the core base of knowledge that I have.
STEPH: I love that categorization, and I'll stick with it, fundamentals and weird stuff, to stretch and grow and find some other areas. I also really like your framing, the reactive versus proactive. I think that's a really nice way to put it because so much of your career is you are just learning what your company needs you to learn, or you're learning what you need to keep progressing and to feel more competent with the types of features or the work that you're handling.
And I think that's why Jen Dary's blog post resonates with me so much because it's probably...up until now, a lot of someone's career, maybe not Joël's particularly, but I know probably for my career, a lot of it has been reactive in terms of what are the things that I need to learn? And so then once you reach that point of like, okay, I feel competent and reasonably good at all the things that I needed to know, where do I want to go next?
And rather than focus on necessarily the plans that are laid out in front of me, I can then go wide and think about what are some of the bigger things that I want to tackle? What are the things that are meaningful to me? Because then I can now push forward to this bigger goal versus achieving a particular salary band or title or things like that. But I can focus on something else that I really want to contribute to.
And there do seem to be two common paths. So once you reach that level, either you typically go into management, or you become that more like principal and then onward and upward, whatever is after principal. I don't even know what's after that, [laughs] but the titles that come after principal. So there's management, and then I've seen the other very strong contributors, so Aaron Patterson comes to mind. And I feel like those people then typically will migrate to places where they get to contribute to a language or to a framework.
And I think it comes down to the idea of impact because both of those provide a greater impact. So if you go into management, you can influence and affect a team of individuals, and you can increase the value created by that team. Then you've likely exceeded the value that you would have created as your own individual contributor.
Or, if you contribute to a language or a framework, then your technical decisions impact a larger community. So I think that would be another good thing to reflect on is what type of impact am I looking for? What type of communities do I want to have a positive impact on? And that may spur some inspiration around where you want to go next, the things that you want to focus on.
CHRIS: Yeah, I think one of the things we're picking up in that that Joël mentioned in his question is the idea of there is the individual contributor path. But then there's also the management path, which is the typical sort of that's the progression. And I think, for one, naming that the individual contributor path and the idea of going to principal dev and those sorts of outcomes is a fantastic path in and of itself.
I think often it's like, well, you know, you go along for a certain amount of time, and then you become a manager. It's like, those are actually distinctly different things. And people have different levels of interest and aptitude in them, and I think recognizing that is critically important. And so, not expecting that management just comes after individual contributor is an important thing.
The other thing I'll say is Charity Majors, who we had on the podcast a bit back, has a wonderful blog post about the pendulum swing called The Engineer/Manager Pendulum. And so in it, she talks about folks that have taken an exploration over into manager land and then come back to the individual contributor path or vice versa sort of being able to move between them, treating them as two potentially parallel career tracks but ones that we can move between.
And her assertion is that often folks that are particularly strong have spent some time in both camps because then you gain this empathy, this understanding of what's the whole picture? How are we doing all of this? How do we think about communication, et cetera? So, again, to name it, like, I think it's totally fine to stay on one of those tracks to really know which of those tracks speaks to you or to even move between them a little bit and to explore it and to find out what's true.
So we'll include a link to that in the show notes. And we'll also include a link to the previous episode a while back when we had Charity on. But yeah, those I think are some critical thoughts as well because those are different areas that we can grow as developers and expand on our impact within the team. And so, we want to make sure we have those options on the table as well.
STEPH: Absolutely. I love where teams will support individuals to feel comfortable shifting between experiences like that because it does make you a more well-rounded contributor to that product team, not just as an engineer, but then you will also understand what everybody else is working on and be able to have more meaningful conversations with them about the company goals and then the type of work that's being done. So yeah, I love it.
If you're in a place that you can maybe fail a little bit, hopefully not in a too painful way, but you can take a risk and say, "Hey, I want to try something and see if I like it," then I think that's wonderful. And take the risk and see how it goes. And just know that you have an exit strategy should you decide that you don't like that work or that type of work isn't for you. But at least now you know a little bit more about yourself.
Overall, I want to respond directly to something that Joël highlighted around how do you know what are high-value areas for you to improve? And I think there are two definitions there because you can either let the people around you and your team define that high value for you, and maybe that really resonates with you, and it's something that you enjoy. And so you can go to your manager and say, "Hey, what are some high-value areas where I can make an impact for the team?"
Or it could be very personal. And what are the high-value areas for you? Maybe there's a particular industry that you want to work on. Maybe you want to hit the public speaking circuit. And so you define more intrinsically what are those high-value areas for you? And I think that's a good place to start collecting feedback and start looking at what's high value for you personally and then what's high value to the company and see if there's any overlap there.
With that, I think we've covered a good variety of things to explore and then highlighted some of the different ways that you and I have also considered this question. I think it's a fabulous question. Also, I think it's one, even if you're not at that senior level, to ask this question. Like, go ahead and start asking it early and often and revisit your answers and see how they change. I think that would be a really powerful habit to establish early in your career and then could help guide you along, and then you can reflect on some of your earlier choices. So, Joël, thank you so much for that question.
STEPH: On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at hosts@bikeshed.fm via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

Sponsored By:

BuildPulse: Flaky tests take the joy out of programming. You push up some code, wait for the tests to run, and the build fails because of a test that has nothing to do with your change. So you click rebuild and you wait. Again. And you hope you're lucky enough to get a passing build this time.
Flaky tests slow everyone down, break your flow and make things downright miserable.
In a perfect world, tests would only break if there's a legitimate problem that would impact production. They'd fail immediately and consistently, not intermittently. But the world's not perfect, and flaky tests will happen, and you don't have time to fix them all today. So how do you know where to start?
BuildPulse automatically detects and tracks your team's flaky tests. Better still, it pinpoints the ones that are disrupting your team the most. With this list of top offenders, you'll know exactly where to focus your effort for maximum impact on making your builds more stable. In fact, the team at Codecademy was able to identify their flakiest tests with BuildPulse in just a few days. By focusing on those tests first, they reduced their flaky builds by more than 68% in less than a month!
And you can do the same because BuildPulse integrates with the tools you're already using. It supports all the major CI systems, including CircleCI, GitHub Actions, Jenkins, and others. And it analyzes test results for all popular test frameworks and programming languages, like RSpec, Jest, Go, pytest, PHPUnit, and more.
So stop letting flaky tests slow you down. Start your 14-day free trial of BuildPulse today. To learn more, visit buildpulse.io/bikeshed. That's buildpulse.io/bikeshed.

Support The Bike Shed

May 31, 2022 • 51min
340: Solving People Problems with Rob Whittaker
Steph is joined by a very special guest and fellow thoughtbotter, Rob Whittaker.
Rob shares how he became the Software Development Director for Launchpad II, thoughtbot's Europe, Middle East, and Africa team. They also dive into what it's like to be a Development Director, the differences between mentoring and coaching, working with GitHub Codespaces, and strategies for boosting your creativity and problem solving capabilities.
thoughtbot is hiring!
ngrok
Time Off Book
Rob's Codespace Setup
Rob Whittaker on Twitter
Become a Sponsor of The Bike Shed!
Transcript:
STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. And today, I'm joined by a very special guest and fellow thoughtboter, Rob Whittaker.
Rob has been in the software business for the past 15 years and spent the last five and a half years at thoughtbot. Rob is the Director of Software Development for our Europe, Middle East, and Africa team and, in his spare time, likes to hunt down delicious beers and coffee. Rob, welcome to The Bike Shed. It's so lovely to have you on the show today.
ROB: Thank you for having me. It's a pleasure to be here. Yeah, thank you for that lovely introduction and my far too complicated job title. It sounds more serious than it actually is.
STEPH: Well, you do have a fancy job title, yeah, Director of Software Development. [laughs]
ROB: Yeah, it's the added on bit where it's Europe, Middle East, and Africa where I feel like there's about 20 of us maximum. But that sounds more grandiose than it actually is.
STEPH: Yeah, that's something that Chris and I haven't dug into too much on previous episodes are all the different teams that we have at thoughtbot. So the shorter way of saying that is Launchpad II, but not everybody knows that. But I'm going to circle back to that because I would love to talk a bit more about that specific team and the dynamic. But before we do that, I'm realizing I'm not familiar with your origin story as to how you came to thoughtbot and then how you became this very fancy grand title of Director of Software Development for Europe, Middle East, and Africa team.
ROB: Yeah, there's a bit of history about thoughtbot London as well that kind of ties into this. So before thoughtbot Launchpad II, it was thoughtbot London before we went remote. And initially, we had the plan of setting up a new studio in London to help expand thoughtbot outside of the Americas, but that plan fell through. But thoughtbot knew some people from another agency called New Bamboo, and so we merged with or acquired that agency, and that agency then became the thoughtbot London team. I'm actually the first hire or...not the first hire, that's not true, the first development hire for the thoughtbot London team that would then become Launchpad II.
I was at the Bath Ruby Conference six years ago, I guess. And there was just an advert up on the hiring board that Nick Charlton, who's a Senior Developer and Development Team Lead at Launchpad II now, had put up. And I saw it, and I was talking to somebody who was my mentor at the time that I'd worked with at a previous job at onthebeach.co.uk, a guy called Matt Valentine-House who now works at Shopify who, actually, fun fact, his face appears at the top of Ruby Weekly this week. If you open up this week's Ruby Weekly, you can see Matt Valentine-House, who said to me, "Yeah, apply for it, why not? You see what happens." And I was like, "Okay," and just kind of took the leap.
So I thought, thoughtbot, why would thoughtbot want me? Which is something I think a lot of people think when they want to join thoughtbot. They think, well, I can't do that. But I would implore people to apply. And so, from there, I never really wanted to move to London. I'd always lived in the North West of the UK. I made that leap to London because I wanted to work at thoughtbot. And then, gradually, over time, the London team expanded, and we needed to split out the management roles, and the development director role came up.
And I've always enjoyed the coaching side of software development. It seems that you gain more experience as you help people with less experience, and I've always enjoyed coaching. And that was a big part of the role for me. So I was fortunate enough to be allowed to do it. And then, from there, things have grown. Yeah, so it's been a really interesting journey as a development director.
The London studio went through a pretty tough time at one point where, not long after I became development director, two-thirds of the team, in the space of two weeks, decided to hand their notice in, unbeknownst to each other. And so, all of a sudden, we didn't have a very big team. We didn't have very many prospects, and so it was a tough time. And so it's really nice to look back on the last three years and go, okay, we came through that. We're now one of the stronger teams at thoughtbot.
And somebody actually asked me in an interview the other day, somebody we actually hired, not just based on this question, but he said, "What is your proudest moment of working at thoughtbot?" And I was like, that's one of the best questions I've heard from a candidate. And I said, "Hmm, that's interesting." It's not anything development-related, but it's that I can now look back on this team and say this is the team that I have grown in my image and all these people apart from Nick, who was the person who put the advert up at Bath Ruby.
I've hired all these people, and so the buck stops with me really because if anybody isn't able to perform, then it's kind of my fault because these are the people that I want to grow into being the team and see become a successful product design team or product development team, which brings us to the modern day, I guess. So yeah, that was a long origin story. That's pretty much my whole thoughtbot biography. And I apologize.
STEPH: That was perfect. I thoroughly enjoyed hearing it. And yeah, that's an awesome question. What's your proudest moment, like, part of a team? That can yield so many insights. I love that question. And I love your answer as well in terms of this is the team. We've pulled through a hard time. And then we've built everybody to the point that they are now, which kind of leads in perfectly to my next question.
So being the software development director, could you walk us through a little bit of like, that's one of those titles I feel like a lot of companies have, but they can be very different from company to company. Would you mind walking us through a bit of the day-to-day in the life of being a development director?
ROB: Yeah, sure. It's one of those things where I think this is something that I'm not sure if it's unique to thoughtbot, but you end up taking on a lot of hats at thoughtbot. So I know you're a team lead. So you have to balance your responsibilities as an individual contributor, which is a term I don't like, but I haven't got a better way to say it yet, and your development team lead roles. And I have similar sort of responsibilities where I have to do my individual contributor work. I have to do my director work. I'm also on our DEI Council. So I have to add that work in too, and make sure it's balanced out.
So the start of my day is very much about prioritizing things. I know you and Chris, a few episodes ago, had quite lengthy discussions about productivity systems and what tools Chris wants to use. And I'm a big fan of Things, and I've been using it for maybe ten years, if not more, that I've now got my system down that I'm able to prioritize things in the way that I can pick up the right task at the right time. So a big part of my day-to-day is figuring out what is the most important thing to work on? So I have my client work, and then it's about supporting the team from that point.
And the big part of my idea of what a manager is is that my job isn't to tell you what to do; my job is to find out what you want to do and direct you in a place where you can find the answer. Or I can give you some guidance about where to find the answer. And I feel like I'm doing a bad job as a manager where if I have to act as a middle person. Because if somebody comes to me and says, "Oh, I want to do this thing," And I say, "Well, I'll talk to that person for you," and then come back, I have failed.
And my job is to say, "Oh, you should talk to that person about this." And to some extent, it's about being lazy. I don't want to be doing too much stuff because I have other things to do. But I want to make sure that those people have the right frameworks and guidelines in place so that I can point them in the right direction.
STEPH: I think the fancy term for that is just delegating. [laughs]
ROB: Yes, thank you. [laughs]
STEPH: But I like lazy. [laughter] I like that one as well. I love that framing of a manager where you're not telling someone what to do, but as your job, you are helping that person figure out what they want to do and then supporting them. I've been chatting with Chris recently and some others because I've been reading the book Resilient Management by Lara Hogan. And it's really helped me cement the difference between mentorship, coaching, and sponsorship.
And I realized that I'm already falling a lot into the coaching and sponsorship because mentorship can be wonderful, but it is more directive of like, this is what I've done. And this is what has worked for me, and you should do this too. Versus the coaching and sponsorship, I think aligns far more perfectly with what you described as management, where it is my job to figure out what brings you joy, what brings you energy, and then how to help you progress to your next goals and your next steps in your career.
ROB: Yeah, I think Lara Hogan is a great resource, like her blog posts and books. I haven't read Resilient Management. But I know that the team leads on my team have been on her training courses, and they say how great it is. And there's also a blog post of hers that's about managing in tough times. It has a much better title than that. But it's about how do we be good managers in such uncertain times when there are a lot of things going on around the world right now that we all have to deal with? And helping people deal with those situations.
Because at the end of the day, work isn't the most important thing; the most important thing is living. And it's something I say to my team, especially when people feel like...it's something that I say to my team when they're not feeling well. The most important thing is that you get better. And thoughtbot is still going to be here. The most important thing is how you live your life and how you look after yourself, and everything else is secondary.
STEPH: Absolutely. Well, and everybody needs something different from work too. Some people may be in a state where they really need more stability and predictability from their work. And some people may be in a space where everything else outside of work is very stable and calm, and then they want work to bring the challenge and the volatility and the variety to life.
So I remind myself very often that not everybody wants the same thing from work and to figure out what it is that someone wants from work. And then your seasons change. You may be in a season of where you want stability, or then you may be in a season of like, I'm ready to grow and push and take some risks. So helping someone identify which season of work they're in.
ROB: Yeah, I 100% agree. What people can't see is me nodding vigorously on the other side of this call. It's very much about understanding because everybody is different. And that's what we want from a good team; it's understanding everybody's different approach to things. And so sometimes people want the distraction of work because they don't want the time off to think about other things. They want to be able to sit and concentrate on something. And it's understanding different people.
STEPH: Yeah, that's a great point. I'm curious; you mentioned that as part of being development director, you are also, in addition to managing the team and being part of DEI then, there's also your day-to-day client work. I think you've started a new client recently. Could you tell me more about that?
ROB: Yeah, I'd recently been working for a client for two and a half years, which is a very long time to be working with one client at thoughtbot. And it came to the time where I was ready for a new challenge, and it was stable enough for me to move on. So I've been working for a company in the UK. They allow customers to buy and sell cars, not customer to customer like companies such as Auto Trader, but between customers and dealers and back and forth. And primarily, they worked with buying cars. And they've launched a product in the UK where people can sell their cars as well because they found that 70% of people who are buying cars also want to sell their cars.
And from there, they're now looking to expand into Germany and Spain, so we are helping them to do that. And it's an interesting project, not necessarily from a technical point of view, but I might come back to that, but definitely from a cultural point of view. The product at the moment allows you to put in a license plate or a registration plate for a car. And there's then a service in the UK that will allow you to pull up the make and model and the service history of that car. But you can't do that in Germany because it's against the privacy laws to look something up from a registration plate.
And so it's interesting these different cultural aspects that you have to take into account when expanding into other countries that you aren't from and that you have less knowledge about. Because I'm also aware that credit cards aren't a big thing in Germany either. So you have to think about how they pay for things in different countries. And the previous company I was working for they're based in the Middle East. And so we had to take into account how we would do right to left design in a mobile app, which is really interesting from a western point of view that you get so used to swiping through an experience from left to right.
But then it's not just the screen that's right to left. The journey moves from right to left. So you have to get used to the transitions of the screen going the other way and not thinking of that as going backwards. One of the best things about working in this region is that we get to deal with so many different cultures and how they expect to use applications. It's really satisfying.
STEPH: That's fascinating. Yeah, I haven't gotten to work on a project like that that has those types of considerations. I think the most relatable experience I have is more working in healthcare because that's one of those areas that I'm certainly not proficient. I've become more proficient because of the type of projects that I've worked on.
But I'm curious, for expanding into other regions and cultures, do those teams typically have an expert on their team that then helps guide the development process? Or, as you mentioned, the process of buying a car could be very different in some of the legal aspects that you're up against. Is there someone that you can turn to that's then helping mentor or be aware of that process?
ROB: Yes, the current client has a team based in Germany, people who are from Germany that are advising us on different cultural aspects or legislative things. They are doing a lot of data analysis for us because we need a new service that we can use for looking up car details. Because there is a service that you give different information to in order to get information about the car back. So yeah, we do have that team there. But that's not always the case because every client is different.
The company that we're working for in the Middle East didn't have a team. They had two developers who were helping us. But we have to figure things out just from their cultural background to ask them questions about things and allow them to advise us, but nobody who was really a specialist. But that's an interesting thing as well, not just the cultural aspects of the customers but the cultural aspects of the company that you work for.
We definitely found that the company in the Middle East was more hierarchical. And so that's another challenge that you have to work with because we tend to work in quite a flat way, where we default, on thoughtbot projects, to not having a point person on a project. Everybody is there to answer the questions. But some teams or clients want that point person. And so, we adapt and change to allow for that to happen and work in that way. But it is interesting to work in different companies as well as working as an agency.
STEPH: Yeah, you bring up a really good point of something that I don't reflect on very often, but it's something that I really appreciate about our thoughtbot culture is that we do try to strive for a very flat hierarchy. But also in working with clients, we purposely will avoid like, if there are two or more thoughtboters on a project, we don't want one person that is then the primary contact between the client and the thoughtbot team. The goal is that everybody shows up. Everybody is part of the process; everybody is part of meetings.
And we do have an advisor for projects, but otherwise, we work very hard to make sure that there's not just one person that's then responsible for communication. We want everybody to have opportunities to be part of meetings, to lead meetings, to take on initiatives versus having that one person. That is something that I really appreciate that we do.
ROB: Yeah. And it's more noticeable when you go to places where that isn't the norm, and you appreciate it more. And I think a big part of that is how much we are trusted. And we trust people to trust us, I guess.
STEPH: Yeah. And I think it fits in nicely with circling back to the management conversation is that when people have access to those opportunities, that makes my job so much easier as a team lead where then there are more opportunities to sponsor someone or to coach someone as to how they can then be the person that then takes on a project or if they want to lead a particular meeting, or if they want to help a team introduce retrospectives into their process. So it gives more opportunities for me to then coach someone into expanding their skill set in those ways.
ROB: Yeah, that's interesting to think about, allowing yourself to coach other people in that role. Because as we gain more experience and become senior developers, we naturally fall into that role of taking the lead on projects, even when we're not asked to. But then, when you gain other responsibilities in the management track, so you as a team lead and me as a team lead and a development director, it could be better for you to not take that role and allow somebody else to come into that role so you can coach them. That's been playing on my mind the last couple of days.
Josh Clayton, who's the Managing Director for one of our teams in the Americas, raised it on our pull request in our handbook where we were talking about team leads having a dedicated day to concentrate on team lead things. It's one of those things where somebody says something, and it's like, oh yeah, that really clicks. Maybe that's why we have been having certain struggles on projects where we need to rearrange things and learn from that and so we can be better on projects in the future. So that's something that really resonated with me, and it's flying around in the back of my mind at the moment.
STEPH: Yeah, that really resonates with me because while the predominant part of being a team lead at thoughtbot is having one-on-ones with folks, I find that when I have more time, a lot of the work also falls outside of that one-on-one where it's following up on conversations around hey, this person mentioned they're really interested in growing their skill. How can I help them? How can I help find opportunities?
Or I know that they're currently stretching their skill set right now. If I have some extra time, then I can check in with them. I can pair with them. I can see how things are going. So I find that while the one-on-ones are the staple thing that happens every two weeks, there's a lot of other behind-the-scenes work that's going on as well to make sure that that person is growing and feeling really fulfilled by their work.
ROB: I know we've spoken a lot about the product side and the client side of working on the new project that I'm working on. There are some interesting technical sides to it as well. The client has found that they have had some issues with Haskell and running on M1 Macs. And so, they've decided to take the leap and use GitHub Codespaces as their primary development environment, which has been interesting. I had heard about it but only in the background. I hadn't read anything about it or hadn't had any direct conversations. I just heard that there was a thing. So it's been quite interesting to play with that.
It's interesting the way the client is using it as well because they're using a Dockerized environment effectively inside Docker by using Codespaces. So you start the Codespace, which very basically is a Docker instance somewhere on GitHub's infrastructure. It's built very much for Visual Studio Code, and so you can just directly attach your Visual Studio Code session to the Codespace and go from there, but I'm a Vim user. I've started to feel like a bit of an old guard or a curmudgeon recently where I've been like; maybe I need to use Visual Studio Code. Maybe I should just unlearn my Vim key bindings and learn the Visual Studio ones.
And people say, "Oh, you could just use the Vim key bindings in VS Code." I'm like, that's cheating. I spent the time to learn the key bindings for Vim. I will take the time to learn the key bindings for Visual Studio Code and use it in the way it's intended. So it's been interesting to understand how Codespaces works, not necessarily in the way it's intended. So you can still SSH into a Codespace session, but then you lose all the lovely setup stuff that you might have on your local machine.
So I did spend half a day porting my dotfiles which are based off thoughtbot's dotfiles, into something that Codespaces can use and made it publicly available. So if you go to github.com/purinkle/codespace, you can see what I use to set up my Codespace environment. And once that's set up, it becomes a bit easier because then you have all the things that you're used to running locally. It is very much early days for how the client is using it. And so they're really open to saying like, okay, let's find out what's not working, and let's work and figure out how to get it up and running properly.
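For anyone curious what that kind of setup involves, here is a minimal sketch of the sort of install script a Codespaces dotfiles repository might contain. It is not Rob's actual setup (that lives at github.com/purinkle/codespace); the file names and packages below are assumptions, purely for illustration.

```bash
#!/usr/bin/env bash
# install.sh -- a hypothetical Codespaces dotfiles bootstrap, for illustration only.
# When you pick a dotfiles repository in your GitHub Codespaces settings,
# Codespaces clones it into new codespaces and runs a script like this one.

set -euo pipefail

# Resolve the directory this script (and the dotfiles) live in.
DOTFILES_DIR="$(cd "$(dirname "$0")" && pwd)"

# Symlink a few dotfiles into the home directory (file names are assumed).
for file in vimrc tmux.conf gitconfig; do
  ln -sf "$DOTFILES_DIR/$file" "$HOME/.$file"
done

# Install the terminal tools a Vim/tmux workflow relies on (package names assumed).
sudo apt-get update -y
sudo apt-get install -y vim tmux

echo "dotfiles installed"
```

The idea is that the personalization lives in your own repository rather than in any one codespace, so a fresh codespace comes up already configured.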
So one of the things we do find is that Codespaces do time out after a while. And then you might lose, like, even if I've created a tmux session, that tmux session disappears. And so I have to go in and create it again. I'm not sure what the timeouts are. I haven't had time to look into what those timeouts are yet. But that's definitely the main pain point at the moment of it being used as a development environment.
It's been interesting. It's been kicking around in the back of my head like the difference between developing locally and deploying locally. And it's something that I wanted to talk to people at thoughtbot and outside of thoughtbot as well to understand that more. Because I don't think you need everything running to develop locally, but you might need it to deploy locally. It's interesting to me to understand how different companies work on their products from that point of view.
STEPH: Yeah, I'm selfishly excited that you are using Codespaces for a client project because I have kept an eye on it, and I'm very intrigued by it. But I also haven't used it for a project. And it sounds really neat. I'm curious, have you found that it has helped them with onboarding or if you need to switch from working on one application to another? Have you found that it has helped them with some of those? I'm guessing that's the problem that they're optimizing to solve is how do we help people run everything quickly without having to set it up locally?
ROB: It's an interesting question because I don't have the comparison of trying to set up the environment as it was before. It was smoother. The main thing was access tokens because once you set up your SSH keys and your GitHub tokens, it's just a case of running a script and letting it run. So yes, from that point of view, I can imagine if I tried to set up their previous environment that it'd be a lot more challenging because they were using Vagrant and running things that way, which I know from experience would not be fun.
And I know that my Mac fans would just be spinning all the time. It would be like an aeroplane was trying to take off. So I'm thankful for that, that I don't have that experience anymore that my machine is going to slow down all the time. We've had on a previous client who had a Dockerized environment, but you have to have it all running on your machine. There are pros and cons to everything with these things. And it's like you said, what is the problem they're trying to solve with introducing this setup?
STEPH: Yeah, I can't decide if this is a good thing or a bad thing. But I'm also intrigued by the idea that if a team is using Codespaces, then that means everybody else is using VS Code. And you can still customize it so you can still have your own preferences. But that does set a standard, so everybody is using the same editor. There's a lot of cross-collaboration in terms of if you do run into an issue, then you can help each other out.
Versus when I join other teams, everybody's using their preferred editor, and then there you may have a day where someone's like, "Oh, I'm really stuck because my particular editor is suddenly having a problem and can't connect." And then you have less people that are able to help them if they're not using that same editor. And I can't decide if I like that or if I hate it [laughs] in terms of taking away people's ability to pick and choose their editor. But then the gains of everybody is using the same thing which is nice and would be really great for pairing too.
ROB: Yeah, that's an interesting point. I was talking to...I have a management coach. He's a PHP developer, and I'm a Rails developer. And we were talking about the homogenization of things nowadays. And is that good, or is that bad to use with stuff like RuboCop that lints everything, so it's exactly the same? Does that stifle creativity? But then, at the same time, the thing I like about Codespaces is I think we're biased coming at it from the point of view of Rails developers.
And if you look at how you can use Codespaces in the browser directly from GitHub, that's quite interesting because now you're lowering the barrier to entry to get started and saying you don't need to have an editor. You don't have to set up everything. You can just do it from your browser. A few years ago, I used to volunteer or coach at an organization called codebar. They help people who are less represented in the tech community get represented in the tech community. And we would see a lot of people coming for sessions using...I forgot what it's called. What was it called? Cloud 66 or something.
There was some remote development environment that people would come and say, "Oh, I've been using this," because they didn't know how to set up the necessary infrastructure to just get a Rails server going or things like that or didn't know how to set up Sublime or Atom or editor of choice. And it's really interesting if you remove your bias of 15 years of professional software development and go okay, if I were starting today, what would the environment look like, and how would I get started?
I'm lucky enough that I've grown up with the web and seen how web development has changed and been able to gain more knowledge as it's appeared. I don't envy anybody who has to come into the industry now and suddenly have to drink from this firehose of all these different frameworks, all these different technologies. Yeah, I started off by just right-clicking and viewing source on HTML files back in 1998 or something ridiculous like that. And CSS didn't even exist or wasn't used. And so it's a much different world than 24 years ago.
STEPH: That is something that Chris and I have mentioned on previous episodes where people are coming into software development, and as much as we love Vim and it sounds like you love Vim, our advice is don't start with Vim. Don't start there. You've got so much to learn. Start with something like VS Code that's going to help you out.
And you make such a great point in regards to this lowering the barrier to entry. Because I have been part of a number of classes where you have people coming in with Macs or with a Windows machine, and then you're trying to get everybody set up. You want them to use the same browser for testing. And we spend like a whole class just getting everybody on the same page and making sure their machines are working or then troubleshooting if something's not.
But if they can just go to GitHub and then they can run things seamlessly there, that's a total game-changer in terms of how I would teach a class, and it would just be far easier. So I hadn't even considered the benefits that would have for teachers or just for onboarding teams as well. But yeah, specifically for leading a class, I think that is a huge benefit.
GitHub did some pretty cool stuff around when they were launching that as well because I went back and watched some of their GitHub Universe sessions that they had where they were talking about Codespaces. And one of the things that they did that I really appreciated was how they went about launching Codespaces. So initially, it was how fast can this be? Or what's our proof of concept? And I think when they were building this, they found it took about 45 minutes if they wanted to spin up an application and then provide you a development environment. And they're like, okay, cool, like, we can do this, but it's 45 minutes, and that's not going to work.
And so then their next iteration, they got it down to 25 minutes, and then they got it down to 5 minutes. And now they've got it to the point that it's instantaneous because they're building stuff in the background overnight. And so then that way, when you click on it, it's just all ready for you. But I loved that cycle, that process that they went through of can we even do this? And then let's see, slowly, incrementally, how fast can we get it?
And then, to get feedback, instead of transitioning their own internal teams to it right away, they created this more public club. I think they called it The Computer Club, something like that. And they're like, hey, if you want to be part of Codespaces or try out this new feature that we have, delete all the source and the things that you need locally, and then just commit to using Codespaces. And then, if you are stuck or if you have trouble, then your job is to let us know so then we can iterate, and we can fix it. I really liked that approach that they took to launching this product and then getting feedback from everyone and then improving upon it.
ROB: Yeah, that sounds like an Agile developer's dream where you just put something out there that's the bare bones, and you're given license to learn from that experience and how people are actually using that tool. That's something we've actually tried to do on the client project at the moment is adding all the...now that there's a different flow in Germany, there are different questions we need to ask. And so that could be quite a complex thing to put into place.
So what we said is what we're going to do is just put in the different screens, and all you have is one option to click. So you click that option, you go to the next one, go to the next one, go to the next one. Then we have something that the customer can click on and play with and understand, and then we can iterate on top of that. But it also allows us to identify areas of risk because you can go, oh, where does this information come from? But now we need to get this from a third-party service.
So that's the riskiest thing we've got to work on here, where this other thing is just a hard-coded list of three-door or five-door cars. And so that's an easier problem to solve. So allowing yourself to put something that could be quite complex like GitHub Codespaces and go okay, we're going to put something out there. It takes 45 minutes to start up. But we're telling you it takes 45 minutes to run it. We're not happy with it, but we want to learn how you're using it so that we can then improve it but improve it in the right direction.
Because it might be that we get it to 20 minutes to start up, but you need it in half a second. That's a ridiculous example. Or it might be that you need to be able to use RubyMine with it instead of VS Code, and that's where the market isn't. That's the thing that you can't learn in isolation that you have to put something out there for people to use and play with.
STEPH: There's one other cool feature I want to highlight that I realized that they offer as well. So in the past, I've used a tool called ngrok, which then you can make your localhost public so other people can access. You can literally demo what you're working on locally, and someone else can access it. And I think that it's very cool. It's come in handy a number of times. And my understanding is that Codespaces has that feature where they can make your localhost accessible. So your work in progress you can then share with someone, and I love that.
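As a rough sketch of what that looks like in practice, assuming a Rails server listening on port 3000 and a codespace named my-codespace-name (both names are made up here):

```bash
# Tunnel a locally running server with ngrok, the approach Steph describes.
ngrok http 3000

# Or, with Codespaces, flip an already-forwarded port to public so anyone
# with the URL can open your work in progress. The subcommand comes from
# the GitHub CLI; check `gh codespace ports --help` for your version's flags.
gh codespace ports visibility 3000:public --codespace my-codespace-name
```

The same visibility toggle is also available from the Ports panel in VS Code when you're attached to a codespace.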
ROB: Oh, that's really interesting. I didn't know you could do that. I know you could forward ports from your local machine to that. But I didn't know you could share it externally. That'd be really cool. I can see how that can be really helpful in demos and pairing. And it makes sense because it's not running on your computer. It's running on some remote architecture somewhere. That's interesting.
STEPH: Well, that's the dream I've been sold from what I've been reading about GitHub Codespaces. So if I'm telling lies, you let me know [laughs] as you're working further in it than I am. But yeah, that was one of the features that I read, and I was like, yeah, that's great because I love ngrok for that purpose. And it would be really cool if that's already built into Codespaces as well.
ROB: ngrok is really interesting with things like trying to get third-party services to work. So from, the previous client, they wanted an Alexa Skill. And so, if you're trying to work with an Alexa Skill, you have to sign in from Amazon's architecture onto your local machine. You have to use ngrok as the tool there. So I wonder if that could potentially solve a problem where if there are three developers trying to develop on this if you could point to one Codespace that you're all working on rather than...
Because the problem we had was if me or Fritz or Rakesh was working on this, we'd have to go and then change the settings on the Amazon Alexa Skill to point to a different machine. Whereas I wonder if Codespaces allows you to have this entry point, you could point to like thoughtbot.codespace.github.com or something like that that would then allow you to share that instance. That's something interesting that I think about now. I wonder if you could share Codespace instances amongst each other. I don't know.
STEPH: Yeah, I'm intrigued too. That sounds like it'd be really helpful. So circling back just a bit to where we were talking about wearing different hats in terms of working on client work, and then also working on the team, and then also potentially some sales work as well, I'm curious, how do you balance that transition? How do you balance solving hard problems in a codebase and then also transition to solving hard problems in the management space? How do you make all of that fit cohesively in your day or your week?
ROB: The main thing that somebody said to me recently is that you can only do so much in a day, and it's about the order that you approach those things. And just be content with the fact that you're not going to get everything done. But you have to make sure that you work on things in the right order and just take your time and then work through them. I read a really good book recently that was recommended to me by my coach called Time Off. And it's all about finding your rest ethic, which sounds a bit abstract and a bit weird. But all it is it's about understanding that you can't be working 100% all the time. It's not possible.
As developers, sometimes we can forget that we're creative people, and creativity comes from a part of your brain that works subconsciously. So it's important for you to take breaks throughout the day and kind of go okay; I use the Pomodoro Technique. So I have an app that runs, and every 25 minutes, I just take a little break. I don't use it in the way that it's supposed to be used. I just use it to give me a trigger to have a break every 25 minutes. And so in that time, I'll just step away from my computer. I'll walk to the kitchen, grab a glass of water.
I usually have a magazine or a book next to my table. So I have a magazine here at the moment. I'll just read a page of that just to kind of rest my eyes, so they focus at a different level but also just to get my brain thinking about something else. And it seems counterproductive that like, oh, you're stepping out of what you were doing. But then I find like, oh, I suddenly have a little refresher to like, oh, I need to get back into what I was doing. I know where I've got to go. That thing that I was thinking about now makes a little bit more sense. And even if it's a bigger break, give yourself the license to go for a walk and just kind of clear your head.
And a big thing about going for a walk is not to concentrate on completing the task of walking but to concentrate on the walk itself and taking in the things that are happening around you. And let your mind just kind of...you'll sometimes notice that oh, I can hear a bird. But that bird's been chirping for five minutes, and you didn't notice because your mind's kind of going. And if you concentrate on, I just want to complete this walk, that's what I'm out here to do, then you lose that ability to let your mind reset. That's a big thing that I'm working on personally to concentrate on the doing rather than the getting done.
And it ties into the craft of being a software developer because if you concentrate on the actual writing of the code and the best practices that we all believe in, you end up with something better that you don't then have to revisit at a later time. Where if you just try and get something done, you're just going to end up having to come back to it or have to revisit in some other way. I've actually got a blog post coming out soon about notifications on phones. I'm a big believer that your phone belongs to you and that if your work wants you to have work notifications on your phone, then they could buy you a phone just for that purpose.
The only thing where I kind of draw the line is I have notifications for meetings on my phone because I can't think of another way to get those things to ping up at me. And I understand that there are jobs where you do need to have those sorts of notifications, especially things like where you're on call; it's a big thing. But when it comes to things where a manager wants to get a hold of you straight away, from a trust point of view, that's where I think things fall down. And you're questioning, like, okay, why does this person need to get hold of me at 7:00, 8:00, 9:00, 10:00 o'clock at night? And should I be available?
We bill by the day at thoughtbot. And so when I find, not when I find but when I talk to people, and they say, "Oh, I was still working at 7:30, 8:00 o'clock," I will say, "Why? You're devaluing your own time at that point because we're not billing any extra for that time. So you're making your craft and your skill...you're cheapening it." And I want them to relish the skills and competencies that they have. That's a big thing for me. We're very lucky at thoughtbot that we can draw a boundary at the end of the day and go, okay, that's it. There's no expectation for me. It is much more difficult at product companies.
But yeah, I think it's something that as an industry, and it's a bigger thing as a society, especially with younger people coming into the industry who have never worked in an office and may never work in an office, that idea of where is the cutoff? For so much of the pandemic, the people I would get concerned about the most are the people whose beds I could see behind them because I'm thinking to myself, you spend at least 16 hours a day in that same room.
And that's going to become the norm for people. And if people don't have those rest periods and those breaks and aren't given the opportunities to do that by their managers, then it's not going to end well. And happy people and fulfilled people do the best jobs from a business point of view. But that's never the way I approach it, but that's what I say to people.
STEPH: I think that's one of the biggest mistakes that I made early on in my career, and even now, I still have to coach myself through it. It's like you said, we are creative people and people in software and in general and not just developers, but it's a creative craft. And I wouldn't step away to take breaks. I just thought if I pushed hard enough, I would figure it out, and then I could get done with my work because I was so focused on getting it done versus the doing, as you'd highlighted earlier.
I haven't really thought about it in that particular light of focusing on this is the thing that I'm working on. And yes, I do want to get it done, but let's also focus on the doing portion of it. And so I wouldn't step away for walks. I wouldn't step away for breaks. And that is something that I have learned the hard way that when I actually gave myself that time to breathe, if I gave myself a moment to relax, then I would come back refreshed and then ready to tackle whatever challenge was in front of me.
And same for keeping a magazine that's near my desk; I have found that if I keep a book or something that I enjoy...because, at some point, my brain is going to look for some rest, like, it happens. That's when we flip open Twitter or Instagram or emails or something because our brain is looking for something easy and maybe a little bit of like brain candy, something to give us a little hit.
And I have found that if I keep something else more intentional by my desk, something that I want to read or that I'm enjoying, then I find that when I am seeking for something that's short that I can look at, that I feel more relaxed and fulfilled from that versus then if I go to Twitter, and then I see a bunch of stuff, I don't like, and then I go back to work. [laughs] And it has the opposite effect of what I actually wanted to do with my downtime.
I love the sound of this book. We'll be sure to include a link in the show notes because it sounds like a really good book to read. And I've also worked on improving the setup with my phone and notifications, where I have compartmentalized all the work-related apps into one folder, and then I keep it on the third screen of my phone.
So if I want to see something that's work-related, it's very intentional of like, I have to scroll past all of the stuff that matters to me outside of work and then get to that work section and then click in that folder to then see like, okay, this is where I have Slack, and Gmail, and Basecamp, and all the other things that I might need for work. And I have found that has really helped me because I do still have the notifications on my phone, but at least putting it on its own screen further away from the home screen has been really helpful.
ROB: Do you find that you still get distracted by that, though, when you're in the flow of doing something else?
STEPH: I don't with my phone. I am a person who ignores my phone really well. I don't know if that's a good thing or a bad thing, [laughs] but it is a truth of who I am where I'm pretty good at ignoring my phone.
ROB: That's a good skill to have. If there's any phone in the room and a notification goes off, my head swivels, and I pivot, and I'm like, oh, yeah, some dopamine hit over there that I can get from looking at somebody else's notification.
STEPH: I have noticed that in the other people that I'm around. Yeah, it's that sound that just triggers people like, oh, I got to look. And even if you know it's not your phone like you heard someone else's phone ding, it still makes you check your phone even though probably there's a part of your brain that recognizes like, that wasn't mine, but I'm still going to check anyways. And I have worked hard to fight that where even if I hear my phone go off, I'm like, okay, cool, I'll get to it. I'll check it when I need to. And I'm that person that whenever apps always ask me, "Can we send you notifications?" I'm like, no, you may not send me notifications. [laughs]
Something else you said that I haven't thought about until just now is the idea that there are some people who have never worked in an office or may never work in an office because we are leaning into more remote jobs. And that is fascinating to me to think about that someone won't have had that experience. But you make such a good point that we need to start thinking about these boundaries now and how we manage our remote work and our home life because this is, going forward, going to be the new norm for a number of people. So how do we go ahead and start putting good practices in place for those future workers?
ROB: One of the things, as we've hired people from a remote point of view who've only worked with thoughtbot remotely, is the idea of visibility. And I don't mean the visibility of I want to see when somebody's working but maybe the invisibility of people. Because you can't see when people are taking breaks, you assume that everybody is working all the time, and so then you don't take those breaks. And so this is something we saw with people who we hired in the first six months of being remote. And they were burning out because they didn't realize that other people were taking breaks. Because they didn't know about the cultural norms of how we worked at thoughtbot.
But people who had worked in the studio would know that people would get up and have breaks. People would get up and go get a coffee from a coffee shop and then have a walk around. They didn't know that that was the culture because they bring the culture from other places with them. But then it's much harder to get people to understand your way of working and how we think that we should approach things when you are sat in isolation in a room with a screen. And that's something that we've had to say to people to break that down.
And even things that we took for granted when we worked in a studio where somebody would get up and ask somebody if they could pair with them even if they weren't on the same project. Somebody might have more Elm knowledge or React Native knowledge, or Elixir knowledge. And you'd get up and say, "Hey, can I borrow some of your time just to go over this thing, to pair?" And everybody would say, "Yeah, yeah, I can find some time. If not now, we can do it later." And recently, we've had people saying, "Oh, is it okay if we pair across projects? Is it okay if we pair with other people?" It's like, "Yeah, pair."
One of the big things we say is that we have this vast amount of knowledge across thoughtbot, across the world that we can tap into and that you can use. And that's just one example of how do you get those core things that you take for granted and help people understand them? Because you don't know what people don't know. And it's all about that implied knowledge. So that's something that we learned. And we try and say to people and instill in them about yeah, take breaks. You can pair with people.
There are people who bring in culture from other places with them. But then, to go back to where you started, how do you start with people who have no culture with them or have the culture of coming from maybe from school, or university, or from a different industry? How do you help those people add to your culture but also learn from your culture at the same time? Big people problems.
STEPH: Have you found any helpful strategies to normalize that take a break culture?
ROB: One thing we tried, but it doesn't last very long because people are lazy, is putting it in Slack saying, "I'm going for a break." And you can do that, but it's so artificial. After a week or two weeks, people just stopped doing it. It was through conversation. We have a regular retrospective as the Launchpad II team where we talk about what is working, what isn't working. And we have such a trusting environment where people will say things along the lines of this isn't working for me, or I feel like I'm burning out. Then we will talk to each other about it and figure out where it comes from.
And it's a good point to raise that I don't think we have explicitly addressed it. But it is something that we will address. I'm not going to say could address; we will address it. I will talk to our latest hire, Dorian, who I have a one-on-one with next week, and to kind of talk to him about it. And we should maybe try and codify that in our handbook somewhere so everybody can learn from it, at least start a strategy and a conversation. Because I don't think it is something that we do talk about. It's the problem of being siloed and being remote and time zones as well.
A lot of stuff that Launchpad I knows Launchpad II doesn't necessarily know because we only have three, maybe, hours if people are based on the East Coast where we overlap. I have meetings with Geronda, who's our DEI Program Manager, and she lives in Seattle. And so sometimes I'll talk to her at 5:00 o'clock, and it's 7:00 o'clock in the morning for her. And you have different energy levels. But yeah, so we spend time to try and figure out how we work together.
STEPH: Yeah, I like that idea of highlighting that we take breaks somewhere that's part of your expectations as part of your role. Like, this is an expectation of your role; you're going to take breaks. You're going to step away for lunch. You're going to stick to a certain set of hours in terms of having like an eight-hour workday with a healthy lunch break in there. I think that's a really good idea.
On the Boost team, I have found that people have adopted the habit of not always but typically sharing of, like, "Hey, I'm stepping away for a coffee break," or "I'm having lunch. Maybe like a late lunch, but I'm taking it," Or "I am stepping away for a walk." You often see later in the afternoon where there are a number of people that are then saying, "Hey, I'm going for a walk."
And I feel that definitely helps me when I see it every day to reinforce like, yes; I should do this too because I already admitted I'm bad at this. So it helps reinforce it for me when I see other people saying that as well. But then I can see that that takes time to build that into a team's culture or to find easy ways to share that. So just putting it upfront in like a role expectation also feels like a really good place to then highlight and then to reinforce it as then people are setting that example.
ROB: One thing that Nick Charlton tried to introduce was a Strava group. There's a thoughtbot Strava group. So, if people are members of it, you can see that they've been walking and things like that. It was quite an interesting way to automate it. I think it fell off a cliff. But it was something that we did try to see how we can make the visibility of this a little bit easier. But yeah, the best thing I've seen is, like you say, having that notification in Slack or somewhere where you can see that other people are stepping away from their keyboards.
STEPH: Well, as you mentioned, solving people problems is totally easy, you know. It's a totally trivial task although I'm sure we could spend too many hours talking about it. All right, so I do have one more very important question for you, Rob. And this goes back to a debate that Chris and I are having, and I'd love to get you to weigh in on it. So there are Pop-Tarts, these things called Pop-Tarts in the world. And I don't know if you're a fan, but if you were given the option to eat a Pop-Tart with frosting or a Pop-Tart without frosting, which one do you think you would choose?
ROB: That's an interesting question. Is there a specific flavor? Because I think that the Strawberry Pop-Tart I would have with frosting but maybe the chocolate one I have without. I know there are all sorts of exotic flavors of Pop-Tarts. But I think I would edge towards with frosting as a default. That's my undiplomatic answer.
STEPH: I like that nuanced answer. I also like how you refer to the flavors as exotic. I think that was very kind of you [laughs] for the other, like, melon crush or wild flavors that they have. Awesome. All right. Well, I think that's a perfect note for us to wrap up.
Rob, thank you so much for coming on the show and for bringing up all of these wonderful ideas and topics and sharing your experience with Codespaces. For folks that are interested in following your work or interested in getting in touch with you, where's the best place for them to do that?
ROB: Yeah, thank you so much for having me. It's been fantastic to have a chat. If people do want to find me, the best place would be on Twitter. So my handle on Twitter is @purinkle, which I understand is hard for people to maybe understand via a podcast, but we'll put a link in the show notes so people can find me more easily.
And that's probably also a good time to say that I am actually trying to find a development team lead to join our Launchpad II team. So we are looking for somebody who lives in Europe, Middle East, or Africa to join our team as a developer and manager of two to three people. There's more information on the thoughtbot website, and I do tweet about it very, very often. So feel free to reach out to me if that's of any interest to you.
STEPH: Awesome. We'll be sure to include a link to that in the show notes as well. On that note, shall we wrap up?
ROB: Yeah, let's wrap up.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at hosts@bikeshed.fm via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeee!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

May 24, 2022 • 45min
339: What About Pictures?
Steph has a baby update and thoughts on movies, plus a question for Chris related to migrating Test Unit tests to RSpec.
Chris watched a video from Google I/O where Chrome devs talked about a new feature called Page Transitions. He's also been working with a tool called Customer.io, an omnichannel communication whiz-bang adventure!
Page transitions Overview
Using yield_self for composable ActiveRecord relations
A Case for Query Objects in Rails
Customer.io
Turning the database inside-out with Apache Samza | Confluent
Datomic
About CRDTs • Conflict-free Replicated Data Types
Apache Kafka
Resilient Management | A book for new managers in tech
Mixpanel: Product Analytics for Mobile, Web, & More
Become a Sponsor of The Bike Shed!
Transcript:
CHRIS: Golden roads are golden. Okay, everybody's got golden roads. You have golden roads, yes? That is what we're --
STEPH: Oh, I have golden roads, yes.
[laughter]
CHRIS: You might should inform that you've got golden roads, yeah.
STEPH: Yeah, I don't know if I say might should as much but might could. I have been called out for that one a lot; I might could do that.
CHRIS: [laughs]
STEPH: That one just feels more natural to me than normal. Anytime someone calls it out, I'm like, yeah, what about it?
[laughter]
CHRIS: Do you want to fight?
STEPH: Yeah, are we going to fight?
CHRIS: I might could fight you.
STEPH: I might could. I might should.
[laughter]
CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world?
STEPH: Hey, Chris. I have a couple of fun updates. I have a baby Viccari update, so little baby weighs about two pounds now, two pounds. I'm 25 weeks along. So not that I actually know the exact weight, I'm using all those apps that estimate based on how far along you are, so around two pounds, which is novel. Oh, and then the other thing I'm excited to tell you about...I'm not sure how I should feel that I just got more excited about this other thing. I'm very excited about baby Viccari.
But the other thing is there's a new Jurassic Park movie coming out, and I'm very excited. I think it's June 10th is when it comes out. And given how much we have sung that theme song to each other and make references to what a clever girl, I needed to share that with you. Maybe you already know, maybe you're already in the loop, but if you don't, it's coming.
CHRIS: Yeah, the internet likes to yell things like that. Have you watched all of the most recent ones? There are like two, and I think this will be the third in the revisiting or whatever, the Jurassic World version or something like that. But have you watched the others?
STEPH: I haven't seen all of them. So I've, of course, seen the first one. I saw the one that Chris Pratt was in, and now he's in the latest one. But I think I've missed...maybe there's like two in the middle there. I have not watched those.
CHRIS: There are three in the original trilogy, and then there are three now in the new trilogy, which now it's ending, and they got everybody.
STEPH: Oh, I'm behind.
CHRIS: They got people from the first one, and they got the people from the second trilogy. They just got everybody, and that's exciting. You know, it's that thing where they tap into nostalgia, and they take advantage of us via it. But I'm fine. I'm here for it.
STEPH: I'm here for it, especially for Jurassic Park. But then there's also a new Top Gun movie coming out, which, I'll be honest, I'm totally going to watch. But I really didn't remember the first one. I don't know that I've really ever watched the first Top Gun. So Tim, my partner, and I watched that recently, and it's such a bad movie. I'm going to say it; [laughs] it's a bad movie.
CHRIS: I mean, I don't want to disagree, but the volleyball scene, come on, come on, the volleyball scene.
[laughter]
STEPH: I mean, I totally had a good time watching the movie. But the one part that I finally kept complaining about is because every time they showed the lead female character, I can't think of her character name or the actress's name, but they kept playing that song, Take My Breath Away. And I was like, can we just get past the song? [laughs] Because if you go back and watch that movie, I swear they play it like six different times. It was a lot. It was too much. So I moved it into bad movie category but bad movie totally worth watching, whatever category that is.
CHRIS: Now I kind of want to revisit it. I feel like the drinking game writes itself. But at a minimum, anytime Take My Breath Away plays, yeah. Well, all right, good to know. [laughs]
STEPH: Well, if you do that, let me know how many shots or beers you drink because I think it will be a fair amount. I think it will be more than five.
CHRIS: Yeah, it involves a delicate calibration to get that right. I don't think it's the sort of thing you just freehand. It writes itself but also, you want someone who's tried it before you so that you're not like, oh no, it's every time they show a jet. That was too many. You can't drink that much while watching this movie.
STEPH: Yeah, that would be death by Top Gun.
CHRIS: But not the normal way, the different, indirect death by Top Gun.
STEPH: I don't know what the normal way is. [laughs]
CHRIS: Like getting shot down by a Top Gun pilot.
[laughter]
STEPH: Yeah, that makes sense. [laughs]
CHRIS: You know, the dogfighting in the plane.
STEPH: The actual, yeah, going to war away. Just sitting on your couch and you drink too much poison away, yeah, that one. All right, that got weird. Moving on, [laughs] there's a new Jurassic Park movie. We're going to land on that note.
And in the more technical world, I've got a couple of things on my mind. One of them is I have a question for you. I'm very excited to run this by you because I could use a friend in helping me decide what to do. So I am still on that journey where I am migrating Test::Unit tests over to RSpec.
And as I'm going through, it's going pretty well, but it's a little complicated because some of the Test::Unit tests have different setup than, say, the RSpec ones do. They might run different scripts beforehand where they're loading data. That's perhaps a different topic, but that's happening. And so that has changed a few things.
But then overall, I've just been really just porting everything over, like, hey, if it exists in the Test::Unit, let's just bring it to RSpec, and then I'm going to change these asserts to expects and really not make any changes from there. But as I'm doing that, I'm seeing areas that I want to improve and things that I want to clear up, even if it's just extracting a variable name.
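As a rough illustration of the mechanical port Steph is describing, here is what a single test might look like before and after; the Order model and its assertions are invented for the example, not taken from the codebase being discussed:

```ruby
# Before: a Test::Unit-style test
class OrderTest < Test::Unit::TestCase
  def test_free_shipping_over_fifty_dollars
    order = Order.new(total: 60)

    assert order.free_shipping?
    assert_equal 0, order.shipping_cost
  end
end

# After: the same test ported to RSpec -- asserts become expects, nothing else changes
RSpec.describe Order do
  it "waives shipping for orders over fifty dollars" do
    order = Order.new(total: 60)

    expect(order.free_shipping?).to be(true)
    expect(order.shipping_cost).to eq(0)
  end
end
```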
Or, as I'm moving some of these over in Test::Unit, it's not clear to me exactly what the test is about. Like, it looks more like a method name in the way that the test is being described, but the actual behavior isn't clear to me as if I were writing this in RSpec, I think it would have more of a clear description. Maybe that's not specific to the actual testing framework. That might just be how these tests are set up.
But I'm at that point where I'm questioning should I keep going in terms of where I am just copying everything over from Test::Unit and then moving it over to RSpec? Because ultimately, that is the goal, to migrate over. Or should I also include some time to then go back and clean up and try to add some clarity, maybe extract some variable names, see if I can reduce some lets, maybe even reduce some of the test helpers that I'm bringing over?
How much cleanup should be involved, zero, lots? I don't know. I don't know what that...[laughs] I'm sure there's a middle ground in there somewhere. But I'm having trouble discerning for myself what's the right amount because this feels like one of those areas where if I don't do any cleanup, I'm not coming back to it, like, that's just the truth. So it's either now, or I have no idea when and maybe never.
CHRIS: I'll be honest, the first thing that came to mind in this most recent time that you mentioned this is, did we consider just deleting these tests entirely? Is that on...like, there are very few of them, right? Like, are they even providing enough value? So that was question one, which let me pause to see what your thoughts there were. [chuckles]
STEPH: I don't know if we specifically talked about that on the mic, but yes, that has been considered. And the team that owns those tests has said, "No, please don't delete them. We do get value from them." So we can port them over to RSpec, but we don't have time to port them over to RSpec. So we just need to keep letting them go on. But yet, not porting them conflicts with my goal of then trying to speed up CI. And so it'd be nice to collapse these Test::Unit tests over to RSpec because then that would bring our CI build down by several meaningful minutes. And also, it would reduce some of the complexity in the CI setup.
CHRIS: Gotcha. Okay, so now, having set that aside, I always ask the first question of like, can you just put Derek Prior’s phone number on the webpage and call it an app? Is that the MVP of this app? No? Okay, all right, we have to build more. But yeah, I think to answer it and in a general way of trying to answer a broader set of questions here...
I think this falls into a category of like if you find yourself having to move around some code, if that code is just comfortably running and the main thing you need to do is just to get it ported over to RSpec, I would probably do as little other work as possible. With the one consideration that if you find yourself needing to deeply load up the context of these tests like actually understand them in order to do the porting, then I would probably take advantage of that context because it's hard to get your head into a given piece of code, test or otherwise.
And so if you're in there and you're like, well, now that I'm here, I can definitely see that we could rearrange some stuff and just definitively make it better, if you get to that place, I would consider it. But if this ends up being mostly a pretty rote transformation like you said, asserts become expects, and lets get switched around, you know, that sort of stuff, if it's a very mechanical process of getting done, I would probably say very minimal.
But again, if there is that, like, you know what? I had to understand the test in order to port them anyway, so while I'm here, let me take advantage of that, that's probably the thing that I would consider. But if not that, then I would say even though it's messy and whatnot and your inclination would be to clean it, I would say leave it roughly as is. That's my guess or how I would approach it.
STEPH: Yeah, I love that. I love how you pointed out, like, did you build up the context? Because you're right, in a lot of these test cases, I'm not, or I'm trying really hard to not build up context. I'm trying very hard to just move them over and, where I do have to dig in, it's mainly to find better test descriptions. That's the main area I'm struggling with...how can I more explicitly state what this test does so the next person reading this will have more comprehension than I do? But otherwise, I'm trying hard to not have any real context around it.
And that's such a good point because that's often...when someone else is in the middle of something, and they're deciding whether to include that cleanup or refactor or improvement, one of my suggestions is like, hey, we've got the context now. Let's go with it. But if you've built up very little context, then that's not a really good catalyst or reason to then dig in deeper and apply that cleanup. That's super helpful. Thank you. That will help reinforce what I'm going to do, which is exactly let's migrate RSpec and not worry about cleanup, which feels terrible; I'm just going to say that into the world. But it also feels like the right thing to do.
CHRIS: Well, I'm happy to have helped. And I share the like, and it feels terrible. I want to do the right thing, but sometimes you got to pick a battle sort of thing.
STEPH: Cool. Well, that's a huge help to me. What's going on in your world?
CHRIS: What's going on in my world? I watched a great video the other day from the Google I/O. I think it's an event; I'm not actually sure, conference or something like that. But it was some Google Chrome developers talking about a new feature that's coming to the platform called Page Transitions. And I've kept an eye on this for a while, but it seems like it's more real. Like, I think they put out an RFC or an initial sort of set of ideas a while back. And the web community was like, "Oh, that's not going to work out so well."
So they went back to the drawing board, revisited. They've actually implemented a version of the API in Chrome Canary. And then, in the video that I watched, which we'll include a show notes link to, they demoed the functionality of the Page Transitions API and showed what you can do. And it's super cool. It allows for the sort of animations that you see in a lot of native mobile apps where you're looking at a ListView, you click on one of the items, and it grows to fill the whole screen. And now you're on the detail screen for that item that you were looking at.
But there was this very continuous animated transition that allows you to keep context in your head and all of those sorts of nice things. And this just really helps to bridge that gap between, like, the web often lags behind the native mobile platforms in terms of the experiences that we can build. So it was really interesting to see what they've been able to pull off. The demo is a pretty short video, but it shows a couple of different variations of what you can build with it. And I was like, yeah, these look like cool native app transitions, really nifty.
One thing that's very interesting is the actual implementation of this. So it's like you have one version of the page, and then you transition to a new version of the page, and in doing so, you want to animate between them. And the way that they do it is they have the first version of the page. They take a screenshot of it like the browser engine takes a screenshot of it. And then they put that picture on top of the actual browser page. Then they do the same thing with the next version of the page that they're going to transition to. And then they crossfade, like, change the opacity and size and whatnot between the two different images, and then you're there.
And in the back of my mind, I'm like, I'm sorry, what now? You did which? I'm like, is this the genius solution that actually makes this work and is performant? And I wonder if there are trade-offs. Like, do you lose interactivity between those because you've got some images that are just on the screen? And what is that like? But as they were going through it, I was just like, wait, I'm sorry, you did what? This is either the best idea I've ever heard, or I'm not so sure about this.
STEPH: That's fascinating. You had me with the first part in terms of they take a screenshot of the page that you're leaving. I'm like, yeah, that's a great idea. And then talking about taking a picture of the other page because then you have to load it but not show it to the user that it's loaded. And then take a picture of it, and then show them the picture of the loaded page. But then actually, like you said, then crossfade and then bring in the real functionality. I am...what am I?
[laughter]
CHRIS: What am I actually?
STEPH: [laughs] What am I? I'm shocked. I'm surprised that that is so performant. Because yeah, I also wouldn't have thought of that, or I would have immediately have thought like, there's no way that's going to be performant enough. But that's fascinating.
CHRIS: For me, performance seems more manageable, but it's the like, what are you trading off for that? Because that sounds like a hack. That sounds like the sort of thing I would recommend if we need to get an MVP out next week. And I'm like, what if we just tried this? Listen, it's got some trade-offs. So I'm really interested to see are those trade-offs present? Because it's the browser engine. It's, you know, the low-level platform that's actually managing this. And there are some nice hooks that allow you to control it. And at a CSS level, you can manage it and use keyframe animations to control the transition more directly.
There's a JavaScript API to instrument the sequencing of things. And so it's giving you the right primitives and the right hooks. And the fact that the implementation happens to use pictures or screenshots, to use a slightly different word, it's like, okey dokey, that's what we're doing. Sounds fun. So I'm super interested because the functionality is deeply, deeply interesting to me.
Svelte actually has a version of this, the crossfade utility, but you have to still really think about how do you sequence between the two pages and how do you do the connective tissue there? And then Svelte will manage it for you if you do all the right stuff. But the wiring up is somewhat complicated. So having this in the browser engine is really interesting to me. But yeah, pictures.
STEPH: This is one of those ideas where I can't decide if this was someone who is very new to the team and new to the idea and was like, "Have we considered screenshots? Have we considered pictures?" Or if this is like the uber senior person on the team that was like, "Yeah, this will totally work with screenshots." I can't decide where in that range this idea falls, which I think makes me love it even more. Because it's very straightforward of like, hey, what if we just tried this? And it's working, so cool, cool, cool.
CHRIS: There's a fantastic meme that's been making the rounds where it's a bell curve, and it's like, early in your career, middle of your career, late in your career. And so early in your career, you're like, everything in one file, all lines of code that's just where they go. And then in the middle of your career, you're like, no, no, no, we need different concerns, and files, and organizational structures.
And then end of your career...and this was coming up in reference to the TypeScript team seems to have just thrown everything into one file. And it's the thing that they've migrated to over time. And so they have this many, many line file that is basically the TypeScript engine all in one file. And so it was a joke of like, they definitely know what they're doing with programming. They're not just starting last week sort of thing. And so it's this funny arc that certain things can go through.
So I think that's an excellent summary there [laughs] of like, I think it was folks who have thought about this really hard. But I kind of hope it was someone who was just like, "I'm new here. But have we thought about pictures? What about pictures?" I also am a little worried that I just deeply misunderstood [laughs] the representation but glossed over it in the video, and I'm like, that sounds interesting. So hopefully, I'm not just wildly off base here. [laughs] But nonetheless, the functionality looks very interesting.
STEPH: That would be a hilarious tweet. You know, I've been waiting for that moment where I've said something that I understood into the mic and someone on Twitter just being like, well, good try, but... [laughs]
CHRIS: We had a couple of minutes where we tried to figure out what the opposite of ranting was, and we came up with pranting and made up a word instead of going with praising or raving. No, that's what it is, raving. [laughs]
STEPH: No, raving. I will never forget now, raving. [laughs]
CHRIS: So, I mean, we've done this before.
STEPH: That's true. Although they were nice, I don't think they tweeted. I think they sent in an email. They were like, "Hey, friends."
[laughter]
CHRIS: Actually, we got a handful of emails on that.
[laughter]
STEPH: Did you know the English language?
CHRIS: Thank you, kind Bikeshed audience, for not shaming us in public. I mean, feel free if you feel like it. [laughs]
But one other thing that came up in this video, though, is the speaker was describing single-page apps are very common, and you want to have animated transitions and this and that. And I was like, single-page app, okay, fine. I don't like the terminology but whatever. I would like us to call it the client-side app or client-side routing or something else. But the fact that it's a single page is just a technical consideration; no user would call it that. Users are like, I go to the web app. I like that it has URLs. Those seem different to me. Anyway, this is my hill. I'm going to die on it.
But then the speaker in the video, in contrast to single-page app referenced multi-page app, and I was like, oh, come on, come on. I get it. Like, yes, there are just balls of JavaScript that you can download on the internet and have a dynamic graphics editor. But I think almost all good things on the web should have URLs, and that's what I would call the multiple pages. But again, that's just me griping about some stuff. And to name it, I don't think I'm just griping for griping sake.
Like, again, I like to think about things from the user perspective, and the URL being so important. And having worked with plenty of apps that are implemented in JavaScript and don't take the URL or the idea that we can have different routable resources seriously and everything is just one URL, that's a failure mode in my mind. We missed an opportunity here. So I think I'm saying a useful thing here and not just complaining on the internet. But with that, I will stop complaining on the internet and send it back over to you. What else is new in your world, Steph?
STEPH: I do remember the first time that you griped about it, and you were griping about URLs. And there was a part of me that was like, what is he talking about? [laughter] And then over time, I was like, oh, I get it now as I started actually working more in that world. But it took me a little bit to really appreciate that gripe and where you're coming from. And I agree; I think you're coming from a reasonable place, not that I'm biased at all as your co-host, but you know.
CHRIS: I really like the honest summary that you're giving of, like, honestly, the first time you said this, I let you go for a while, but I did not know what you were talking about. [laughs] And I was like, okay, good data point. I'm going to store that one away and think about it a bunch. But that's fine. I'm glad you're now hanging out with me still.
[laughter]
STEPH: Don't do that. Don't think about it a bunch. [laughs] Let's see, oh, something else that's going on in my world. I had a really fun pairing session with another thoughtboter where we were digging into query objects and essentially extracting some logic out of an ActiveRecord model and then giving that behavior its own space and elevated namespace in a query object. And one of the questions or one of the things that came up that we needed to incorporate was optional filters.
So say if you are searching for a pizza place that's nearby and you provide a city, but you don't provide what's the optional zip code, then we want to make sure that we don't apply the zip code in the where clause because then you would return all the pizza places that have a nil zip code, and that's just not what you want. So we need to respect the fact that not all the filters need to be applied. And there are a couple of ways to go about it. And it was a fun journey to see the different ways that we could structure it.
So one of the really good starting points is captured in a blog post by Derek Prior, which we'll include a link to in the show notes, and it's using yield_self for composable ActiveRecord relations. But essentially, it starts out with an example where it shows that you're assigning a value to then the result of an if statement. So it's like, hey, if the zip code is present, then let's filter by zip code; if not, then just give us back the original relation. And then you can just keep building on it from there.
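A minimal sketch of the style that post builds toward, using then (Ruby's newer alias for yield_self) to chain optional filters; PizzaPlacesQuery, the PizzaPlace model, and its parameters are invented for illustration:

```ruby
# A query object with optional filters; each step returns the relation,
# applying a where clause only when the corresponding param is present.
class PizzaPlacesQuery
  def self.call(relation = PizzaPlace.all, city:, zip_code: nil)
    relation
      .then { |scope| scope.where(city: city) }
      .then { |scope| zip_code.present? ? scope.where(zip_code: zip_code) : scope }
  end
end

PizzaPlacesQuery.call(city: "Boston")                    # zip code filter skipped
PizzaPlacesQuery.call(city: "Boston", zip_code: "02101") # both filters applied
```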
And then there's a really nice implementation that Derek built on that then uses yield_self to pass the relation through, which then provides a really nice readability for as you are then stepping through each filter and which one should and shouldn't be applied. And now there's another blog post, and this one's written by Thiago Silva, A Case for Query Objects in Rails. And this one highlighted an approach that I haven't used before. And I initially had some mixed feelings about it.
But this approach uses the extending method, which is a method that's on ActiveRecord query methods. And it's used to extend the scope with additional methods. You can either do this by providing the name of a module or by providing a block. It's only going to apply to that instance or to that specific scope when you're using it. So it's not going to be like you're running an include or something like that where all instances are going to now have access to these methods.
So by using that method, extending, then you can create a module that says, "Hey, I want to create this by-zip-code filter that will then check if we have a zip code, let's apply it; if not, return the relation." And it also creates a really pretty chaining experience of, like, here's my original class name. Let's extend with these specific scopes, and then we can say by zip code, by pizza topping, whatever else it is that we're looking to filter by.
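A rough approximation of that extending-based approach (not the exact code from the blog post; the model and scope names are made up):

```ruby
module PizzaPlaces
  # These scopes only exist on relations that have been extended with this
  # module, so nothing leaks onto PizzaPlace itself.
  module Scopes
    def by_city(city)
      city.present? ? where(city: city) : self
    end

    def by_zip_code(zip_code)
      zip_code.present? ? where(zip_code: zip_code) : self
    end
  end

  def self.query(relation = PizzaPlace.all)
    relation.extending(Scopes)
  end
end

PizzaPlaces.query.by_city("Boston").by_zip_code(nil)
# => only the city filter makes it into the WHERE clause
```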
And I was initially...I saw the extending, and it made me nervous because I was like, oh, what all does this apply to? And is it going to impact anything outside of this class? But the more I've looked at it, the more I really like it. So I think you've seen this blog post too. And I'm curious, what are your thoughts about this?
CHRIS: I did see this blog post come through. I follow that thoughtbot blog real close because it turns out some of the best writing on the internet is on there. But I saw this...also, as an aside, I like that we've got two Derek Prior references in one episode. Let's see if we can go for three before the end. But one thing that did stand out to me in it is I have historically avoided scopes using scope like ActiveRecord macro thing. It's a class method, but like, it's magic. It does magic.
And a while ago, class methods and scopes became roughly equivalent, not exactly equivalent, but close enough. And for me, I want to use methods because I know stuff about methods. I know about default arguments. And I know about all of these different subtleties because they're just methods at the end of the day, whereas scopes are special; they have certain behavior. And I've never really known of the behavior beyond the fact that they get implemented in a different way. And so I was never really sold on them. And they're different enough from methods, and I know methods well. So I'm like, let's use the normal stuff where we can.
The one thing that's really interesting, though, is the returning nil that was mentioned in this blog post. If you return nil in a scope, it will handle that for you. Whereas all of my query objects have a like, well, if this thing applies, then scope dot or relation dot where blah, blah, blah, else return relation unchanged. And the fact that that natively exists within scope is interesting enough to make me reconsider my stance on scopes versus class methods. I think I'm still doing class method. But it is an interesting consideration that I was unaware of before.
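For reference, the scope behavior Chris is describing looks roughly like this; PizzaPlace is again a stand-in model. When the scope's lambda returns nil, Rails substitutes `all`, so the chain keeps working, whereas a plain class method has to supply that fallback itself:

```ruby
class PizzaPlace < ApplicationRecord
  # Returns nil when no zip code is given; Rails turns that nil into `all`.
  scope :by_zip_code, ->(zip_code) { where(zip_code: zip_code) if zip_code.present? }

  # The class-method equivalent needs an explicit else branch.
  def self.by_city(city)
    city.present? ? where(city: city) : all
  end
end

PizzaPlace.by_zip_code(nil).by_city("Boston") # still a chainable relation
```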
STEPH: Yeah, it's an interesting point. I hadn't really considered as much whether I'm defining a class-level method versus a scope in this particular case. And I also didn't realize that scopes handle that nil case for you. That was one of the other things that I learned by reading through this blog post. I was like, oh, that is a nicety. Like, that is something that I get for free. So I agree. I think this is one of those things that I like enough that I'd really like to try it out more and then see how it goes and start to incorporate it into my process.
Because this feels like one of those common areas of where I get to it, and I'm like, how do I do this again? And yield_self was just complicated enough in terms of then using the fancy method method to then be able to call the method that I want that I was like, I don't remember how to do this. I had to look it up each time. But including this module with extending and then being able to use scopes that way feels like something that would be intuitive for me that then I could just pick up and run with each time.
CHRIS: If it helps, you can use then instead of yield_self because we did upgrade our Ruby a while back to have that change. But I don't think that actually solves the thing that you're describing. It's, like, the ampersand, method, and then symbol method name magic incantation that is part of the thing that Derek wrote up. I do use it when I write query objects, but I have to think about it or look it up each time and be like, how do I do that? All right, that's how I do that.
STEPH: Yeah, that's one of the things that I really appreciate is how often folks will go back and update blog posts, or they will add an addition to them to say, "Hey, there's something new that came out that then is still relevant to this topic." So then you can read through it and see the latest and the greatest. It's a really nice touch to a number of our blog posts. But yeah, that's what was on my mind regarding query objects. What else is going on in your world?
CHRIS: I have this growing feeling that I don't quite know what to do with. I think I've talked about it across some of our conversations in the world of observability. But broadly, I'm starting to like...I feel like my brain has shifted, and I now see the world slightly differently, and I can't go back. But I also don't know how to stick the landing and complete this transition in my brain. So it's basically everything's an event stream; this feels true. That's life. The arrow of time goes in one direction as far as I understand it. And I'm now starting to see it manifest in the code that we're writing.
Like, we have code to log things, and we have places where we want to log more intentionally. Then occasionally, we send stuff off to Sentry. And Sentry tells us when there are errors, that's great. But in a lot of places, we have both. Like, we will warn about something happening, and we'll send that to the logs because we want to have that in the logs, which is basically the whole history of what's happened. But we also have it in Sentry, but Sentry's version is just this expanded version that has a bunch more data about the user, and things, and the browser that they were in. But they're two variations on the same event.
And then similarly, analytics is this, like, third leg of well, this thing happened, and we want to know about it in the context. And what's been really interesting is we're working with a tool called Customer.io, which is an omnichannel communication whiz-bang adventure. For us, it does email, SMS, and push notifications. And it's integrated into our segment pipeline, so events flow in, events and users essentially. So we have those two different primitives within it. And then within it, we can say like, when a user does X, then send them an email with this copy.
As an aside, Customer.io is a fantastic platform. I'm super-duper impressed. We went through a search for a tool like it, and we ended up on a lot of sales demos with folks that were like, hey, so yeah, starting point is $25,000 per year. And, you know, we can talk about it, but it's only going to go up from there when we talk about it, just to be clear. And it's a year minimum contract, and you're going to love it. And we're like, you do have impressive platforms, but okay, that's a bunch. And then, we found Customer.io, and it's month-by-month pricing. And it had a surprisingly complete feature set.
So overall, I've been super impressed with Customer.io and everything that they've afforded. But now that I'm seeing it, I kind of want to move everything into that world where like, Customer.io allows non-engineer team members to interact with that event stream and then make things happen. And that's what we're doing all the time. But I'm at that point where I think I see the thing that I want, but I have no idea how to get there. And it might not even be tractable either.
There's the wonderful Turning the Database Inside Out talk, which describes how everything is an event stream. And what if we actually were to structure things that way and do materialized views on top of it? And the actual UI that you're looking at is a materialized view on top of the database, which is a materialized view on top of that event stream.
So I'm mostly in this, like, I want to figure this out. I want to start to unify all this stuff. And analytics pipes to one tool that gets a version of this event stream, and our logs are just another, and our error system is another variation on it. But they're all sort of sampling from that one event stream. But I have no idea how to do that.
And then when you have a database, then you're like, well, that's also just a static representation of a point in time, which is the opposite of an event stream. So what do you do there? So there are folks out there that are doing good thinking on this. So I'm going to keep my ear to the ground and try and see what's everybody thinking on this front? But I can't shake the feeling that there's something here that I'm missing that I want to stitch together.
STEPH: I'm intrigued on how to take this further because everything you're saying resonates in terms of having these event streams that you're working with. But yet, I can't mentally replace that with the existing model that I have in my mind of where there are still certain ideas of records or things that exist in the world.
And I want to encapsulate the data and store that in the database. And maybe I look it up based on when it happens; maybe I don't. Maybe I'm looking it up by something completely different. So yeah, I'm also intrigued by your thoughts, but I'm also not sure where to take it. Who are some of the folks that are doing some of the thinking in this area that you're interested in, or where might you look next?
CHRIS: There's the Kafka world of we have an event log, and then we're processing on top of that, and we're building stream processing engines as the core. They seem to be closest to the Turning the Database Inside Out talk that I was thinking or that I mentioned earlier. There's also the idea of CRDTs, which are Conflict-free Replicated Data Types, which are really interesting. I see them used particularly in real-time application.
So it's this other tool, but they are basically event logs. And then you can communicate them well and have two different people working on something collaboratively. And these event logs then have a natural way to come together and produce a common version of the document on either end. That's at least my loose understanding of it, but it seems like a variation on this theme. So I've been looking at that a little bit.
But again, I can't see how to map that to like, but I know how to make a Rails app with a Postgres database. And I think I'm reasonably capable at it, or at least I've been able to produce things that are useful to humans using it. And so it feels like there is this pretty large gap. Because what makes sense in my head is if you follow this all the way, it fundamentally re-architects everything. And so that's A, scary, and B; I have no idea how to get there, but I am intrigued. Like, I feel like there's something there.
There's also Datomic is the other thing that comes to mind, which is a database engine in the Clojure world that stores the versions of things over time; that idea of the user is active. It’s like, well, yeah, but when were they not? That's an event. That transition is an event that Postgres does not maintain at this point. And so, all I know is that the user is active. Maybe I store a timestamp because I'm thinking proactively about this.
But Datomic is like no, no, fundamentally, as a primitive in this database; that's how we organize and think about stuff. And I know I've talked about Datomic on here before. So I've circled around these ideas before. And I'm pretty sure I'm just going to spend a couple of minutes circling and then stop because I have no idea how to connect the dots. [laughs] But I want to figure this out.
STEPH: I have not worked with Kafka. But one of the main benefits I understand with Kafka is that by storing everything as a stream, you're never going to lose like a message. So if you are sending a message to another system and then that message gets lost in transit, you don't actually know if it got acknowledged or what happened with it, and replaying is really hard. Where do you pick up again?
Whereas using something like Kafka, you know exactly what you sent last, and then you're not going to have that uncertainty as to what messages went through and which ones didn't. And then the ability to replay is so important. I'm curious, as you continue to explore this, do you have a particular pain point in mind that you'd like to see improve? Or is it more just like, this seems like a really cool, novel idea; how can I incorporate more of this into my world?
CHRIS: I think it's the latter. But I think the thing that I keep feeling is we keep going back and re-instrumenting versions of this. We're adding more logging or more analytics events over the wire or other things. But then, as I send these analytics events over the wire, we have Mixpanel downstream as an analytics visualization and workflow tool or Customer.io. At this point, those applications, I think, have a richer understanding of our users than our core Rails app. And something about that feels wrong to me.
We're also streaming everything that goes through segment to S3 because I had a realization of this a while back. I'm like, that event stream is very interesting. I don't want to lose it. I'm going to put it somewhere that I get to keep it. So even if we move off of either Mixpanel or Customer.io or any of those other platforms, we still have our data. That's our data, and we're going to hold on to it.
But interestingly, Customer.io, when it sends a message, will push an event back into segments. So it's like doubly connected to segment, which is managing this sort of event bus of data. And so Mixpanel then gets an even richer set there, and the Rails app is like, I'm cool. I'm still hanging out, and I'm doing stuff; it's fine. But the fact that the Rails app is fundamentally less aware of the things that have happened is really interesting to me. And I am not running into issues with it, but I do feel odd about it.
STEPH: That touched on a theme that is interesting to me, the idea that I hadn't really considered it in those terms. But yeah, our application provides the tool in which people can interact with. But then we outsource the behavior analysis of our users and understanding what that flow is and what they're going through. I hadn't really thought about those concrete terms and where someone else owns the behavior of our users, but yet we own all the interaction points. And then we really need both to then make decisions about features and things that we're building next.
But that also feels like building a whole new product, that behavior analysis portion of it, so it's interesting. My consulting brain is going wild at the moment between like, yeah, it would be great to own that. But then, at the same time, if there's this other service that has already built that product and they're doing it super well, then let's keep letting them manage that portion of our business until we really need to bring it in-house. Because then we need to incorporate it more into our application itself so then we can surface things to the user.
That's the part where then I get really interested, or that's the pain point that I could see is if we wanted more of that behavior analysis, that then we want to surface that in the app, then always having to go to a third-party would start to feel tedious or could feel more brittle.
CHRIS: Yeah, I'm definitely 100% on not rebuilding Mixpanel in our app and being okay with the fact that we're sending that. Again, the thing that I did to make myself feel better about this is stream the data to S3 so that I have a version of it. And if we want to rebuild the data warehouse down the road to build some sort of machine learning data pipeline thing, we've got some raw data to work with. But I'm noticing lots of places where we're transforming a side effect, a behavior that we have in the system to dispatching an event.
And so right now, we have a bunch of stuff that we pipe over to Slack to inform our admin team, hey, this thing happened. You should probably intervene. But I'm looking at that, and we're doing it directly because we can control the message in Slack a little bit better. But I had this thought in the back of my mind; it's like, could we just send that as an event, and then some downstream tool can configure messages and alerts into Slack? Because then the admin team could actually instrument this themselves. And they could be like; we are no longer interested in this event. Users seem fine on that front. But we do care about this new event.
And all we need to do as the engineering team is properly instrument all of that event stream tapping. Every event just needs to get piped over. And then lots of powerful tools downstream from that that can allow different consumers of that data to do things, and broadly, that dispatch events, consume them on the other side, do fun stuff. That's the story. That's the dream. But I don't know; I can't connect all the dots. It's probably going to take me a couple of weeks to connect all these dots, or maybe years, or maybe my entire career, something like that. But, I don't know, I'm going to keep trying.
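To make the shape of that idea a bit more concrete, here is a loose sketch, not Sagewell's actual code: the event names are invented, and the analytics client is assumed to be whatever Segment-style client the app already has configured. The app's only job is to publish the fact that something happened; Slack alerts or emails get configured downstream by whoever cares about that event:

```ruby
class DomainEvents
  # Publish a single event to the log and to the event pipeline.
  def self.publish(name, user_id:, properties: {})
    Rails.logger.info("event=#{name} user_id=#{user_id} properties=#{properties.to_json}")

    # ANALYTICS is assumed to be a preconfigured client (e.g., a Segment client)
    # exposing a track call; swap in whatever the app actually uses.
    ANALYTICS.track(user_id: user_id, event: name, properties: properties)
  end
end

# Instead of posting straight to Slack from a controller or background job:
DomainEvents.publish("admin.intervention_needed",
                     user_id: user.id,
                     properties: { reason: "payment_failed" })
```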
STEPH: This feels like a fun startup narrative, though, where you start by building the thing that people can interact with. As more people start to interact with it, how do we start giving more of our team the ability to then manage the product that then all of these users are interacting with? And then that's the part that you start optimizing for. So there are always different interesting bits when you talk about the different stages of Sagewell, and like, what's the thing you're optimizing for? And I'm sure it's still heavily users. But now there's also this addition of we are also optimizing for our team to now manage the product.
CHRIS: Yes, you're 100%. You're spot on there. We have definitely joked internally about spinning out a small company to build this analytics alerting tool [laughs] but obviously not going to do that because that's a distraction. And it is interesting, like, we want to build for the users the best thing that we can and where the admin team fits within that. To me, they're very much customers of engineering. Our job is to build the thing for the users but also, to be honest, we have to build a thing for the admins to support the users and exactly where that falls.
Like, you and I have talked a handful of times about maybe the admin isn't as polished in design as other things. But it's definitely tested because that's a critical part of how this application works. Maybe not directly for a user but one step removed for a user, so it matters. Absolutely we're writing tests to cover that behavior. And so yeah, those trade-offs are always interesting to me and exploring that space. But 100%: our admin team are core customers of the work that we're doing in engineering. And we try and stay very close and very friendly with them.
STEPH: Yeah, I really appreciate how you're framing that. And I very much agree and believe with you that our admin users are incredibly important.
CHRIS: Well, thank you. Yeah, we're trying over here. But yeah, I think I can wrap up that segment of me rambling about ideas that are half-formed in my mind but hopefully are directionally important. Anyway, what else is up with you?
STEPH: So, not that long ago, I asked you a question around how the heck to manage themes that I have going on. So we've talked about lots of fun productivity things around managing to-dos, and emails, and all that stuff. And my latest one is thinking about, like, I have a theme that I want to focus on, maybe it's this week, maybe it's for a couple of months. And how do I capture that and surface it to myself and see that I'm making progress on that? And I don't have an answer to that.
But I do have a theme that I wanted to share. And the one that I'm currently focused on is building up management skills and team lead skills. That is something that I'm focused on at the moment and partially because I was inspired to read the book Resilient Management written by Lara Hogan. And so I think that is what has really set the idea. But as I picked up the book, I was like, this is a really great book, and I'd really like to share some of this. And then so that grew into like, well, let's just go ahead and make this a theme where I'm learning this, and I'm sharing this with everyone else.
So along that note, I figured I would share that here. So we use Basecamp at thoughtbot. And so, I've been sharing some Basecamp posts around what I'm learning in each chapter. But to bring some of that knowledge here as well, some of the cool stuff that I have learned so far...and this is just still very early on in the book. There are a couple of different topics that Lara covers in the first chapter, and one of them is humans' core needs at work. And then there's also the concept of meeting your team, some really good questions that you can ask during your first one-on-one to get to know the person that then you're going to be managing.
The part that really resonated with me and something that I would like to then coach myself to try is helping the team get to know you. So as a manager, not only are you going out of your way to really get to know that person, but how are you then helping them get to know you as well? Because then that's really going to help set that relationship in regards of they know what kind of things that you're optimizing for. Maybe you're optimizing for a deadline, or for business goals, or maybe it's for transparency, or maybe it would be helpful to communicate to someone that you're managing to say, "Hey, I'm trying some new management techniques. Let me know how this goes." [chuckles]
So there's a healthier relationship of not only are you learning them, but they're also learning you. So some of the questions that Laura includes as examples as something that you can share with your team is what do you optimize for in your role? So is it that you're optimizing for specific financial goals or building up teammates? Or maybe it's collaboration, so you're really looking for opportunities for people to pair together.
What do you want your teammates to lean on you for? I really liked that question. Like, what are some of the areas that bring you joy or something that you feel really skilled in that then you want people to come to you for? Because that's something that before I was a manager...but it's just as you are growing as a developer, that's such a great question of like, what do you want to be known for? What do you want to be that thing that when people think of, they're like, oh, you should go see Chris about this, or you should go see Steph about this?
And two other good questions include what are your work styles and preferences? And what management skills are you currently working on learning or improving? So I really like this concept of how can I share more of myself? And the great thing about this book that I'm learning too is while it is geared towards people that are managers, I think it's so wonderful for people who are non-managers or aspiring managers to read this as well because then it can help you manage whoever's managing you. So then that way, you can have some upward management.
So we had recent conversations around when you are new to a team and getting used to a manager, or maybe if you're a junior, you have to take a lot of self-advocacy into your role to make sure things are going well. And I think this book does a really good job for people that are looking to not only manage others but also manage themselves and manage upward. So that's some of the journeys from the first chapter. I'll keep you posted on the other chapters as I'm learning more. And yeah, if anybody hasn't read this book or if you're interested, I highly recommend it. I'll make sure to include a link in the show notes.
CHRIS: That was just the first chapter?
STEPH: Yeah, that was just the first chapter.
CHRIS: My goodness.
STEPH: And I shortened it drastically. [laughs]
CHRIS: Okay. All right, off to the races. But I think the summary that you gave there, particularly these are true when you're managing folks but also to manage yourself and to manage up, like, this is relevant to everyone in some capacity in some shape or form. And so that feels very true.
STEPH: I will include one more fun aspect from the book, and that's circling back to the humans' core needs at work. And she references Paloma Medina, a coach and trainer who came up with this acronym. The acronym is BICEPS, and it stands for belonging, improvement, choice, equality, predictability, and significance. And then she details how each of those is important to us in our work and how, when one of those feels threatened, that can lead to some problems at work or just even in our personal life.
But the fun example that she gave was not when there's a huge restructuring of the organization and things like that are going on as being the most concerning in terms of how many of these needs are going to be threatened or become vulnerable. But changing where someone sits at work can actually hit all of these, and it can threaten each of these needs. And it made me think, oh, cool, plus-one for being remote because we can sit wherever we want. [laughs]
But that was a really fun example of how someone's needs at work, I mean, just moving their desk, which resonates, too, because I've heard that from other people. Some of the friends that I have that work in more of a People Ops role talk about when they had to shift people around how that caused so much grief. And they were just shocked that it caused so much grief. And this explains why that can be such a big deal. So that was a fun example to read through.
CHRIS: I'm now having flashbacks to times where I was like, oh, I love my spot in the office. I love the people I'm sitting with. And then there was that day, and I had to move. Yeah, no, those were days. This is true.
STEPH: It triggered all the core BICEPS, all the things that you need to work. It threatened all of them. Or it could have improved them; who knows?
CHRIS: There were definitely those as well, yeah. Although I think it's harder to know that it's going to be great on the way in, so it's mostly negative. I think it has that weird bias because you're like, this was a thing, I knew it. I at least understood it. And then you're in a new space, and you're like, I don't know, is this going to be terrible or great? I mean, hopefully, it's only great because you work with great people, and it's a great office. [laughs] But, like, the unknown, you're moving into the unknown, and so I think it has an inherent at least questioning bias to it.
STEPH: Agreed. On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at hosts@bikeshed.fm via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

May 17, 2022 • 46min
338: Meticulously Wrong
Chris switched from Trello over to Linear for product management and talks about prioritizing backlogs.
Steph shares and discusses a tweet from Curtis Einsmann that super resonated with the work she's doing right now: "In software engineering, rabbit holes are inevitable. You will research libraries and not use them. You'll write code just to delete it. This isn't a waste; sometimes, you need to go down a few wrong paths to get to the right one."
This episode is brought to you by BuildPulse. Start your 14-day free trial of BuildPulse today.
Linear
Curtis Einsmann Tweet
Louie Bacaj Tweet
Become a Sponsor of The Bike Shed!
Transcript:
AD: Flaky tests take the joy out of programming. You push up some code, wait for the tests to run, and the build fails because of a test that has nothing to do with your change. So you click rebuild and you wait. Again. And you hope you're lucky enough to get a passing build this time.
Flaky tests slow everyone down, break your flow and make things downright miserable.
In a perfect world, tests would only break if there's a legitimate problem that would impact production. They'd fail immediately and consistently, not intermittently. But the world's not perfect, and flaky tests will happen, and you don't have time to fix them all today. So how do you know where to start?
BuildPulse automatically detects and tracks your team's flaky tests. Better still, it pinpoints the ones that are disrupting your team the most. With this list of top offenders, you'll know exactly where to focus your effort for maximum impact on making your builds more stable. In fact, the team at Codecademy was able to identify their flakiest tests with BuildPulse in just a few days. By focusing on those tests first, they reduced their flaky builds by more than 68% in less than a month!
And you can do the same because BuildPulse integrates with the tools you're already using. It supports all the major CI systems, including CircleCI, GitHub Actions, Jenkins, and others. And it analyzes test results for all popular test frameworks and programming languages, like RSpec, Jest, Go, pytest, PHPUnit, and more.
So stop letting flaky tests slow you down. Start your 14-day free trial of BuildPulse today. To learn more, visit buildpulse.io/bikeshed. That's buildpulse.io/bikeshed.
CHRIS: Good morning, and welcome to Tech Talk with Steph and Chris. Today at the top of the hour, it's tech traffic hits.
STEPH: Ooh, tech traffic. [laughs] I like that statement.
CHRIS: Yeah. The Git lanes are clogged up with...I don't know. I got nothing.
STEPH: [laughs]
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. So, hey, Chris, what's new in your world?
CHRIS: What's new in my world? Actually, I have a specific new thing that I can share, which is, as of the past week, I would say, we switched from Trello over to Linear for product management. It's been great. It was a super straightforward transfer. They actually had an importer. We lost some of the comment threads on the Trello cards. But that was easy enough to live with; each Linear ticket has a link back to Trello. So it's easy enough to keep the continuity.
But yeah, we're in a whole new world, a system actually built for maintaining a product backlog, and, man, it shows. Trello was a bunch of lists and cards and stuff that you could link between, which was cool. But Linear is just much more purpose-built and already very, very nice. And we're very happy with the switch.
STEPH: I feel like you came in real casual with that news, but that is big news, that you did a switch.
[laughter]
CHRIS: How are you going to bury the lead like that? You switched project management...[laughter] I actually didn't think it was...I'm excited about it but low-key excited, which is weird because I do like productivity and task management software. So you would think I would be really excited about this. But I've also tried enough of them historically to know that that's never going to be the thing that actually makes or breaks your team's productivity. It can make things worse, but it can't make you great. That's the thing that I believe. And so it's a wonderful piece of software. I'm very excited about it but --
STEPH: Ooh, I like that. It can make you worse, but it doesn't make you great. That's so true, yeah, where it causes pain. Well, and it does make sense. You've been complaining a bit about the whole login with Trello and how that's been frustrating. But I haven't even heard of Linear. That's just...that's, I mean, you're just doing what you do where you bring that new-new. I haven't heard of Linear before.
CHRIS: I try to live on the cutting edge. Actually, I deeply try to not live on the cutting edge at this point in my life. That early adopter wave, no, no, no, that's not for me anymore. But I've known a few folks who've moved to Linear. And everyone that I've spoken to who has moved to it has been like, "Yeah, it's been great." I've not heard anything negative. And I've heard or experienced negative things about every other product management tool out there. And so, it seemed like an easy thing.
And it was a low-cost enough switch in terms of opportunity costs or the like, it took the effort of someone on our team moving those cards over and setting up the new system and training, but it was relatively straightforward. And yeah, we're super happy with it. And it feels different now. I feel like I can see the work in a different way which is interesting.
I think we had brought in a Chrome extension for Trello. I think it's like Hello Epics or something like that that allows...it abuses the card linking functionality in Trello and repurposes that feature as an epic management thing. But it's quarter-baked is how I would describe it, or it's clearly built on top of existing things that were not intended to be used exactly in that way. So it does a great job. Hello Epics does a great job of trying to make something like parent-child list management stuff happen in Trello. But it's always going to feel like an afterthought, a secondary feature, something that's bolted on.
Whereas in Linear, it's like, no, no, we absolutely have the idea of projects, of course, and you can see burndown charts and things. And the thing that I do want to be careful about is not leaning too much into management. Linear has the idea of cycles or sprints, as many other folks think of them, or iterations or whatever you want to call them. But we've largely not been working in that mode. We've just continued to work through the next up list; that's it. The next up list should be prioritized and well defined at the top and roughly in priority order. So just pick up the next card and work on it. And we just do that every single day.
And now we're in a piece of software that has the idea of cycles, and I'm like, oh, this is vaguely interesting. Do we want to do that? Oh, but if you're going to do that, you probably do some estimation, right? And I was like, oh no, now we're into a place that's...okay, I have feelings. I got to decide how to approach that. And so, I am intrigued. And I wonder if we could just say like ten cards, that's how many come into a cycle, and that's it. And we use the loosest heuristics possible to define how we structure a cycle so that we don't fall into the trap of, oh, what's our roadmap going to look like six months from now? JK, what's anything going to look like six months from now? That's not a knowable fact.
STEPH: I was just thinking where you said that you're moving into sprints or cycles, and then there's that push, well, now you got to estimate. And I just thought, do you? Do you have to estimate? [laughs]
CHRIS: We need a burndown chart through 2024, and it must be meticulously accurate down to the hour.
STEPH: I think meticulously wrong is how that goes. [laughs]
CHRIS: Which is the best kind of wrong. If you're going to be wrong, be meticulous about it.
STEPH: Be thorough about it. [laughs] Yeah, the team that I'm on right now, we have our bi-weekly planning, and we go through the board, and we pull stuff in. But there's never a discussion about estimation. And I hadn't really appreciated that until just now. How we don't think about how long is this going to take? We just talked about, well, what's in-flight? And how much work do people still have going on? And then here's the list of things we can pull in. But there's always a list that you can go back to.
Like, it's very...we pull in the minimum, knowing that if we run out of work, there's another place to go where there's stuff that's organized. And I just love that cadence, that idea of like, let's not try to make guesses about the future; let's just have it lined up and ready for us to go when we're ready to pull it in. Although I know that's also coming from a very developer's perspective, and there are product managers who are trying to communicate as to when features are going to get out into the world. So I get that there's a balance, but I still have strong feelings and hesitations around estimating work.
CHRIS: Well, I feel like there is a balance there. And so many things in history are like, well, this is an overcorrection against that, and that's an overcorrection against this. And the idea that we can estimate our work that far out into the future that's just obviously false to me based on every project I've ever worked on that has tried to do it. And it has always failed without question.
But critically, there is the necessity to sync up work and like, oh, marketing needs to plan the launch of this feature, and it's a critical one. What's it going to look like? When's it going to be ready? You know, we're trying to go for an event. It's not okay for us developers to never estimate anything past the immediate moment; that's not acceptable. We got to find a middle ground here. But where that middle ground is, is interesting. And so, just operating in the sort of we do work as it comes up is the easiest thing because no one's lying about anything at that point.
But sometimes you got to make some guesses and make some estimations. And then it gets into the murky area of I believe with 75% confidence that in three weeks, we will have this feature ready. But to be clear, I said with 75% confidence; that means one-quarter of the time, we will not be there at that date. What does that mean? What does that failure mode look like? Let's talk about that. And can you have honest, open, transparent, useful conversations there? That's the space where it becomes more subtle if you need to do that.
STEPH: You're reminding me of a conversation that I had with someone where they shared with me some very aggressive team goals. And it was a very friendly conversation. And they're like, "How do you feel about aggressive goals?" And I was like, "Well, it depends. How do you feel about aggressive failure?" Because then once I know where you stand there, then we can talk about aggressive goals. Now, if we're being aggressive, but then we fail to achieve that, and it's one of those, okay, we didn't meet the goal that we'd expected, but everything is fine, and it's not a big deal, then I am okay. Sure, let's shoot for the stars.
But if it's one of those, we are communicating these goals to the outside world, and it's going to become incredibly important that we meet these goals, and if we don't, then things are going to go on fire, people are going to be in trouble, and it's just going to be awful, then let's not set aggressive goals. Let's not box ourselves into a space where we are setting ourselves up to fail or feel pain in a meaningful way. I agree that estimations are important, especially in terms of you need to collaborate with other departments, and then also just to provide some sense of where the product is headed and when things may be released.
I think estimations then just become problematic when they do become definite, and they're based on so many unknowns, and then when I don't know is not an answer. So if someone asked, "What's your estimate for this?" And the very honest real answer is I don't know, like, we haven't done this type of work before, or these are all the unknowns, and then someone's like, "Well, let's just put an estimation of like two weeks on it," and they just sort of try to force-fit it into being what they want, then that's where it starts to just feel incredibly problematic.
CHRIS: Yeah, estimation is a very murky area that we could spend entire episodes talking about, and in fact, I think we have a handful of times. So with that, Linear has been great. We're going to see just how much or how little estimation we actually want to do. But it's been a very nice addition to the toolset. And I'll let you know as we deepen our usage of it what the experience is like, but that's the main thing that's new in my world. What's new in your world?
STEPH: Well, before we bounce over to my world, you said something that has intrigued me that has also made me start reflecting on some of the ways that I like to work. And you'd mentioned that you have this prioritized backlog that people are pulling tickets from. And I don't know exactly if there's a planning session or how that looks, but I have recognized that when I am working with a team, and we don't have any planning session, if everybody is just pulling from this backlog, that's being prioritized by someone on the team, that I find that a bit overwhelming.
Because the types of work being done can vary so drastically that I find I'm less able to help my colleagues or my teammates because I don't have the context for what they're working on. It surprises me. I'm like, oh, I didn't even know we're working on that feature, or I don't have the context for what's the problem that we're trying to solve here. And it makes it just a lot harder to review and then have conversations with them. And I get overwhelmed in that environment.
And I've recognized that about myself based on previous projects that were more similar to that versus if I'm on a project where the team does get together every so often, even if it's high level to be like, hey, here's the theme of the tickets that we're working on, or here's just some of the stuff, then I feel much more prepared for the work that is coming in and to be able to context switch and review. And yeah, so I've kind of learned that about myself. I'm curious, are you similar, or how does that work for you?
CHRIS: I'm definitely similar. And I think probably the team is closer to what you're describing. So we do have a planning session every week, just a quick 30-minute scan through the backlog, look at the things that are coming up and also the larger themes. Previously, that was Epics in Trello; now it's projects in Linear. But talking about what are the bigger pieces of work that we're moving on, and then what are the individual tickets associated with that that we'll be expecting to work on in the next week? And just making sure that everyone has broad clarity around what that feature set is.
Also, we're a very small team at this point. Still, we're four people total, but one of the developers is on a break for a couple of weeks this summer. And so there are really only three of us that are driving on the code. And so, with three of us working on the projects, we try very intentionally to have significant overlap between the various...like, we don't want any one person to own any portion of things at this point. And so we're doing a good amount of pairing to cross-pollinate and make sure everyone's...not everyone's aware of everything, but at least one other person is sufficiently aware of everything between the three of us. And I think that's been working well.
I don't think we have any major gaps, save for the way that we're doing our mobile architecture that's largely managed by one of the developers on the team and a contractor that we're working with to help do a lot of the implementation. That's a known; we chose to silo that thing. We've accepted the cost of that for now. And architecturally, the rest of us are aware of it, but we're not like in the Swift code writing anything because I don't know how to write Swift at this point. I'd love to learn it. I hear good things about the language.
So yeah, I think conceptually very similar to what you're describing of still want to have people be able to review. Like, I don't want to put up a PR and people be like, I don't know, that looks like code, I guess. I'm not sure what it does. Like, I want it to be very...I want us all to be roughly on the same page, and thus far, we are.
As the team grows, that will become trickier to maintain because there are just inherently probably more things that are moving, more different feature areas and surface area that we're tackling in any given week, or there are different ways to approach that. I know you've talked about having a limited number of themes for a given cycle, so that's an idea that can pop up. But that's something that we'll figure out as we get further. I think I'm happy with where we're at right now, so yeah, that's the story there.
STEPH: Okay, cool. Yeah, all of that resonates with me, and thinking about it a little more deeply in this moment, I'm realizing I think something you said helped me put this together where when I'm reviewing someone's change, I'm not necessarily just looking to see does your code work? I'm going to trust you that your code works. I may have thoughts about design and other things, but I really want to understand more what's the change to the product that we're making? What's the goal that we're looking to achieve? How are we measuring this?
And so if I don't have that context, that's what contributes to that feeling of like, hard context switching of not just context switching, but now I have to level myself up to then understand the problem that's being solved by this. Versus had I known some of the themes going into that particular cycle or sprint, I would have felt far more prepared for that review session versus having to then dig through all the data and/or tickets or talk to someone.
Well, switching back to what's going on in my world, I have a particular tweet that I want to share, and it's one that Joël Quenneville brought to my attention. And it just resonates so much with all the type of work that I'm doing right now. So I'm going to read the tweet, and then we'll link to it in the show notes as well. But it's from Curtis Einsmann, and Curtis wrote: "In software engineering, rabbit holes are inevitable. You will research libraries and not use them. You'll write code just to delete it. This isn't a waste; sometimes, you need to go down a few wrong paths to get to the right one."
And that describes all the work that I'm doing right now. It's a lot of exploratory, a lot of data-driven work, and finding ways that we can reduce the time that it takes to run RSpec on CI. And it also ties in nicely to one of the things that I think we talked about last week, where we discovered that a number of files have a high runtime variance. And I've really dug into the data there to understand, okay, is it always specific files that have these high runtime variants? Are there any obvious contributions to what's causing this? Are we making real network calls that then could sometimes take a long time to return? And the result is there's nothing obvious.
They're giant files. The number of SQL commands that are being run for each file varies drastically. They're all high, but it's still very different. There's no single fact about these files that has really been like, yes, this is what's causing these files to have such a runtime variance. And so while I've been in the data, I'm documenting it, and I'm making a list and putting it all together in a ticket so at least it's there to look at later. But I'm going to move on. It's one of those I would love to know what's causing this. I would love to address it because it would put us in an ideal state for how we're distributing tests, which would have a significant impact on our runtime.
But it also feels a little bit like chasing my tail because I'm worried, like with some of the other experiments that we've done in the past where we've addressed tentpoles, that as soon as you address the issue for one or two files, then other files start having the same problem. And you're just going to continue to chase and chase, and I don't want to be in that. So upfront, this was one of those; hey, let's look at the data. If there's something obvious, let's address it; if not, move on. So I'm at that point today where I'm wrapping up all of that data, and then I'm going to move on, move on to the next thing.
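(A minimal sketch of the kind of per-file runtime analysis described above, for anyone curious. It assumes each CI run's results were captured with RSpec's JSON formatter, whose per-example entries include file_path and run_time; the rspec_run_*.json file names and the script itself are illustrative, not the client's actual tooling.)

# Aggregate per-file runtimes across several CI runs and surface the spec
# files whose total runtime swings the most between runs.
# Assumes each run was recorded with: rspec --format json --out rspec_run_N.json
require "json"

runs = Dir.glob("rspec_run_*.json").map do |path|
  data = JSON.parse(File.read(path))

  # Sum example runtimes per spec file for this run.
  data.fetch("examples").each_with_object(Hash.new(0.0)) do |example, totals|
    totals[example.fetch("file_path")] += example.fetch("run_time")
  end
end

# Collect each file's total runtime from every run it appeared in.
per_file = Hash.new { |hash, key| hash[key] = [] }
runs.each { |totals| totals.each { |file, seconds| per_file[file] << seconds } }

report = per_file.map do |file, samples|
  mean = samples.sum / samples.size
  variance = samples.sum { |s| (s - mean)**2 } / samples.size
  { file: file, mean: mean, spread: Math.sqrt(variance) }
end

# Files with the widest runtime spread (standard deviation) come first.
report.sort_by { |row| -row[:spread] }.first(10).each do |row|
  puts format("%-60s mean=%6.2fs spread=%6.2fs", row[:file], row[:mean], row[:spread])
end

Sorting by spread rather than by mean is what separates "slow" files from "unpredictable" ones, and the unpredictable ones are what make it hard to distribute tests evenly across CI workers.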
CHRIS: There's deep truth in that tweet that you shared at the start of this segment. The idea like if we knew the work that we had to do at the front, yeah, we would just do that, but often, we don't. And so, being able to not treat it as a failure when something doesn't work out is, I think, so critical. I think to expand on the idea just a tiny bit, the idea of the scientific method, failure is totally an option and is part of science.
I remember watching MythBusters, and Adam Savage is just kind of like, "Failure is always an option," and highlighting that as part of it. Like, it's an outcome. You've learned something. You have a new data point. You can take that and then carry it forward with you. But it's rough in the moment. And so, I do think that this is a worthwhile thing to meditate on. And it's something that I've had to revisit a handful of times in my career of just like, man, I feel like I've just been spinning my tires all week. I'm like, we know what we want to get done, but just each approach I take isn't working for one reason or another.
And then, finally, you get to the end. And then you've got this paragraph-long summary of all the things that didn't work in your PR and one-line change sort of thing. And those are painful, but they're part of the game. Like, that is unavoidable. I have not found a way to just know how to do the work upfront always. I would love that. I would sign up for whatever seminar was selling that. I wouldn't. I would know that that seminar is a lie, actually. But broadly, I'm intrigued by the idea if someone were selling that, I'd be like, well, I mean, pitch me on it. Tell me why I should believe you; I don't, just to be clear. But yeah.
STEPH: This project has really helped me embrace always setting a goal or a question upfront about what I'm wanting to achieve or what I'm looking to answer because a number of times while Joël and I have been spelunking through data...And then so originally, with the saga, we started out with why doesn’t our math match reality? We understand that if these tests are distributed perfectly across the CPUs, then that should cut the runtime in half. But yet, we weren't seeing that even though we had addressed the tentpoles.
So we dug into understanding why. And the answer is because they're not perfectly distributed, and it's because of the runtime variance. And that was a critical moment to look back and say, "Did we achieve the goal?" Yes, we identified the problem. But once you see a problem, it's just so easy to dig in and keep going. It's like, well, now I want to know what's causing these files to have a runtime variance.
But it's one of those we achieved our goal. We acknowledged upfront that we wanted to at least understand why. Let's make a second decision, do we keep going? And I'm at that point where, frankly, I probably dug in a little more than I should because I'm stubborn. But I'm recognizing that now's the time to back away and then go back and move on to the next high-priority item, which, for funsies, I'll share.
The next thing is converting Test::Unit tests over to RSpec because we have, I think, around 25 tests that are written in Test::Unit. And we want to move them over to RSpec because that particular step in the build process takes a good three to four minutes. And part of that is just booting up Rails and then running the tests very fast. And we're underutilizing the machine that's running them because it's only 25 tests, but there are 86 CPUs to run it.
So we'd really like to combine those 25 tests with the rest of the RSpec suite and drop that step. So then that should add minimal time to the overall build but then should take us down at least a couple of minutes. And then also makes it easier to manage and help folks so that way, there's one consistent testing framework that's in use versus having to manage some of these older tests.
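(For a concrete sense of what that conversion involves, here is a hedged, hypothetical example; Invoice, LineItem, and the file paths are invented for illustration and aren't from the client's suite. The first test uses the Rails-flavored Test::Unit/minitest style, and the second is the RSpec equivalent that can run with the rest of the suite.)

# test/models/invoice_total_test.rb: hypothetical Test::Unit-style test
require "test_helper"

class InvoiceTotalTest < ActiveSupport::TestCase
  test "sums line item amounts" do
    invoice = Invoice.new(line_items: [LineItem.new(amount: 5), LineItem.new(amount: 7)])

    assert_equal 12, invoice.total
  end
end

# spec/models/invoice_spec.rb: the same behavior expressed as an RSpec spec,
# so it gets picked up by the single rspec CI step instead of a separate one
require "rails_helper"

RSpec.describe Invoice do
  describe "#total" do
    it "sums line item amounts" do
      invoice = Invoice.new(line_items: [LineItem.new(amount: 5), LineItem.new(amount: 7)])

      expect(invoice.total).to eq(12)
    end
  end
end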
CHRIS: It's funny; I think it was just two episodes back where we talked about why RSpec, and I think both of us were just like, well yeah. But I mean, if there are tests in another framework, like, it's fine, you just leave them, with the exception that if like 2% of our tests are in Test::Unit and everything else is in RSpec, yeah, maybe that conversion effort seems totally worth it.
But again, I think as you're describing that, what I'm hearing is just like the scientific method, just being somewhat structured in the approach to what's the hypothesis? And what's the procedure we're going to use to determine if that hypothesis is true or false? And then what do we do? And then what are the results? And then you just do that on loop, rather than just sort of exploring. Sometimes you have to be in exploratory mode. But I definitely find that that tiny bit of rigor of just like, wait, okay, before I actually do anything, what do I think is going on here? What's my guess?
And then, okay, if that guess were true, what would I be able to observe in the world? Okay, here we go. And just that tiny bit of structure is so...it sometimes feels highly formal to go into that mode and be like, no, no, no, let me take a step back. Let me write down my thoughts. I'm going to have a little checklist and do the thing. But I've never regretted doing it. I will say that. I have deeply regretted not doing it. I feel like I should make a list of things that fit that structure.
I've never regretted committing in Git ever. That's been great. I've always been able to unwind it, but I've never been able to not unwind it or the opposite. I've regretted not committing. I have not regretted committing. I have regretted not writing out my hypothesis or approach. I have not regretted doing it. And so, yeah, this feels like it falls firmly in that category of like, it's worth just a tiny bit of structure. There's a reason it is the scientific method.
STEPH: Yeah, I agree. I've not regretted documenting upfront what it is I look to achieve and how I think I'm going to answer the question. That has been immensely helpful. Because then I also forget, like, two weeks ago, I'll be like, wasn't there a question around why this was happening, and I need to understand? And all of that was so context-heavy that as soon as I'm out of the thick of it, I completely forget it. So if I care about it deeply or if I want to be able to revisit it, then I need to document it for myself.
It's given me a lot of empathy for people who do more scientific research around, oh my gosh, like, you have to document everything you do and then still be able to prove it five years from now or however long. I'm just throwing out numbers. And it needs to be organized enough that someone, if they do question your research that, then you have it there. My research documents would not withstand scrutiny at this point because they are still just more personal notes. But yes, it's given me a lot of empathy and respect for people who do run very serious research, experiments, and trials, and then have to be able to prove it to the world.
Pivoting just a bit, there's a particular topic that resonated with both you and I; in fact, it's a particular tweet. And, Louie, I do apologize if I mispronounce your last name, but Louie Bacaj. And we'll include a link in the show notes to the tweet, but Louie shared, "I managed multiple engineering teams before quitting tech. Now that I quit, I can speak freely. Here are 12 things your manager may not be telling you, but I know for a fact will help you."
So there are a number of interesting discussions and comments that are in this thread. The one thing in particular that really caught my attention is number 10, and it's "Advocate for junior developers." So they said that their friend reminded them that just because you don't have 10-plus years of experience does not mean that they won't be good. Without junior engineers on the team, no one will grow. Help others grow; you'll grow too.
And that's the part that I love so much is that without junior engineers on the team, no one will grow because that was very thought-provoking for me. It's something that I find that I agree with deeply, but I hadn't really considered why I agree with that so much. So I'm excited to dive into that topic with you. And then, as a second topic to go along with that is, can juniors start with a remote team? I think that's one of the other questions when you and I were chatting about this. And I'm intrigued to hear your thoughts.
CHRIS: A bunch of stuff there. Starting with the tweet, I love elements of this. Some of it feels like it's intentionally somewhat provocative. So like, without junior engineers on the team, no one will grow. That feels maybe a little bit past the bar because I think we can technically grow, and we can build things and whatnot. But I think what feels deeply true to me is truly help others grow; you'll grow too. The act of mentoring, of guiding, of training, of helping someone on their journey will inherently help you grow and, I think, change the way that you think about the work.
I think the beginner mind, the earlier in the career folks coming into a codebase, they will see things fundamentally differently in a really useful way. It's possible that along your career, you've just internalized things. You've been like, yeah, no, that was weird. But then I smashed my head against it for a while, and now I understand this thing. And it just makes sense to me. But it's like, that thing actually doesn't make sense. You have warped your mind to match the thing, not, quote, unquote, "come to understand it." This is sounding too judgmental of people who've been in the industry for a while, but I've found this in myself.
Or I can just take for granted things that took a long time to adapt my head to, and if anything, maybe I should have pushed back a little more. And so, I find that junior engineers can be a really fantastic lens for the complexity of a project. Like, the world is truly a complex place, and that's just true. But our job as software engineers is to tame that complexity and manage it. And so, I love the mindset that can come or the conversations that can come out of that.
And it's much like test-driven development is a pressure on the complexity of your code, having junior engineers join the team and needing to explain how all of the different features work, and what the overall architecture is, and the message passing under this and that, it's a really useful conversation to have. And so that "Help others grow; you'll grow too" feels deeply, deeply true to me.
STEPH: Yeah, I couldn't agree more in regards to how juniors really help our team and especially, as you mentioned, with complexity and having those conversations. Some of the other things that have come to mind for me as well around the importance of having junior developers on your team...and maybe it's not specifically they're junior developers but that you just have a variety of experience on your team. It's going to help you lean into a culture of learning because you have people that are at different stages of their career.
And so you want an environment where people can learn together, that they can fail together, and they can be public about it. And having people that are at different stages of their career will lead, well, at least ideally, it'll lead to more pair programming. It's going to lead to more productive code reviews because then people can ask more questions around why did you choose this, or why are you doing that? Versus if everybody is at the same level, then they may just intuitively have reasons that they think someone did something.
But it takes someone that's a bit new to say, "Hey, why did you choose this?" or to bring up some other ideas or ways that they could pursue it. They may bring in new ideas for, like, why has the team always done something this way? Let's think about new ways that we could do this. Or maybe this is really unfriendly, the way that we're doing this, not just for junior people but for people that are new to the team.
And then there's typically less knowledge siloing because then you're going to want to pair the newer folks with the more experienced folks. So that way, you don't have this more senior developer who's then off in a corner working by themselves. And it's going to improve your communication skills. There's just...I realized I'm just rambling because I feel like there are so many benefits that go along with having a variety of people on your team, especially in terms of experience. And that just leads to such a better learning environment and, ultimately, better software and better products.
And yet, I find that so many companies won't embrace people that are newer to software. They always want the senior developers. They want the 10x-er or whatever those are. They want the people that have many, many years of experience. And there's so much value that comes from mentoring the next group of developers. And it's incredibly frustrating to me that one, companies often aren't open to it. But honestly, more than that, as long as you're upfront and honest about like, hey, this is the team that we need right now to build what we're looking to build, I can get past that; I can understand that. But please don't then mislead people and say that you're a junior-friendly team, and then not be prepared.
I feel like some teams will go so far as to say, "Yes, we are junior-friendly," and they may even tweak their interview process to where it is a bit more junior-friendly. But then, by the time that person joins the team, they're really not prepared. They don't have an onboarding plan. They don't have a mentorship plan. And then they fail that person because that person has worked hard to get there. And they've worked hard to bring that person onto the team, but then they don't have a plan from there.
And I've seen it too many times. And it just frustrates me so much because it puts that junior person in such a vulnerable state where they really have to be an incredible self-advocate to then overcome those hurdles from a lack of preparation on that company's part. Okay, I think that's my vent. I'm sure I could vent about this a lot more, but I will cut it off there. That's the heart of it.
CHRIS: I do think, like, with anything else, it's something that we have to be intentional about. And so what you're saying of like, yeah, we're a junior-friendly company, but then you're just hiring folks, trying to find folks that may work at a slightly lower pay grade, and that's what that means to you. So like, no, no, that's not what this is. This needs to be something different. We need to have a structure and an organization that can support folks at different points in their career.
But it's interesting to me to think about the sort of why of it. And the earlier part of this conversation, we talked about some of the benefits that can come organizationally from it, and I do sincerely believe in that. But I also believe that it is fundamentally one of the best ways to find really talented people early on in their career and be in a position to hire them where maybe later on in their career, that just wouldn't happen naturally. And I've seen this play out in a number of organizations.
I went to Northeastern University for my college, and Northeastern is famous for the co-op program. Northeastern sounds really fancy. Now I learned that they have like a 7% acceptance rate for college applications right now, which is wildly low. When I went to Northeastern, it was not so fancy. So just in case anyone's hearing that and they're like, "Oh, Northeastern, wow." I'm not that fancy. [laughs]
But they did have the co-op then, and they still have it now. And the co-op really is a differentiating thing. You do three six-month rotations. And it is this fundamental differentiator in terms of when you're graduating. Particularly, I was in mechanical engineering. I came out, and I actually knew what that meant in the world. And I'd used Outlook, and I knew what a water cooler was and how to talk near one because that's a critical thing to learn in the world. And really transformative experience for me.
But also, a thing that I observed was many of my friends ended up working at companies that they had co-oped for. I'm one of those people. I would say more than 50% of my friends ended up with a position at a company that they had done a co-op rotation with. And it really worked out fantastically. That organization and the individual got to try things out, experience. And then, I ended up staying at that company for a number of years, and it was a wonderful experience. But I don't know that I would have ended up there otherwise. That's not necessarily the way that would have played out.
And similarly like, thoughtbot has the apprenticeship. And I have seen so many wonderful developers start at that very early point in their career. And there was this wonderful structure around them joining the thoughtbot team, intentional, structured, supported. And then those folks went on to be some of the most talented developers that I've ever worked with at a wonderfully talented organization. And so the story of like, you should do this, organizations. This is a thing that you should invest in for yourself, not just for the individual, like, for both. Everybody wins in this case, in my mind.
I will say, though, in terms of transparency, I currently manage a team of three developers. And we hired very intentionally for senior folks this early on in where we're at. And that was an intentional choice because I do believe that if you're going to be hiring more junior developers, that needs to be something that you do very intentionally, that you have a support structure in place, that you're able to invest the time in where they're at and make sure we have sort of...
I think a larger team makes more sense to bring juniors into broadly. That's the thing that I'm saying out loud that I'm like, I should push on that a little bit. Is that true? Do I really believe that? But I think so, my actions obviously point to it. But it is an interesting trade-off space of how do you think about that? My hope is that as we grow as an organization, that we would then very intentionally start hiring folks in a more junior, mid-level to junior and be very intentional about how we support them, bring them into the organization, et cetera. I do believe it is a win-win situation for everyone when done with intention and with focus.
STEPH: That's such an interesting bit that you just said because I very much appreciate when companies recognize do we have the bandwidth to support someone that's more junior? Because at thoughtbot, we go through periods where we don't have our apprenticeship that's open because we recognize we're not in a place that we can support someone. And we don't want to bring someone in unless we can help them be successful. I very much admire that and appreciate that about companies when they can perform that self-assessment.
I am so intrigued. You'd mentioned being a smaller team. So you more intentionally hire senior developers. And I think that also makes sense because then you need to build up who's going to be in that mentorship pool? Because then people could leave, people could take vacations, and so then you need to have that support system in place. But yeah, I don't know what that then perfect balance is. It's like, okay, so then as soon as you have like five people available to mentor or interested in mentorship, it's like, then do you start bringing in the conversation of like, let's bring in someone that we can help build up and help them be successful and join our team? And I don't know what that magical number is.
I do think it's important for teams to reflect to say, "Can we take on someone that's junior?" All the benefits of having someone that's junior. And then just being very honest and then having a plan for once that junior person does arrive. What does their career path look like while they've joined that team, and who's going to be that person that's going to help them level up? So not only make that choice upfront of yes, we are bringing someone on but let's also think about like the first six months of their work here at the company and what that's going to look like.
It feels like an important step that a lot of companies fail to do. And I think that's why there are so many articles that then are like, hey, if you're a junior dev, here's all the things that you should do to be the best junior dev. That's fabulous. And we're constantly shoring up junior devs to be like, hey, here's all the things that you need to be great at. But we don't have as many conversations around; hey, here's all the things that your manager or the rest of your team should be great at to then support you equally as you are also doing your best to meet them. Like, they need to meet you halfway.
And I'm not completely unsympathetic to the plight; I understand. Often, from what I've seen with teams, the more senior developers that have very strong mentorship and communication skills are then also the teammates that get pulled into all the meetings and all the different projects, so then they are less available to be that mentor. And then that's how this often fails. So I don't think anybody is going into this intentionally, but yet, it's what happens when someone is new and joining a team, and it hasn't been determined what the next six months of that person's onboarding and career path look like.
Circling back just a bit, there's the question around, can juniors start with a remote team? I can go first. And I'm going to say unequivocally yes. There's no reason a junior can't start with a remote team. Because all the things that I feel strongly about come down to how is your team going to plan for this person? And how are they going to support this person? And all the benefits that you get from being in an office with a team, I think those do exist.
And frankly, for someone like myself, it can be easier to establish a bond with someone that you get to see each day, get to see in person. You can walk up to their desk and can say, "Hey, I've got a question for you." But I think all those benefits just need to be transferred into a remote-friendly way. So I think it does ratchet up how intentional you have to be with your team and then onboarding a junior developer. But I absolutely think it's doable, and we should do it.
CHRIS: You went with unequivocally yes as your answer. I'm going to go with a qualified maybe as my answer. I want this to be true, and I think it can be true. But I think it takes all the more intentionality than even what we've been describing. To shift the question around a little bit, what does remote work mean? It doesn't just mean we're doing the work, but we're separate. I think remote work inherently is at its best when we also are largely async first. And so that means more structured writing.
The nature of the conversation tends to be more well-formed in each interaction. So it's like I read a big document, and then I pass it over to you. And at your leisure, you respond to it with a bunch of notes, and then it comes back to me. And I think that mode of interaction, while absolutely wonderful and something that I love, I think it fits really well when you're a little bit further on in your career when you understand things a little bit better. And I think the dance of conversation is more useful earlier on and so forth.
And so, for someone who's newer to a team, I think having the ability to ask a quick question over and over is really useful to someone who's early on in their career. And remote, again, I think it's at its best when it's async. And those two are sort of at odds. And so it's that mild tension that gives me pause of like, something that I think that makes remote work great I do think is at least a hurdle that you would have to get over in supporting someone who's a little bit newer. Because I want to be deeply present for someone who's newer to their journey so that they can ask a lot of questions so that I am available to be interrupted regularly.
I loved at thoughtbot sitting next to someone and being their mentor and being like, yeah, anytime you want, just tap on my desk. If I got my headphones on, that doesn't mean I'm ignoring you; it means I just need to make the sounds go away for a minute because that's the only way my brain will work. But feel free to just tap on my desk or whatever and grab my attention for a moment. And I'm available for that. That's an intentional choice. That's breaking up my continuity of the day, but we're choosing that for a reason.
I think that's just a little harder to do in a remote context and all the more so if we're saying, hey, we're going to try this async thing where we write structured documents, and we communicate in these larger, more well-formed communications back and forth. But I do believe it can be done. I think it should be done. I just think it's all the harder for all of those reasons.
STEPH: I agree that definitely makes it harder. But I'm going to push a little bit and say that when you mentioned being deeply present, I think we can be deeply present with someone and be remote. We can reduce the async requirements. So if you are someone that is more senior or more accustomed to the team, you can fall back to more of those async ways to communicate.
But if someone is new, and needs more mentorship, then let's just set up time where we're going to literally hang out for a couple of hours each day or whatever pairing environment works best for them because pairing can also be exhausting. But hey, we're going to have a check-in each day; maybe we close out each day and touchpoint. And feel free to still message me and interrupt me. Like, you're going to just heighten your availability, even though it is remote. And be aware, like, hey, this person could message me at more times, and I'm okay with that. I have opted into this form of communication.
So I think we just take that mindset of, hey, there's this person next to me, and I'm their mentor to like, hey, they're not next to me, but I'm still their mentor, and I'm still here for them. So I agree that it's harder. I think it falls on us and the team and the mentors to change ourselves versus saying to juniors, "Hey, sorry, it's remote. That's not going to work for you." It totally works for them. It's us, the mentors, that need to figure out how to make it work.
I will say being on that mentor side that then not being able to see someone so if they are next to me, I can pick up on body language and facial expressions, and I can tell when somebody's stuck. And I can see that they're frustrated, or I can see that now's a good time for me to just be like, "Hey, how's it going? What are you working on? Or do you need help with something?" And I don't have that insight when I'm away. So there are real challenges like that that I don't know how to address.
I have gone the obnoxious route [laughs] where I just message people, and I'm like, "Hey, how's it going? How's it going? How's it going?" And I try not to do that too much. But I haven't found a better way to manage that other than to constantly check in because I do have less feedback from that person that I'm working with unless they are just incredibly open about sharing when they're stuck. But typically, when you're newer to a team or newer to a career, you're going to be less willing to share when you're stuck.
But yeah, there are some real challenges, but I still think it's something for us to figure out. Because otherwise, if we cut off access for remote teams to junior folks, I mean, that's where we're headed. There are so many companies and jobs that are headed remote that not being junior friendly and being remote in my mind is just not an option. It's something that we need to figure out. And it's hard, but we need to figure it out.
CHRIS: Yeah, 100% on we need to figure that out and that that's on us as the people managing and structuring and bringing folks into teams. I think my stance would be like, let's just be clear that this is hard. It takes effort to make sure that we've provided a structure in which someone newer to a team can be successful. It takes all the more effort to do so in a remote context, I think. And it's that recognition that I think is critical.
Because if we go into this with the wrong mindset, it's like, oh yeah, it's great. We got this new person on the team. And yeah, they should be ready to go in like two weeks, right? It's like, no, no, this is a different thing. We need to be very clear about it. This is going to require that we have someone who is able to work with them and support them in this. And that means that that person's output will likely be a little bit reduced for the period of time that we're talking about. But we're playing a long game here. Let's make sure we're clear on that. This is intentional.
And let's be clear, the world of hiring and software right now it's not like super easy. There aren't way more software developers than there are jobs; at least, that's been my experience. So this is something absolutely worth investing in for just core business reasons and also good for people. So hey, it's a win-win. Let's do it. Let's figure it out. But also, let's be clear that it's going to be a little tricky along the way. So, you know, let's be intentional about that. But yeah, obviously do it, got to do it.
STEPH: Wait, so I feel like we might have circled back to unequivocally yes. [laughs] Have we gotten there, or are you still on the fence?
CHRIS: I was unequivocally yes from the beginning, but I couched it in, but...yeah, I said other things. You're right. I have now come around; let's say to unequivocally yes.
STEPH: [laughs] Cool. I don't want to feel like I'm forcing you to agree with me. [laughs] But I mean, we just so rarely disagree. So we've either got to identify this as something that we disagree on, which would be one of those rare occasions like beer and Pop-Tarts.
CHRIS: A watershed moment. Beer and Pop-Tarts.
STEPH: Yeah, those are the only two so far.
[laughter]
CHRIS: Not together also. I just want to go on record beer and Pop-Tarts; I don't think would be...anyway.
STEPH: Ooh, I don't know. It could work. It could work.
CHRIS: Well, there's another thing we disagree on.
STEPH: I would not turn it down. If I was eating a Pop-Tart, and you're like, "Hey, you want a beer?" I'd be like, "Sure," vice versa. I'm drinking a beer. "Hey, you want a Pop-Tart?" "Totally."
CHRIS: Okay. Well yeah, if I'm making bad decisions, I'm obviously going to chain them together, but that doesn't mean that they're a good decision. It's just a chain of bad decisions.
STEPH: I feel like one true thing I know about you is that when you make a decision, you're going to lean into it. So like, this is why you are all about if you're going to have a Pop-Tart, you're going to have the highest sugary junk content Pop-Tart possible. So it makes sense to me.
CHRIS: It's the Mountain Dew theorem, yeah.
STEPH: I didn't know this had a theorem. The Mountain Dew theorem?
CHRIS: No, that's just my name. Well, yeah, if I'm going to drink soda, I'm going to drink Mountain Dew, the nonsense nuclear option of soda. So yeah, I guess you're describing me, although as you say it back to me, I suddenly feel very, like, oh God, is this who I am as a person? [laughs] And I'm not going to say you're wrong. I'm just going to spend a little while thinking about some stuff.
STEPH: I mean, you embrace it. I think that's lovely. You know what you want. It's like, all right, let's do this. Let's go all in.
CHRIS: Thank you for finding a wonderfully positive way to frame it here at the end. But I think on that note, should we wrap up?
STEPH: Let's wrap up.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at hosts@bikeshed.fm via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Sponsored By: BuildPulse
Support The Bike Shed

May 10, 2022 • 44min
337: Oh, Henry
We've got a tricycle anniversary! 🥳 Will it be ruined by a cockroach?
Steph shares an update regarding some of the progress and discoveries that she's helped make with a client in regards to speeding up CI.
Chris is finally getting a little bit more back into the code at work and finds himself riding another time management struggle bus. P.S.: Who even names these apps?!?!
Children of Time
Maker's Schedule, Manager's Schedule
The Backwards Brain Bicycle - Smarter Every Day 133
Clockwise - Time Management For Teams
One month on Analog
Getting Things Done
Bullet Journal
Become a Sponsor of The Bike Shed!
Transcript:
STEPH: I have officially started recording. You are on the mic, friend.
CHRIS: This is on the mic. Oh goodness.
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, normally, I would say, "What's new in your world?" But this week, this day, in fact, is a very special day. Actually, technically, it's tomorrow. But did you happen to know that you have achieved your tricycle anniversary here on The Bike Shed?
STEPH: No. [laughs]
CHRIS: Three years. Three years ago tomorrow, Episode 196: I Can Be Wrong on the Internet, wonderful title for it, was released. That was the first episode where you were formally a co-host. You'd come on a few times before that, but that was three years ago.
STEPH: That's incredible. Man, you totally got me. [laughs] You were switching it up for our intro, and our intro is very formalized. As you've said before, it's per your contract; that's how we do our intro. [laughs]
CHRIS: Yes.
STEPH: That is incredible. Three years. Wow. You know, I had thought about not the particular anniversary, but I was chatting with the Boost team earlier because I'm always encouraging people like, "Hey, write a blog post. What you just said sounds incredible. That would be wonderful as a blog post." And so I felt the need to convey like, I'm terrible at writing blog posts. I have written a grand total of two. I have a third one that's in draft state and has been that way for a long time, at least a month, I believe.
And so, I am not great about writing and publishing blog posts. But I was like, but I could podcast. And so I looked up, and I was like, I know I've done over 100; I think around 140 episodes. And so I was like, that makes me feel better. Those who can't write podcast. [laughs]
CHRIS: I'm with you on that front; I can just keep editing a blog post for forever. I actually do have some stats that I gathered for this as well. So like you said, you're close to 140 episodes. Let's assume an average of 40 minutes per. That gets us to around 5,000 minutes of audio or, said differently, that's like 87 hours or 3.6 days.
STEPH: Whoa.
CHRIS: I know, right?
STEPH: The hours really hit home for me. [laughs]
CHRIS: Not the days? I like the days one.
STEPH: No. The hours one, I don't know, the hours one resonates with me. That is something. That's very cool.
CHRIS: 87 hours, yeah. Hopefully, in that time, I think we've said some useful things.
STEPH: I was going to say as someone who started out as your co-host, and I was really certain I had nothing to say, I have 87 hours that documents otherwise. [chuckles]
CHRIS: Indeed. And having been on the other side of the mic from you for most, if not the vast majority of those, you have wonderful things to say. It has been such an absolute pleasure getting to share the mic with you and talk about tech, and nonsense, and life, and all of the things. So yeah, thanks for coming on this adventure.
STEPH: Well, that's very sweet. Thank you so much. And I appreciate you. And it has been amazing. It's been so fun to be your co-host. So I'm really glad that you convinced me to come on this adventure with you. Speaking of adventures, I have a very silly one that I'm going to start us out with. I want to tell you about Henry. You don't know Henry, so I'm going to introduce you to who Henry is.
So we have moved into our new place in North Carolina, and I went to our bathroom to brush my teeth and get ready for bed. And when I turned on the water, out popped this giant cockroach that just came scurrying across the sink, and I panicked. And so I went flying out of the bathroom and hopped up on the bed because I'm an adult, and that's what you do when you encounter a cockroach. And I was like, okay, it's just a cockroach, calm down, which is funny. Like, cockroaches and spiders, I can't do. Snakes and mice totally cool with; I can handle them. But cockroaches and spiders are my fear.
So then I was like, well, maybe if I name him or if I named them, this cockroach, then that will help, and I will be less scared of them. So I have named this cockroach Henry. And so now, when I go into the bathroom, I will often tell Henry like, "Hey, I need you to vacate. I'm coming in." And I reached over…it was a day or two later I reached over to get some soap, and then out pops Henry.
Turns out naming them didn't help. I immediately ran out of the bathroom because [laughs] they are just so fast. Something moving that quickly just scares me. So I now have this ongoing battle with Henry. I think Henry is going to win, and I'm going to end up having to use the guest bathroom until Henry decides to vacate the home.
CHRIS: Oh, Henry. Well, Henry wins this game. But I like that you tried, though, giving it a name. I will say I've actually had an experience very similar to this, and it worked in my case. So there's a really fantastic book series that I've read called The Children of Time I believe is the first one. There's the Children of Time and then The Children of Ruin. And then there's a third one coming out. It's by a fantastic author, Adrian Tchaikovsky. The first book, The Children of Time, was recommended to me by another former thoughtboter, Greg Fisher.
It's just such a unique book. It's about spiders; just it's super-duper about spiders. Whenever I tell anyone this, they're like, "I don't like spiders." I'm like, "Trust me; I'm not a spider guy. That's not who I am." And yet reading this book, these spiders they've got personality. It's just fantastic. This author has such a unique voice and really does such a fantastic job of bringing to life a different type of intelligence, a different sort of point of view on the world and the universe, spiders specifically. And I found myself thinking about spiders differently.
In the book, there are a handful of names for spiders. They're not specific characters in the book. It's almost like the names get reused for different representative spiders, which I realize I'm doing a terrible job of describing it. Everyone should read this book. It's utterly fantastic. But I found myself I would see a spider out in the world, and I'd be like, "Oh, that's Fabian. Yeah, no, Fabian is our friend," literally, this happened.
There was a large spider that was living out on our deck, and I was like, oh, no, no, no, we have to protect Fabian. This is Fabian's spot now. Like, he's just chilling. He's killing some bugs that we don't want. Like, Fabian is our friend. And so this totally worked for me with that book. But I had to go to that level; just giving it a name would not have been enough. They needed to have a backstory, and a history, and all of that. But yeah, again, cannot recommend these books enough.
STEPH: I really appreciate that I'm not alone in this approach and that it resonated with you as well. Okay. All right. So next, I will work on a backstory for Henry, and then maybe this will help. I also just need Henry to slow down. Henry is just…he's too fast. And so when he comes charging out of these hidden places, it scares me. [laughs] So I also need Henry to just be a little slower or a lot slower or to just go find another home. That would be great too.
CHRIS: Yeah, I'll be honest; I haven't read a book about sentient cockroaches. I feel like, I don't know, maybe I could come around. But I was surprised by what happened with spiders, so...
STEPH: Just the idea of that book, ugh. Okay, I'm going to move on. I'm going to move on. [laughs]
CHRIS: It's so good. You have to read it. Here's the thing, like that feeling, what if that feeling got to go away and you could think about spiders differently? Because here's the thing, spiders are friends. Spiders are on our team.
STEPH: Spiders I can handle. I'm into the idea of that book, but when you said a book about sentient cockroaches, that one just ugh. [laughs]
CHRIS: Gotcha. Okay. All right. Okay. Sure.
STEPH: So I appreciate you humoring my Henry story. But now, for my own sake, I'm going to move away from the topic of cockroaches. [laughs] That's what people came to listen to, right? So a story about roaches named Henry and spiders named Fabian.
On a more technical note, I have an update that I'm very excited to share in regards to some of the progress and discoveries that Joël and I have made with our current client, where we're looking to speed up CI and are interested in adding some more machines to then also speed up how quickly the tests run.
And along that journey, we have also talked about tentpoles. In terms of how we're splitting up and distributing that work right now, it's based per file. So for a tentpole, if you have a file that takes 10 minutes but all of your other files take 2 minutes to complete, then 10 minutes is the fastest that you're ever going to achieve because you have this one file that's holding everything else up.
So we have manually addressed a lot of those files by splitting them out. So just literally taking like this 10-minute file and splitting it into three or four files, whatever we need to bring it in line with the other files. And that had some really positive results theoretically.
So looking at the math, if we start with our current state, so we have 86 processors; this is on one machine. So on one machine, we have 86 processors with the presence of tentpoles, so we haven't manually split any files. It takes around 14 minutes for all of the RSpec tests to run. So that's around 560 minutes total of work that is then being distributed across these 86 processors.
In theory, if we addressed all of those tentpoles and we could bring all the files down where they only take six and a half minutes to complete so everything is equal in that regard, we should be able to have the RSpec test complete in around eight and a half minutes. So essentially, the math behind that is we have 560 minutes' worth of work. We divide that by 86 CPUs. We added two minutes because there's going to be a little bit of like boot-up time and just maybe some other noise that's there. So then that brings us to around eight and a half minutes. The problem is we haven't seen that. We haven't gotten that low when we actually run the RSpec test suite.
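As a back-of-the-envelope sketch of the math described here (the numbers are the ones quoted above, and the two-minute padding is the rough allowance mentioned, not a measured value):

  total_work_minutes = 560   # sum of all RSpec file runtimes in the suite
  processors         = 86    # CPUs on the single CI machine
  bootup_padding     = 2     # rough allowance for boot time and other noise

  theoretical_floor = (total_work_minutes / processors.to_f) + bootup_padding
  # => roughly 8.5 minutes, assuming the work could be split perfectly evenly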
So we're seeing more like around 11 minutes. So it currently takes around 14 minutes. We've manually split files, and we're seeing more consistently that things are taking 11 to 11 and a half minutes. And so we really paused because our next step is we want to automate how we address these tentpole files where we don't want to have to manually split files. That is just too brittle. It's very easy for someone to add to that file, and then suddenly it spikes back up, and it becomes the file that's holding up everything else.
It's also really hard to split these files. I'm not going to go there, but this is one of my other gripes with shared examples. These files can be difficult to split because you don't exactly know what's getting run, and then trying to chunk them into different files is tedious and not an easy task. So we really don't want people to have to do that either, or to introduce metrics. We don't really want to do that, although we could add something around like, "Hey, you have now introduced some tests to this file. And it now exceeds the threshold that we're looking for in terms of how we want to make sure there aren't any tentpoles in the test suite."
So to avoid a lot of those concerns, ideas, then we automate. So there are some tools that we could use. parallel_tests is the tool that we're using right now, but that's per file. There's another gem that I'd mentioned before called parallel_split_test that we have taken a look at that will then chunk them at smaller levels. I'm still not exactly sure how because even though I declared that I'm looking into that gem, we've gotten distracted with some other work. But there's parallel_split_test. There are also some other approaches. There's RSpec Queue. There's Knapsack Pro that will then split tests more at the individual example level, so then you don't have a bottleneck as to how long the file is. We don't care anymore.
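For context on the "historical data" that comes up next: parallel_tests can record per-file runtimes via an RSpec formatter and then group files by those runtimes instead of by count. A minimal sketch of that kind of setup, assuming the conventional tmp/parallel_runtime_rspec.log path (exact flags and class names can differ between versions):

  # spec/spec_helper.rb
  # Record how long each spec file takes when running under parallel_tests,
  # so later runs can group files by runtime rather than just by file count.
  if ENV["TEST_ENV_NUMBER"] # set by parallel_tests for each worker
    require "parallel_tests/rspec/runtime_logger"

    RSpec.configure do |config|
      config.add_formatter ParallelTests::RSpec::RuntimeLogger,
                           "tmp/parallel_runtime_rspec.log"
    end
  end

  # The suite is then run with something like:
  #   parallel_rspec spec/ -n 86 --group-by runtime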
So we wanted to start looking into how we could automate using those tools. But now that we're not seeing the payoff from when we've done the manual splitting, we've now backed up to say, "Okay, before we invest in this automation, we really want to see this pay off first." So that's where we're at right now. And the mantra that has been in my mind as we're going through this is verify twice, automate once.
And we've realized that we're just not there yet when it comes to automating. So to help us verify, before we go into automation mode, we are tracking the data in terms of how the tests are actually getting distributed across all the CPUs. We can see which tests were run for each CPU, and then we can see, across the CPUs, how much work each process was given.
And we're seeing that some of the processes are given more work, like maybe one process is given like nine minutes of work, but another one's only given like five minutes of work. But then, if we look at the individual tests that are being distributed, there's room. It's like, okay, well, why didn't parallel_tests then distribute some of these groups of tests over to this other CPU that only was given four minutes of work?
And we see there's an opportunity because not all of those tests...it's like, well, maybe it's too big. They couldn't bring it over to the other CPU because then that would have pushed it to 10 minutes or something like that. But that's not the case because some of the test files are short enough where it's maybe like 30 seconds or maybe a minute. So it's like, it seems like a really good candidate that it could have gotten shifted to a CPU that has less work.
The thing that we have discovered is we're analyzing two different things. So we are looking at the expectation. So we are providing to parallel_tests like, hey, in a previous run, this is how you split the test, or here's the runtime for every single file, so use this data to assign work to every CPU. Based on that data, then some of the CPUs weren't given more work because we expected that CPU to take longer to complete. But then, in reality, it took less time. So we're seeing that there is a variation in terms of how long a test file takes to run.
So based on historical data, let's say we have like a controller test, and based on historical data, it took four minutes to run. But then, in actuality, maybe it took suddenly eight minutes to run, or maybe it took like two minutes to run. So we have this level of variation in how long a test actually takes to run that we can't achieve a perfect distribution because it's fluctuating so much.
And that's why we're seeing in these graphs that we're creating around why one CPU has a lot more work than another CPU is because yes, based on historical data, this should have been distributed perfectly, but the actual runtime then of those tests has changed enough. That's why it looks like our distribution algorithm isn't working.
And one of the ways we confirmed this is Joël took the parallel RSpec runtime log that is being generated from running the test suite (So that's the historical data that we're providing that shows how long each test takes to run.) and ran that through the parallel_tests algorithm, and it was perfect. Parallel_tests did a beautiful job of assigning those to all of the CPUs. So we realized it had to be some fluctuation in the actual runtime that's then causing this concern.
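A simplified simulation of the check described here: take the recorded per-file runtimes and greedily assign each file to whichever group currently has the least work. This is not parallel_tests' actual grouping code, just an illustration of why the distribution looks perfect when the historical runtimes are taken at face value (the log format, one path:seconds pair per line, mirrors tmp/parallel_runtime_rspec.log):

  runtimes = File.readlines("tmp/parallel_runtime_rspec.log").map do |line|
    path, _, seconds = line.strip.rpartition(":")
    [path, seconds.to_f]
  end

  # 86 buckets, one per CI process
  groups = Array.new(86) { { files: [], total: 0.0 } }

  # Longest files first, each into whichever bucket currently has the least work
  runtimes.sort_by { |_, seconds| -seconds }.each do |path, seconds|
    lightest = groups.min_by { |group| group[:total] }
    lightest[:files] << path
    lightest[:total] += seconds
  end

  totals = groups.map { |group| group[:total] }
  puts "heaviest bucket: #{totals.max.round(1)}s, lightest: #{totals.min.round(1)}s"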
So I'm going to share some stats; a couple of these are a little scary stats where, for example, we have a test that could run anywhere between 6.4 seconds but has taken as long as 60 seconds to run. So when you have that level of variation in terms of how long the test takes to run, you can't begin to distribute the work perfectly because it's going to differ each time.
And so now we're looking to understand what's causing that variation. Maybe it's real network requests that are going out that are then causing that fluctuation in timing. Maybe it's database contention in terms of some of those files are just creating a lot of records, and then sometimes it runs quickly, and sometimes it goes more slowly if there are a lot of other tests that are also running and hitting the database at that same time. Who knows?
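One way to chase that down before committing to a cause is to keep the runtime logs from several CI runs and rank files by how much their recorded time swings. A rough sketch (the tmp/runtime_logs directory and file naming are made up for illustration):

  runs = Dir.glob("tmp/runtime_logs/run_*.log").map do |log|
    File.readlines(log).to_h do |line|
      path, _, seconds = line.strip.rpartition(":")
      [path, seconds.to_f]
    end
  end

  all_files = runs.flat_map(&:keys).uniq

  report = all_files.filter_map do |path|
    times = runs.filter_map { |run| run[path] }
    next if times.size < 2 || times.min.zero?

    { file: path, min: times.min, max: times.max, spread: (times.max / times.min).round(1) }
  end

  # The 6.4-seconds-to-60-seconds style offenders float to the top
  report.sort_by { |row| -row[:spread] }.first(10).each do |row|
    puts "#{row[:file]}: #{row[:min]}s to #{row[:max]}s (#{row[:spread]}x)"
  end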
We have lots of interesting options to explore, but I know I've shared a lot. Hopefully, that painted a helpful picture as to some of the discoveries that we have made around how we're distributing this work. But yeah, I'm going to pause there.
CHRIS: Oof. Yeah, that's an adventure. Partly, I think you're just making a great sales pitch for Knapsack because my understanding is this is exactly what their point of view and approach is; like, don't try and think your way out of this problem, just try and distribute the work on demand. Just in time distribution sort of thing of like, you got a bunch of workers, and each worker pulls off the queue.
There's probably some intelligence to how to sort the queue like maybe you lead with feature specs or the things that have the highest variance put them earliest in the queue such that at the end, you're getting a very deterministic distribution such that you get nice and even performance across them.
But nonetheless, Knapsack Pro seems like a very good thing. I'm intrigued by it for our usage as well. But I'm super intrigued by something that can vary by 10X; 6 seconds versus 60 seconds is just so interesting. Like, are these feature specs first of all? Because that's the only thing that could make sense. If this is a model spec and this is happening, I'm like, wow, what? A feature spec, maybe. But even then, it's still surprising.
STEPH: So I haven't dug into the specifics of that file. There are some other files that also have some scary stats. So that is next on my list to figure out how that variation is taking place. I don't think it's a feature spec just from glancing at the title of the file. But I'm also intrigued.
CHRIS: Huh. Like, the feature specs is one standout that could make sense to me because, inherently, there is waiting. There are wait characteristics in feature specs. And so they can have you stack up enough of those waits for each of the sort of like find, assert, look for, fill in, et cetera. Each of those can have a couple of seconds attached to it. But other stuff is...even a feature spec; it's surprising to take that long. Anything else is super surprising.
A couple of things stood out to me otherwise on what you said; the verify twice, automate once I love that. The general sort of the follow-up, if nothing else, of did that thing that we thought might make a change actually do it? Is it impacting in the way that we expected? Are our assumptions being validated?
There's a wonderful blog post that I read a while back that just sort of named an idea that has stuck in my head; it was called Calling Your Shot with TDD. I think I'm misrepresenting the title. But it was something in that space where TDD as an approach is a way to be like, I think I have a mental model of how the system works, and I'm now going to write a test that constrains to that behavior, and then we go from there.
And TDD as a practice represents a certain sort of specific example of this. But what you're describing of like, well, we made this change. Did it produce the outcome? Or it's so easy to do this with performance optimizations of, oh yeah, an N+1 query. Let's fix that, and then the page will be better. And it's like, is it actually, though? In some cases, in most cases, fixing an N+1 will improve performance. But will it improve performance in a useful amount? Or is it the limiting factor in terms of the performance fixes?
And so having that habit and that muscle of having the hypothesis, making the change, and then verifying that you get the expected result is so, so critical. And just sort of going through that loop is so important. It's interesting to hear that you're like, we had some ideas. And then we tried some stuff, and we're not super-duper seeing what we expected is...yeah, you're doing the work there if nothing else.
STEPH: Yeah, we have found on this project that it's really important that, as you said, to call your shot in terms of like, what's the math telling us in terms of what we think is achievable versus what are the actual results that we're seeing? And then verifying it. And I feel a lot better because before, it was just like, we don't know why this isn't paying off. We thought that splitting up these files would totally pay off, and we would drop from 14 minutes to suddenly eight and a half minutes, and it was going to be beautiful. And we didn't get there. Why is that?
And so this journey has helped me feel a lot better in regards to we have more of an understanding as to why, and it's because we validated that parallel_tests is doing a great job in terms of distributing the work across each of the processes. But because there's that variance in the test files, the algorithm can't begin to then manage that. I mean, if your data kind of lies to you in terms of what you think is going to run versus what actually may run, then we're always going to have this poor distribution of work.
And so then the next step is, do we actually dig into understanding more of, like, do we want to look at those files specifically and address that concern? Or do we want to go ahead and take that leap of faith over to Knapsack Pro? Because that's what paused us from moving on to something bigger to then automate how we distribute work.
Maybe since we now understand why we didn't see the payoff, we feel more comfortable taking that leap of faith that we will still see the payoff once we hand this off to a different process. And we're actually breaking it up per test or per example versus per test file. We may see less variation. Maybe that's not true. Maybe there will actually be some examples that still have a high variation, but at least it won't be grouped as an entire file, so it may get a little better. There's also the fun idea...I'm going to categorize this under the good idea, bad idea area.
CHRIS: I think I go good idea, terrible idea, just to be clear [laughs] but.
STEPH: I like it. All right. Yeah, good idea, terrible idea. So for this, one of the ideas that came up was, what if we try to finagle some of this just to still see some of the reward? Because there's part of us that we just want to see it pay off. Like, we've put a lot of effort into this. We want to see something get faster.
And it's like, well, if we know there's some variance with these files, what if, for those files, instead of parallel_tests ever thinking that this test could take as few as six seconds, we just say, "Nope, we know this file is always going to take something more in the middle," so as to then improve our distribution. And that way, there's not such a high variance where maybe the historical data will show that it took 10 seconds, but then the next run it takes more like a full minute.
And in this case, we're just like, no, you always take a full minute, or you always take 30 seconds. That might even out some of the distribution. That was one of the fun ideas that came up in terms of how could we help improve the distribution? I don't think we'll actually do that because that seems like unnecessary work and then has to be managed and justified and documented somewhere. And there are concerns that go with that. But it was still fun to brainstorm of, like, we have this thing. How do we want to do a better job of distributing between the actual work, tricking it, and then moving to another service?
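For what it's worth, the "tricking it" version could be as small as rewriting the runtime log so that known high-variance files always report a pessimistic time. The file names and floor values here are invented, and, as discussed above, this is more of a thought experiment than a recommendation:

  # Pin the flakiest files to a pessimistic runtime so the scheduler stops
  # trusting their optimistic historical numbers. (Hypothetical paths and values.)
  PINNED_RUNTIMES = {
    "spec/features/checkout_spec.rb" => 60.0,
    "spec/controllers/admin_spec.rb" => 30.0,
  }.freeze

  adjusted = File.readlines("tmp/parallel_runtime_rspec.log").map do |line|
    path, _, seconds = line.strip.rpartition(":")
    floor = PINNED_RUNTIMES[path]
    "#{path}:#{[seconds.to_f, floor].compact.max}"
  end

  File.write("tmp/parallel_runtime_rspec.log", adjusted.join("\n"))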
CHRIS: I vote I'm team tricking it just to be clear but nah, probably not. Looping back, there is one other thing that you said in there that really stands out to me is there's a moment as you're looking through all this data, like, A, again to highlight the important work of measuring and actually validating that your hypothesis and procedure and everything's doing what you expected.
But then there was that moment of huh, that's weird, which is that is like a quote. That is an idea. That is a feeling; huh, that's weird. And whether or not you pursue it is such an interesting space to be in because a lot of the time, the work that we're doing has little bits of huh, that's weird. And do you pursue it, or do you not? Do you ignore it? And you're like, oh, that bug report is interesting. That could be a gigantic problem in our system, or it could just be a noisy network connection. I don't know.
As I think about the spectrum between folks very early in their career and senior developer as we try and think about what that means, this is one of those spaces that becomes really interesting to me. How much stuff...like, how deep is your understanding of all of the different things that you're working on such that most of the time, you have a mental model, and then the system behaves roughly in accordance with your mental model?
I think very early on in your career; I remember when I was starting out, I was like, I don't know how anything works. This is fun. I type some stuff. It works some of the time, and this is great. And slowly, over time, my knowledge and experience grew such that most of the stuff that I was doing fits within my mental model.
And then every once in a while, there's those huh, that's weird moments, and do you pursue them or do you not? Do you put them on a list to follow up on later? Just, like, that judgment point becomes a really interesting variation of what you know and don't know. And so, just the way you described that was interesting to me because it reminded me of that sort of conceptual space.
STEPH: I like that a lot because, yeah, that's constantly the space that we're in. And one of the things that I find so interesting about these kinds of Rails rescue or working with legacy code, I mean, I'm sure they apply to newer projects too. But there's a goal that you have in mind. And when do you recognize that you need to shift the goal, or you need to stick even more diligently to that goal?
And we're in that space of where we're constantly reassessing; this is our goal to speed up CI. Do we need to break for a week or two to then improve some of these tests concerns that we see? Or is this one of those times where we need to ignore that and acknowledge that it's a thing and be glad that we know about it but then really stick to our goal of speeding up CI and then moving forward with pulling in a service like RSpec Queue or Knapsack? One of those. And I agree; I think that's incredibly interesting.
And I like having those conversations of how do you decide what's the next best goal or a path to pursue? In my case, I often use timeboxing as my way to get around it where it's one of those okay; we have this idea. So like, I would like to timebox a couple of hours to look at those files to see.
Because then I can collect more data to be like, how obvious is this as to why we have this fluctuation and how long it takes this test file to run? Is it because we're making a real network call? Is it that obvious? Or is it something that's a bit more murky, and it's going to take a lot more time for us to then triage?
That is typically the tool that I will use that if I'm still not sure between two decision points, I'm like, okay, well, let me timebox and collect a few more data as if I'm pursuing this direction and then come back in a couple of hours to then reconsider which path I want to go down. I will keep you up to date as to which way we pursue, and yeah, I'll let you know how it goes. But that's most of what's going on in my world. What's new in your world?
CHRIS: What's new in my world? This week I've been getting a little bit more back into the code. We've just got a bunch of work to do. And so I've been trying to move from more of the defining the work thinking about it into actually writing some code, as it were. And it has been harder than I expected, and I've been surprised by it. And specifically, what I mean by that is I have spent a lot of my time over the past handful of weeks more in the conversation, planning, negotiating, management-y type space. So we have a lot of different integrations that we're trying to work on.
So many of the things that I'm doing are having meetings with the external companies and talking through those integrations and what does that look like? And what features do we need? And what's the contract going to look like? Yeah, interesting, fun little things, maybe not my favorite stuff in the world, but that's fine. It needs to happen. It's critical work. But at this point, I'm now ready to move back in and actually start writing some code.
It was odd to me as I was struggling more than I expected to, to get back into the code. And then it kind of clicked back in, and I was like, oh no, wait, the work was too nebulous, and I couldn't find an angle of attack. Where do I start with this big, amorphous...like, build the integration for system X. I was like, I don't know how to do that. And I was kind of like, you know, like a kid rolling around the peas on their plate and not actually eating any of them sort of thing. I was just like, well, or I could...
It was weird. It was almost like in a dream where your legs don't quite work. I'm using a lot of analogies today, but that's fine. You think you can run. You're certain you knew how to run at one point in your life, but your legs just won't work. It was a little bit of that. And then I sort of snapped back in, and I remembered. I was like, oh, just break it apart into little pieces, write a checklist, sort that checklist by the things that make sense to you, start with the first one, and then work through it. And it came back. But there was this very odd moment where I didn't know how to do the work anymore. And it was like, that was scary. Yeah, it was weird.
STEPH: I find it so heartwarming that someone who is as skilled as you are and experienced as you are that you still can have those moments of like, there's this really big task in front of me, and I'm not entirely sure how to do this. And then I love that you fell back to sort of that what's my systems approach and to like, how do I solve big, murky problems? And that is to start with creating a list of some of the things that I know to do to move this forward and then organizing those. I love that story.
CHRIS: It's funny you mentioned timeboxing a minute ago, and I was like, yeah, right. That's another of the tactics that I use. And there's this whole toolset that exists, but it exists largely for me in the implementation side of the work that I do. And the other side, the conversations, and the planning and all of that, they really feel like these very distinct spaces. There's the Maker Versus Manager Schedule article by Paul Graham.
And really, it's interesting to me to experience how I'm moving across what feels like a wider gap. Like, I've bounced from the frontend to the backend a lot, or from I'll do some product management planning sort of stuff, and then I'll get back into the work. And somehow, that has all felt more cohesive and consistent.
And yet with the nature of this work, I'm finding that calling it two different sides as I go back and forth is also probably reductive. And there are probably like nine different ways in which this thing gets sliced up. But as I'm bouncing between the different facets of my work, it's been trickier. And so it was useful to just recognize that and to recognize the fact that I was able to click back in.
There's a really fantastic video on YouTube. It's by...I believe the title of the channel is Smarter Every Day. It's this guy Destin who talks about different ideas and mechanical things and whatnot. But at one point, he built a bike that was backwards. And specifically, the way that it was backwards is your handlebars; when you turn your handlebars to the right, your front wheel turns to the right, when you turn to the left, your wheel goes to the left. So it's a very direct connection.
He made a bike that was reversed such that when he turned the handlebars to the right, the front wheel would turn to the left. And obviously, he could not ride this bike. This is an impossible bike to ride for a person who has learned to ride a bike under the normal circumstances. But he battled, and he fought, and eventually, he tricked his brain into learning how to ride this new bike.
But then he couldn't ride a regular bike. And there's actually a video of it. He describes the experience of trying to ride a regular bike and how it became this other foreign thing. And yet, at one point, his brain just clicked back in, and suddenly, he knew how to ride a bike again. And the people that were watching him thought he was pretending or something; really fantastic video.
And it speaks to this sort of thing. There are modes of thinking and ways that you're operating, and it sort of felt like that. If you watch this video and you go to the end where he's in Amsterdam, and he has to try and ride a regular bike, and it clicks back in, that's what work kind of felt like for me this week. So I'm going to stop stacking analogies on top of each other, but that's sort of where I'm at right now. In a way, it's been fun.
STEPH: Well, to be fair, your analogy of pushing the peas around on the plate, I could just see it. [laughs] I think that was a really good analogy for me that really resonated. I loved that one. I haven't seen that video. That makes a lot of sense to me. I think it does. I don't know. If someone was like, "Can you ride a bike with backwards handlebars?" I probably would have been like, "Sure," and then totally failed.
I can't recall if we've talked about this before, but in everything that you're sharing, it made me think about the context switching in regards to how my schedule has changed where before, I have my one on ones, and then I also have client work. And then I was interstitialing a lot of those one on ones with client work. But now I have three days that are dedicated to client work, and I have one day that is dedicated to the Boost team, and then I have Friday for investment days. And that has been huge for me.
I didn't realize how exhausting it was for me when I was switching context so much because then there's also some prep work and a little after work that goes with each one on one. And then just knowing that I had it and I had to make sure I budgeted time for that each day in addition to the client work. But then once I shifted to like, I have a day to just focus on this particular...like, my brain can click in to like, this is the mode I'm in. I am totally focused on my team and being a good team lead and having one on ones versus I am totally focused on I can just work on code and work on some of those gnarly problems.
That has been a really big shift for me and something that I just can't unsee now to realize how stressful it was before and how I feel like I wasn't doing as good of a job. But now, I feel really good at the end of each day that I was in a particular mode, and I was more productive because I was focused on that mode.
CHRIS: Yeah, absolutely. The phrase "click in" there, I love that as a mental-physical sort of representation of the thing. A friend of ours and former guest on the show, Matt Sumner, has recommended a piece of software to me a few times, which is called Clockwise. And Clockwise does an interesting thing that I feel conflicted about. I've not yet pursued it because I think there's sort of a...well, anyway, the thing that it does is it rearranges schedules to try and push meetings together such that you have larger gaps of heads down deep work focused time. I love that idea. I absolutely love it.
But I think it's really interesting to like; I believe very strongly as a manager in not rescheduling one on ones with my teammates. I want to make sure that that time is protected, that to them, it's very clear that we have this time. This is the space that we've carved out to have these sorts of conversations. I'm more okay with them switching it on me. But I think it's very important for me to not change that out on them or to not reschedule at the last minute.
And so I'm sensitive to just juggling around the meetings. But I love the idea of this thing of let's just try and squash all the meetings together. I'm happy to just have like three hours straight. I'm in that mode. That's the I'm thinking about people and process and all of that kind of stuff. And then I break, I have lunch, and I come back in the afternoon, and the afternoon is entirely clear. And that's heads down working time or vice versa.
I'm actually more of I would like to have my mornings entirely clear, and that's where I do my heads down thinking work, precious brain, all that kind of stuff. And then in the afternoon, I'd prefer all my meetings. I don't necessarily want the world to entirely go around my preferences for meetings, but if it happened, it'd be fine. But Clockwise is a really interesting sort of technical solution to this problem that I've yet to pursue, but I'm intrigued by.
STEPH: I'm intrigued by this too because I did this just today where I was going through, and I was updating my schedule. And I do this on my own. I am my own AI in this case where I'm thinking through, like, okay, I want to stack meetings together so that I don't have these awkward like 15-30 minute breaks, and then I still have more of a big chunk of focus time. And so, I am manually doing that for my schedule. And I would be intrigued to see what software would recommend...they could show me a pattern that I hadn't considered that works better for me than the version that I have.
The flip side is I've also learned to just be really good with, like, I have 10 minutes. Well, let's look at my to-do list, and what can I push along for 10 minutes? That is the other thing that having a tight schedule has helped me get better at is where even if I only have 10 minutes, before, I might have been like, oh, that's not enough time to do anything. Totally a lie. Ten minutes is great. Ten minutes you can totally take a look at something and then make a comment, or read it, or just have a little more context and nudge it along. I love the nudge it along approach versus the I have to sit down and get it done approach.
On this particular theme of context switching and productivity, I have a question for you. I was debating as to whether I was going to share it or not because I feel like it's still hand-wavy enough. I'm not sure I'm going to do a great job asking this question, but I'm going to go for it. I am looking for a way to manage not just the things that I have to do each day but some higher goals. So I really like the idea of themes. So I love when a week has a theme, a month has a theme. These are the things that I'm focused on.
So then, when I do have like these 10-15 minutes or this focus time, I know there's a particular theme that I'm pursuing, maybe it's more technical related, maybe it's more mentorship, or it's something I'm interested in pursuing. But I know it's going to take a couple of iterations to work on. And I haven't found a really good way to capture those themes.
Right now, if I have something like I know I'm meeting with someone on Friday and we have a goal that we're going to collect some examples of this topic, then what I am currently doing for my calendar is setting a daily reminder each morning to be like, hey, just so this is like simmering in the back of your mind, don't forget about it. Collect some examples about this topic. So it's one of those, if I happen to see something, I want to be able to grab it and remind myself, like, hey, you're looking for this.
But it's been okay. I haven't loved it. And so I'm just in that space of where I'm trying to find a way of how do I capture the theme that I'm working on for a week or for a month but still keep that in line with my to-do items and my calendar and still ideally keep it all together? I don't want to have so many disparate places I have to go look to understand all the things that I'm focused on. Do you have any thoughts? Do you have a system for how you manage or even think about things in that space?
CHRIS: Oof. You've opened Pandora's Box here.
STEPH: [laughs]
CHRIS: I have some thoughts. What you asked, I think, is an incredibly deep question or one that there's no singular answer to this sort of thing. And the answer is specific to the person, and it's an evolving thing. Like for me, I have explored this space, personal productivity, and how to think about the bigger goals and all of that. I've explored it a lot. And it's evolved, and each phase of my life has a slightly different answer to how I think about this.
Also, to be clear, sometimes I say stuff, and it sounds like I know what I'm talking about or I've thought about this. And I'm like; I got it; I do not have it. This is a I do not have this one on lock. I'm constantly trying to solve this problem. I think the first thing that I'd go to is Getting Things Done book on personal productivity. It's the most sort of impactful or foundational in my thinking about how to look at this. And in particular, it has some ideas around the different levels at which we think about our work. There's like the day-to-day actions, and there are projects and areas.
And it's a little bit formal, frankly, in my opinion. But it does introduce the idea of the weekly review. And I think that structure that's one of the things that has been true for me throughout all of the different variations of tools, and approaches, and productivity whatnots. But the weekly review being a really useful time to sort of take a step back and think about things at a slightly higher level to make sure that you're staying connected to bigger goals and whatnot.
The other thing that comes to mind as you say this is Dave Rupert, another person who has been a guest on the show, has written a couple of times about his analog productivity system. So there's this...I think Ugmonk is the name of the cards, if I'm remembering correctly. But they're little index cards, basically, and you basically rewrite them every day. And it's this very manual, almost meditative, but very focused practice. It's vaguely similar to bullet journaling, which is another approach.
But each of them have this structured way in which you look at the work that you're doing. And I think there's a good opportunity in both of those systems, either the analog productivity thing that Dave Rupert does or bullet journaling to be like, this is where my goals go, and so these are the goals for the week, and they're written at the top of the page, and then everything else goes underneath that.
But you always have top of mind and very visible the goals that you're going towards. And so it's things like that that have been my answer to this of like, I need to find somewhere within the system that I'm working in to have the overarching goals be present and accounted for. That's tricky. And analog actually seems to be a really great way to do it, just like pen and paper is a great solution.
So even if you're using a system like Todoist, then maybe your daily structure is written on a piece of paper such that at the top of it, you write those things that are true for the week, or they're on your iPhone, desktop, or whatever, that's not a thing. They're in like a widget on your iPhone screen or on your computer desktop or something like that. But you keep them top of mind. You find some way to do that such that you're constantly anchored to the things that you say or the big rocks that you want to fill the sand around.
STEPH: I really like that book, Getting Things Done. I read it a long time ago, so that would be fun to revisit and see if I get any new bits of knowledge that are helpful for me. I like the idea of that more manual task of writing things down. I have found that to be very helpful for me because I am someone that I really like to have as little screen time as possible.
So if I can have my to-do list away from my screen, that's really nice. But then I also just recognize that there's a nicety to having it stored in an app. So then that way, it is shared across devices, and I can see it at any point. And it's stored somewhere, and I don't have to try to reread my messy handwriting. There are benefits.
But I think you highlighted the thing that I'm looking for, which is in Todoist or something similar. Right now, I have these discrete action items, and I would love if at the top...I've done this with Trello boards before where a team is working on a particular experiment or trying something new that for that iteration, we make a ticket and then we will label it something, some bright, pretty color, and then just keep it at the top of to-do and then each day we walk the board. And it's a friendly reminder of, like, hey, this is our theme for this iteration. Here's a friendly reminder.
I would love something that's like that where it's like, hey, this is your theme for this week, or this is your theme for the month; here's a friendly reminder. And I think I'm going to see if there's a way I can do that with Todoist to keep things on the same space even though I don't think it's really built to support something like that. But I'm going to check it out.
And there is a boards feature in Todoist that I haven't leveraged. So maybe if I, instead of doing the ordered list view if I do the boards view, then I could do what I just said with Trello in terms where I have a card that stays there for the week and reminds me of a goal that I'm working on or a theme that I'm working on. Cool. That was helpful, thank you. But yeah, I've been in that space of trying to figure out how to capture goals. So I appreciate you sharing those ideas with me.
CHRIS: Oh, I'm always happy to talk about this. If anything, I've been trying to be somewhat reserved so that every episode of the show isn't me talking about my continuous search for a new productivity tool. I'm still in Things. I'm not super happy about it. I keep looking at TickTick. I want Todoist to work, but it doesn't. OmniFocus calls to me.
I have a note on my phone with the list of features that I want. And I keep telling myself over and over, you're not allowed to write your own software for this. And thus far, I have successfully avoided once again writing my own productivity and list management software. [laughs] But I don't know how long I can hold out.
STEPH: As you list the names for the different apps you're using, like, what was the first one?
CHRIS: Things.
STEPH: Things. Thank you.
CHRIS: OmniFocus, TickTick. What else? There's plenty more that I've looked at, [laughs] Todoist. Yeah.
STEPH: Yeah. As you're listing all of those, that reminds me that I have decided that I think people who name apps and startups are also the same people that name baby items because I've joined a mother's group. So another thoughtboter, Elaina, runs a really wonderful group of where moms get together once a month, and we just chat about all the mama things. And they were helping supply some recommendations. They were like, "Do you know what stuff you need to buy?" And I'm like, "No. Please, please tell me. What am I going to need?"
But they're having this conversation around like, "Oh, you've got to get the Björn bouncer, and the SNOO, and the Cuzzle Wuzzle, and the Bippity Bop, and the like...all these things." I'm like, "I'm going to need y'all to use different terms because I have no idea what you're talking about." [laughs] And that is also, I think...yeah, that also goes with people who are naming things like TickTick and Things, naming those apps.
CHRIS: I'm also going to need you to spell them because many of these are not phonetic or, broadly, the English language isn't phonetic; the BabyBjörn, you know, that sort of thing. To-do but spelled T-E-D-E-A-U-X, I think, is one of them. It's like, come on, what are we doing here? [laughs] So yes, it's complicated out there.
STEPH: All I can think of is anytime someone's like, "Come on," all I can think of is that Peter Griffin clip where he's like, "Come on," and he's trying to get people to agree. I feel like that's some of the...that's my reaction when I read some of these [laughs] or some of those names where it's like, you're just trying to trip me up. But yeah, startups and naming baby items.
CHRIS: That's what this podcast is about.
STEPH: That's what it's all about, and cockroaches and spiders. [laughs] And I'm going to stop myself. On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at hosts@bikeshed.fm via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.

May 3, 2022 • 35min
336: Million Dollar Password
Chris came up with a mnemonic device: Fn-Delete – for when he really wants to delete something and is also thinking about password complexity requirements, which leads to an exciting discussion around security theater.
Steph talks about the upcoming RailsConf and the not-in-person option for virtual attendees. She also gives a shoutout to the Ruby Weekly newsletter for being awesome.
NIST Password Standards
3 ActiveRecord Mistakes That Slow Down Rails Apps: Count, Where and Present
Difference between count, length and size in an association with ActiveRecord
Ruby Weekly
Railsconf 2022
Become a Sponsor of The Bike Shed!
Transcript:
STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. So hey, Chris, happy Friday. You know, each time I do that, I can't resist the urge to say happy Friday, but then I realize people aren't listening on a Friday. So happy day to anyone that's listening. What's new in your world, friend?
CHRIS: I'm going to be honest; you threw me for a loop there. [laughs] I think it was the most recent episode where we talked about my very specific...[laughs] it's a lovely Friday, that's true. There's sun and clouds. Those are true things. But yeah, what's new in my world? [laughs] I can do this. I can focus. I got this.
Actually, I have one thing. So this is going to be, I'm going to say vaguely selfish, but I have this thing that I've been trying to commit into my brain for a long time, and I just can't get it to stick. So today, I came up with like a mnemonic device for it. And I'm going to share it on The Bike Shed because maybe it'll be useful for other people. And then hopefully, in quote, unquote, "teaching it," I will deeply learn it.
So the thing that happens in my world is occasionally, I want to delete a URL from Chrome's autocomplete. To be more specific, because it's easier for people to run away with that idea, it's The Weather Channel. I do not like weather.com. I try to type weather often, and I just want Google to show me the little, very quick pop-up thing there. I don't want any ads. I don't want to deal with that.
But somehow, often, weather.com ends up in my results. I somehow accidentally click on it. It just gets auto-populated, and then that's the first thing that happens whenever I type weather into the Omnibox in Chrome. And I get unhappy, and I deal with it for a while, then eventually I'm like, you know what? I'm deleting it. I'm getting it out of there. And then I try and remember whatever magical key combination it is that allows you to delete an entry from the drop-down list there. And I know it's a weird combination of like, Command-Shift-Alt-Delete, Backspace, something.
And every single time, it's the same. I'm like, I know it's weird, but let me try this one. How about that one? How about that one? I feel like I try every possible combination. It's like when you try and plug in a USB drive, and you're like, well, it's this way. No, it's the other way. Well, there are only two options, and I've already tried two things. How can I not have gotten it yet? But I got it now.
Okay, so on a Mac specifically, the key sequence is Shift-Function-Delete. So the way I'm going to remember this is Function is abbreviated on the keyboard as Fn. So that can be like I'm swearing, like, I'm very angry about this. And then Shift is the way to uppercase something like you're shouting. So I just really need to Fn-Delete this. So that's how I'm going to remember it. Now I've shared it with everyone else, and hopefully, some other folks can get utility out of that. But really, I hope that I remember it now that I've tried to boil it down to a memorable thing.
STEPH: [laughs] It's definitely memorable. I'm now going to remember just that I need to Fn-Delete this. And I'm not going to remember what it all is tied to. [laughs]
CHRIS: That is the power of a mnemonic device. Yeah.
STEPH: Like, I know this is useful in some way, but I can't remember what it is. But yeah, that's wonderful. I love it. That's something that I haven't had to do in a long time, and I hadn't thought about. I need to do that more. Because you're right, especially changing projects or things like that, there are just some URLs that I don't need cached anymore; I don't want auto-completed. So yeah, okay. I just need to Fn-Delete it. I'll remember it. Here we go. I'm speaking this into the universe, so it'll be true.
CHRIS: Just Fn-Delete it.
STEPH: Your bit about the USB and always getting it wrong, you get it 50-50 [laughs] by getting it wrong, resonates so deeply with me and my capability with directions where I am just terrible whether I have to go right or left. My inner compass is going to get it wrong. And I've even tried to trick myself where I'm like, okay, I know I'm always wrong. So what if I do the opposite of what Stephanie would do? And it's still somehow wrong. [laughs]
CHRIS: Somehow, your brain compensates and is like, oh, I know that we're going to do that. So let's...yeah, it's amazing the way these things happen.
STEPH: Yep. I don't understand it. I've tried to trick the software, but I haven't figured out the right way. I should probably just learn and get better at directions. But here we are. Here we are.
CHRIS: You just loosely referred to the software, but I think you're referring to the Steph software when you say that.
STEPH: Yes. Oh yeah, Steph software totally. You got it. [laughs]
CHRIS: Gotcha. Cool. Glad that I checked in on that because that's great. But shifting gears to something a little bit deeper in the technical space, this past week, we've been thinking about passwords within our organization at Sagewell. And we're trying to decide what we want to do. We had an initial card that came through and actually got most of the way to being implemented to dial up our password strictness requirements. And as I saw that come through, I was like, oh, wait, actually, I would love to talk about this.
And so the work that was coming through in the PR that had been opened was a pretty traditional set of, let's introduce some requirements on our passwords for complexity, so let's make it longer. We're going from, I think, six, which was the default that Devise shipped with, and increasing that to, I think it was, eight. And then let's say that it needs a number, and a special character, and an uppercase letter or something like that.
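For reference, the kind of change that was in flight probably looked something like the following sketch; the regexp and model details are illustrative rather than the actual Sagewell code, and this is exactly the style of rule the NIST guidance below pushes back on:

  # config/initializers/devise.rb
  Devise.setup do |config|
    config.password_length = 8..128   # up from Devise's default minimum of 6
  end

  # app/models/user.rb
  class User < ApplicationRecord
    devise :database_authenticatable, :recoverable, :validatable

    validate :password_complexity

    private

    def password_complexity
      return if password.blank?
      return if password.match?(/\A(?=.*\d)(?=.*[A-Z])(?=.*[^A-Za-z0-9])/)

      errors.add(:password, "must include a number, an uppercase letter, and a symbol")
    end
  end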
I've recently read the NIST rules, so the National Institute of Standards and Technology, I think, is what they are. But they're the ones who define a set of rules around this or guidelines. But I think they are...I don't know if they are laws or what at this point. But they tell you, "This is what you should and shouldn't do." And I know that the password complexity stuff is on the don't do that list these days. So I was like, this is interesting, and then I wanted to follow through.
Interestingly, right now, I've got the Trello boards up for The Bike Shed right now. But as a result, I can't look at the linked Trello card that is on the workboards because they're in different accounts. And Trello really has made my life more difficult than I wanted. But I'm going to pull this up elsewhere. So let's see.
So NIST stuff, just to talk through that, we can include a link in the show notes to a nice summary. But what are the NIST password requirements? Eight character minimum, that's great. Change passwords only if there is evidence of a compromise. Screen new passwords against a list of known compromised passwords. That's a really interesting one. Skip password hints, limit the number of failed authentication attempts. These all sound great to me.
The maximum password length should be at least 64 characters, so don't constrain how much someone can put in. If they want to have a very long password, let them go for it. Don't have any sort of required rotation. Allow copy and pasting or functionality that allows for password managers.
And allow the use of all printable ASCII characters as well as all Unicode characters, including emojis. And that one really caught my attention. I was like, that sounds fun. I wish I could look at all the passwords in our database. I obviously can't because they're salted and encrypted, and hashed, and all those sorts of things where I'm like, I wonder if anybody's using emojis. I'm pretty sure we would just support it. But I'm kind of intrigued.
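The "screen new passwords against a list of known compromised passwords" item is the one from that list that usually needs code. One common approach, sketched here rather than taken from the Sagewell app, is the Have I Been Pwned range API, which only ever sees the first five characters of the password's SHA-1 hash:

  require "net/http"
  require "digest"

  # Returns true if the password shows up in known breach data.
  def compromised_password?(password)
    sha1 = Digest::SHA1.hexdigest(password).upcase
    prefix, suffix = sha1[0, 5], sha1[5..]

    response = Net::HTTP.get(URI("https://api.pwnedpasswords.com/range/#{prefix}"))
    response.lines.any? { |line| line.start_with?(suffix) }
  end

  compromised_password?("password123") # => true; it appears in plenty of breaches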
STEPH: You said something in that list that caught my attention, and I just want to see if I heard it correctly. So you said only offer change password if compromised? Does that mean I can't just change my password if I want to?
CHRIS: Sorry. Yeah, I think the phrasing here might be a little bit odd. So it's essentially a different way to phrase this requirement is don't require rotation of passwords every six or whatever months. Forgotten password that's still a reasonable thing to have in your application, probably a necessity in most applications. But don't auto-rotate passwords, so don't say, "Your password has expired after six months."
STEPH: Got it. Okay, cool. That makes sense. Then the emojis, oh no, it's like, I mean, I use a password manager now, thanks to you several years ago when you shamed me into using one. Thank you. That was great. [laughs]
CHRIS: I hope it was friendly shame, but yeah.
STEPH: Yes, it was friendly; kind shame, if that sounds like a weird sentence to say. But yes, it was a very positive change. And I can't go back now that I have a password manager in my life. Because yeah, now I'm thinking like, if I had emojis, I'd be like, oh great, now I have to think about how I was feeling at the time that I introduced a new password. Was I happy? Was I angry? Is it a poop emoji? Is it a unicorn? What is it? [laughs] So that feels complicated and novel.
You also mentioned on that list that going for more complexity in terms of you have to have uppercase; you have to have a particular symbol, things like that are not on the recommended list. And I didn't know that. I'm so accustomed to that being requirements for passwords and the idea of how we create something that is secure and less easy to guess or to essentially hack. So I'm curious about that one if you know any more details about it as to why that's not the standard anymore.
CHRIS: Yeah, I think I have some ideas around it. My understanding is mostly that introducing the password complexity requirements while intended to prevent people from using very common things like names or their user name or things like that, it's like, no, no, no, you can't because we've now constrained the system in that way. It tends in practice to lead to people having a variety of passwords that they forget all the time, and then they're using the forgotten password flow more often.
And it basically, for human and behavior reasons, increases the threat surface area because it means that they're not able to use...say someone has a password scheme in mind where it's like, well, my passwords are, you know, it's this common base, and then some number of things specific to the site. It's like, oh no, no, we require three special characters, so it's like they can't do their thing. And now they have to write it down on a Post-it Note because they're not going to remember it otherwise. Or there are a variety of ways in which those complexity requirements lead to behavior that's actually less useful.
STEPH: Okay, so it's the Post-it Note threat vector that we have to be worried about. [laughs]
CHRIS: Which is a very real threat vector.
STEPH: I believe it. [laughs] Yes, I know people that keep lists of passwords on paper near their desk. [laughs] This is a thing.
CHRIS: Yep, yep, yep. The other thing that's interesting is, as you think about it, password complexity requirements technically reduce the overall combinatoric space that the passwords can exist in. Because imagine that you're a password hacker, and you're like, I have no idea what this password is. All I have is an encrypted hashed salted value, and I'm trying to crack it. And so you know the algorithm, you know how many passes, you know potentially the salt because often that is available. I think it has to be available now that I think about that out loud.
But so you've got all these pieces, and you're like, I don't know, now it's time to guess. So what's a good guess of a password? And so if you know the minimum number of characters is eight and the maximum is 12 because that actually happens on a lot of systems, that's actually not a huge combinatoric space. And then if you say, oh, and it has to have a number, and it has to have an uppercase letter, and it has to have a special character, you're just reducing the number of possible options in that space.
And so, although this is more like a mathematical thing, but in my mind, I'm like, yeah, wait, that actually makes things less secure because now there are fewer passwords to check because they don't meet the complexity requirements. So you don't even have to try them if you're trying to brute-force crack a password.
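To put rough numbers on that intuition, here's a back-of-envelope calculation. Everything in it is illustrative: 95 is the count of printable ASCII characters, and the rule being modeled is a hypothetical "at least one digit, one uppercase letter, and one special character" requirement on exactly eight characters.

# How much does a complexity rule shrink the space of 8-character passwords?
ALL     = 95  # printable ASCII characters
DIGITS  = 10
UPPER   = 26
SPECIAL = 33  # printable ASCII that isn't a letter or a digit

total = ALL**8

# Inclusion-exclusion over the three "must contain one of these" classes
missing_one   = (ALL - DIGITS)**8 + (ALL - UPPER)**8 + (ALL - SPECIAL)**8
missing_two   = (ALL - DIGITS - UPPER)**8 + (ALL - DIGITS - SPECIAL)**8 +
                (ALL - UPPER - SPECIAL)**8
missing_three = (ALL - DIGITS - UPPER - SPECIAL)**8

complying = total - missing_one + missing_two - missing_three

puts "all 8-character passwords:   #{total}"
puts "passwords meeting the rule:  #{complying}"
puts "share removed by the rule:   #{(100.0 * (total - complying) / total).round(1)}%"

Running that shows the rule hands an attacker a meaningfully smaller space to search, exactly as Chris describes, before you even factor in a low maximum length.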
STEPH: Yeah, you make a really good point that I hadn't really thought about because I've definitely seen those sites that, yeah, constrain you in terms of like, has to have a minimum, has to have a maximum, and I hadn't really considered the fact that they are constraining it and then reducing the values that it could be. I am curious, though, because then it doesn't feel right to have no limit in terms of, like, you don't want people then just spamming your sign up and then putting something awful in there that has a ridiculous length. So do you have any thoughts on that and providing some sort of length requirement or length maximum?
CHRIS: Yeah, I think the idea is don't prevent someone who wants to put in a long passphrase, like, let them do that. But there is a cap; the NIST guidelines specifically say the maximum should be at least 64 characters. Devise out of the box is 128, I believe. I don't think we tweaked that, and that's what we're at right now. So you can write an old-style tweet and that can be your password if that's what you want to do. But there is an upper limit to that. So there is a reasonable upper limit, but it should be very permissive to anyone who's like, I want to crank it up.
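For reference, the knob Chris is describing lives in Devise's initializer. A one-line sketch, assuming the 8..128 range from the conversation rather than any exact production value:

# config/initializers/devise.rb
Devise.setup do |config|
  # Floor of 8 characters, and keep a generous ceiling so long passphrases
  # and password-manager-generated passwords aren't constrained.
  config.password_length = 8..128
end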
STEPH: Cool. Cool. Yeah, I just wanted to validate that; yeah, having an upper bound is still important.
CHRIS: Yeah, definitely. Important...it's more for implementation and our database having a reasonable size and those sorts of things. Although at the end of the day, the thing that we save is the encrypted password. So I don't know if bcrypt would run slower on a giant body of text versus a couple of characters; that might be the impact. So it would be speed as opposed to storage space because you always end up with a hash of the same fixed length, as far as I understand it.
But yeah, it's interesting little trade-offs like that where the complexity requirements do a good job of forcing people to not use very obvious things like password. Password does not fit nearly any complexity requirements. But we're going to try and deal with that in a different way. We don't want to try and prevent you from using password by saying you must use an uppercase letter and a special character and things that make real passwords harder as well. But it is an interesting trade-off because, technically, you're making the crackability easier. So it gets into the human and the technical and the interplay between them.
Thinking about it somewhat differently as well, there's all this stuff about you should salt your passwords, then you should hash them. You should run them through a good password hashing algorithm. So we're using bcrypt right now because I believe that's the default that Devise ships with. I've heard good things about Argon2; I think that's the name of the new cool kid on the block in terms of password hashing. That whole world is very interesting to me, but at the end of the day, we can just go with Devise's defaults, and I'll feel pretty good about that and have a reasonable cost factor. Those all seem like smart things.
But then, as we start to think about the complexity requirements and especially as we start to interact with an audience like Sagewell's demographics where we're working with seniors who are perhaps less tech native, less familiar, we want to reduce the complexity there in terms of them thinking of and remembering their passwords. And so, rather than having those complexity requirements, which I think can do a good job but still make stuff harder, and how do you communicate the failure modes, et cetera, et cetera, we're switching it.
And the things that we're introducing are: we have increased the minimum length, so we're up to eight characters now, which is NIST's recommended low end, so it's between 8 and 128 characters. We are capturing anytime an "I forgot my password" reset attempt happens, and the outcome of it. So we're storing those now in the database, and we're showing them to the admins.
So our admin team can see if password reset attempts have happened and if they were successful. That feels like good information to keep around. Technically, we could get it from the logs, but that's deeply hidden away and only really accessible to the developers. So we're now surfacing that information because it feels like a particularly pertinent thing for us.
We've introduced Rack::Attack. So we're throttling those attempts, and if someone tries to just brute force through that credential stuffing, as the terminology goes, we will lock them out so either based on IP address or the account that they're trying to log into. We also have Devise's lockable module enabled. So if someone tries to log in a bunch of times and fails, their account will go into a locked state, and then an admin can unlock it. But it gives us a little more control there. So a bunch of those are already in place.
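A minimal sketch of what that Rack::Attack throttling can look like, assuming Devise's default /users/sign_in path; the limit and period numbers here are placeholders, not Sagewell's real settings:

# config/initializers/rack_attack.rb
class Rack::Attack
  # Throttle sign-in attempts per IP address.
  throttle("logins/ip", limit: 10, period: 60) do |req|
    req.ip if req.post? && req.path == "/users/sign_in"
  end

  # Also throttle per targeted account, to slow credential stuffing that
  # rotates through many IPs against a single email address.
  throttle("logins/email", limit: 10, period: 60) do |req|
    if req.post? && req.path == "/users/sign_in"
      req.params.dig("user", "email").to_s.downcase.presence
    end
  end
end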
The new one, this is the one that I'm most excited about, is we're going to introduce Have I Been Pwned? And so, they have an API. We can hit it. It's a really interesting model as to how do we ask if a password has been compromised without giving them the password? And it turns out there's this fun sort of cryptographic handshake thing that happens. K-anonymity is apparently the mechanism or the underpinning technology or idea.
Anyway, it's super cool; I'm excited to build it. It's going to be fun. But the idea there is rather than saying, "Don't use a password that might not be secure," it's, "Hey, we actually definitively know that your password has been cracked and is available in plaintext on the internet, so we're not going to let you use that one."
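For the curious, the k-anonymity handshake is roughly this: SHA-1 the password, send only the first five hex characters of the digest to the Pwned Passwords range endpoint, and compare the returned suffixes locally. A sketch against the public API, not Sagewell's actual integration:

require "digest"
require "net/http"
require "securerandom"

# Returns how many times a password appears in the Pwned Passwords corpus
# (0 means it wasn't found). Only the first 5 characters of the SHA-1 digest
# ever leave the machine -- that's the k-anonymity part.
def pwned_count(password)
  digest = Digest::SHA1.hexdigest(password).upcase
  prefix, suffix = digest[0, 5], digest[5..]

  body = Net::HTTP.get(URI("https://api.pwnedpasswords.com/range/#{prefix}"))

  body.each_line do |line|
    candidate, count = line.strip.split(":")
    return count.to_i if candidate == suffix
  end

  0
end

pwned_count("password")            # a very large number
pwned_count(SecureRandom.hex(16))  # almost certainly 0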
STEPH: And that's part of the signup flow as to where you would catch that?
CHRIS: So we're going to introduce on both signup and sign-in because a password can be compromised after a user signs up for our system. So we want to have it at any point. Obviously, we do not keep their plaintext password, so we can't do this retroactively. We can only do it at the point in time that they are either signing up or signing in because that's when we do have access to the password. We otherwise throw it away and keep only the hashed value. But we'll probably introduce it at both points.
And the interesting thing is communicating this failure mode is really tricky. Like, "Hey, your password is cracked, not like here, not on our site, no, we're fine. Well, you should probably change your password. So here's what it means, there's actually this database that's called Have I Been Pwned? Don't worry; it's good, though. It's P-W-N-E-D. But that's fine." That's too many words to put on a page. I can't even say it here in a podcast.
And so what we're likely to do initially is instrument it such that our admin team will get a notification and can see that a user's password has been compromised. At that point, we will reach out to them and then, using the magic of human conversation, try and actually communicate that and help them understand the ramifications, what they should do, et cetera. Longer-term, we may find a way to build up an FAQ page that describes it and then say, "Feel free to reach out if you have questions." But we want to start with the higher touch approach, so that's where we're at.
STEPH: I love it. I love that you dove into how to explain this to people as well because I was just thinking, like, this is complicated, and you're going to freak people out in panic. But you want them to take action but not panic. Well, I don't know, maybe they should panic a little bit. [laughs]
CHRIS: They should panic just the right amount.
STEPH: Right.[laughs] So I like the starting with the more manual process of reaching out to people because then you can find out more, like, how did people react to this? What kind of questions did they ask? And then collect that data and then turn that into an FAQ page. Just, well done.
CHRIS: We haven't quite done it yet. But I am very happy with the collection of ideas that we've come to here. We have a security firm that we're working with as well. And so I had my weekly meeting with them, and I was like, "Oh yeah, we also thought about passwords a bunch, and here's what we came up with." And I was very happy that they were like, "Yeah, that sounds like a good set." I was like, "Cool. All right, I feel good." I'm very happy that we're getting to do this.
And there's an interesting sort of interplay between security theater and real security. And security theater, just to explain the phrase if anyone's unfamiliar with it, is things that look like security, so, you know, big green lock up in the top-left corner of the URL bar. That actually doesn't mean anything historically or now. But it really looks like it's very secure, right? Or password complexity requirements make you think, oh, this must be a very secure site. But for reasons, that actually doesn't necessarily prove that at all.
And so we tried to find the balance of what are the things that obviously demonstrate our considerations around security to the user? At the end of the day, what are the things that actually will help protect our users? That's what I really care about. But occasionally, you got to play the security theater game. Every other financial institution on the internet kind of looks and feels a certain way in how they deal with passwords.
And so will a user look at our seemingly laxer requirements or laxer approach to passwords and judge us for that and consider us less secure despite the fact that behind the scenes look at all the fun stuff we're doing for you? But it's an interesting question and interesting trade-off that we're going to have to spend time with. We may end up with the complexity requirements despite the fact that I would really rather we didn't. But it may be the sort of thing that there is not a good way to communicate the thought and decision-making process that led us to where we're at and the other things that we're doing.
And so we're like, fine, we just got to put them in and try and do a great job and make that as usable of an experience as possible because usability is, I think, one of the things that suffers there. You didn't do one of the things on the list, or like, it's green for each of the ones that you did, but it's red for the one that you didn't. And your password and your password confirmation don't match, and you can't paste...it's very easy to make this wildly complex for users.
STEPH: Security theater is a phrase that I don't think I've used, but the way you're describing it, I really like. And I have a solution for you: underneath the password where you have "We don't partake in security theater, and we don't have all the other fancy requirements that you may have seen floating around the internet and here's why," and then just drop a link to the episode. And, you know, people can come here and listen. It'll totally be great. It won't annoy anyone at all. [laughs]
CHRIS: And it'll start, and they'll hear me yelling about Fn-Delete and that weather.com URL.
[laughter]
STEPH: Okay, maybe fast forward then to the part about --
CHRIS: Drop them to the timestamp. That makes sense. Yep. Yep.
STEPH: Mm-hmm. Mm-hmm. [laughs]
CHRIS: I like it. I think that's what we should do, yeah. Most features on the app should have a link to a Bike Shed episode. That feels true.
STEPH: Excellent Easter egg. I'm into it. But yeah, I like all the thoughtfulness that y'all have put into this because I haven't had to think about passwords in this level of detail. And then also, yeah, switching over to when things start to change and start to move away, you're right; there's still that we need to help people then become comfortable with this new way and let them know that this is just as secure if not more secure. But then there's already been that standard that has been set for your expectations, and then how do you help people along that path? So yeah, seems like y'all have a lot of really great thoughtfulness going into it.
CHRIS: Well, thank you. Yeah, it's frankly been a lot of fun. I really like thinking in this space. It's a fun sort of almost hobby that happens to align very well with my profession sort of thing. Actually, oh, I have one other idea that we're not going to do, but this is something that I've had in the back of my mind for a long time.
So when we use bcrypt or Devise uses bcrypt under the hood, one of the things that it configures is the cost factor, which I believe is just the number of times that the password plus the salts and whatnot is run through the bcrypt algorithm. The idea there is you want it to be computationally difficult, and so by doing it multiple times, you increase that difficulty.
But what I'd love is, instead of thinking of it in terms of an arbitrary cost factor, which I think is 12, like, I don't know what 12 means, I want to know, in terms of dollars and cents, how much it would cost to crack a password. Because, in theory, you can distribute this across any number of EC2 instances that you spin up. Cracking a password is a very map-reducible type of problem.
So let's assume that you can infinitely scale up compute on-demand; how much would it cost in dollars to break this password? And I feel like there's an answer. Like, I want that number to be like a million dollars. But as EC2 costs go down over time, I want to hold that line. I want to be like, a million dollars is the line that we want to have. And so, as EC2 prices go down, we need to increase our bcrypt cost factor over time to adjust for that and maintain the million dollar per password cracking sort of high bar. That's the dream.
Swapping out the cost factor is actually really difficult. I've looked into it, and you have to like double encrypt and do weird stuff. So for a bunch of reasons, I haven't done this, but I just like that idea. Let's pin this to $1 value. And then, from there, decisions naturally flow out of it. But it's so much more of a real thing. A million dollars, I know what that means; 12, I don't know what 12 means.
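Even without swapping the cost factor in production, you can get at the dollars-and-cents intuition with a quick benchmark. Everything here is a made-up illustration: the compute price, the guess count, and the cost factor are knobs to play with, not real measurements.

require "bcrypt"
require "benchmark"

COST_FACTOR        = 12        # the bcrypt work factor being reasoned about
DOLLARS_PER_CPU_HR = 0.05      # hypothetical cloud price per vCPU-hour
GUESSES            = 95.0**8   # e.g. every 8-character printable-ASCII password

# Time a single bcrypt hash at this cost factor on this machine.
seconds_per_hash = Benchmark.realtime do
  BCrypt::Password.create("correct horse battery staple", cost: COST_FACTOR)
end

hashes_per_cpu_hour = 3600 / seconds_per_hash
cpu_hours           = GUESSES / hashes_per_cpu_hour

puts "~$#{(cpu_hours * DOLLARS_PER_CPU_HR).round} to exhaust the space at cost #{COST_FACTOR}"

Hold the dollar figure constant and, as compute gets cheaper, the same arithmetic tells you how much the cost factor needs to rise.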
STEPH: A million-dollar password, I like it. I feel like --
CHRIS: We named the episode.
STEPH: I was going to say that's a perfect title, A Million-Dollar Password. [laughs]
CHRIS: A Million-Dollar Password. But with that wonderful episode naming cap there, I think I'm done rambling about passwords. What's up in your world, Steph?
STEPH: One of the things that I've been chatting with folks lately is RailsConf is coming up; it's May 17 through the 19th. And it's been sort of like that casual conversation of like, "Hey, are you going? Are you going? Who's going? It's going to be great." And as people have asked like, "Are you going?" And I'm always like, "No, I'm not going." But then I popped on to the RailsConf website today because I was just curious. I wanted to see the schedule and the talks that are being given.
And I keep forgetting that there's the in-person version, but there's also the home edition. And I was like, oh, I could go, I could do this. [laughs] And I just forget that that is something that is just more common now for conferences where you can attend them virtually, and that is just really neat. So I started looking a little more closely at the talks. And I'm really excited because we have a number of thoughtboters that are giving a talk at RailsConf this year.
So there's a talk being given by Fernando Perales that's called Open the Gate a Little: Strategies to Protect and Share Data. There's also a talk being given by Joël Quenneville: Your Test Suite is Making Too Many Database Calls. I'm very excited; just that one is near and dear to my heart, given the current client experiences that I'm having. And then there's another one from someone who just joined thoughtbot, Christopher "Aji" Slater, Your TDD Treasure Map.
So we'll be sure to include a link to those for anyone that's curious. But it's a stellar lineup. I mean, I'm always impressed with RailsConf talks. But this one, in particular, has me very excited. Do you have any plans for RailsConf? Do you typically wait for them to come out later and then watch them, or what's your MO?
CHRIS: Historically, I've tended to watch the conference recordings after the fact. I went one year. I actually met Christopher "Aji" Slater at that very RailsConf that I went to, and I believe Joël Quenneville was speaking at that one. So lots of everything old is new again. But yeah, I think I'll probably catch it after the fact in this case.
I'd love to go back in person at some point because I really do like the in-person thing. I'm thrilled that there is the remote option as well. But for me personally, the hallway track and hanging out and meeting folks is a very exciting part. So that's probably the mode that I would go with in the future. But I think, for now, I'm probably just going to watch some talks as they come out.
STEPH: Yeah, that's typically what I've done in the past, too, is I kind of wait for things to come out, and then I go through and make a list of the ones that I want to watch, and then, you know, I can make popcorn at home. It's delightful. I can just get cozy and have an evening of RailsConf talks. That's what normal people do on Friday nights, right? That's totally normal. [laughs]
CHRIS: I mean, yeah, maybe not the popcorn part.
STEPH: No popcorn?
CHRIS: But not that I'm opposed to popcorn, just --
STEPH: Brussels sprouts? What do you need? [laughs]
CHRIS: Yeah, Brussels sprouts, that's what it is. Just sitting there eating handfuls of Brussels sprouts watching Ruby conference talks.
STEPH: [laughs]
CHRIS: I do love Brussels sprouts, just to throw it out there. I don't want it to be out in the ether that I don't like them. I got an air fryer, and so I can air fry Brussels sprouts. And they're delicious. I mean, I like them regardless. But that is a really fantastic way to cook them at home. So I'm a big fan.
STEPH: All right, I'm moving you into the category of fancy friends, fancy friends with an air fryer.
CHRIS: I wasn't already in your category of fancy friends?
STEPH: [laughs] I didn't think you'd take it that way. I'm sorry to break it to you.
[laughter]
CHRIS: I'm actually a little hurt that I'm now in the category of fancy friends. It makes a lot of sense that I wasn't there before. So I'll just deal with...yeah, it's fine. I'm fine.
STEPH: It's a weird rubric that I'm running over here. Pivoting away quickly, so I don't have to explain the categorization for fancy friends, I saw something in the Ruby Weekly Newsletter that had just come out. And it's one of those that I see surface every so often, and I feel like it's a nice reminder because I know it's something that even I tend to forget. And so I thought it'd be fun just to resurface it here. And then, we can also provide a link to the wonderful blog post that's written by Benito Serna.
And it's the difference between count, length, and size on an association with ActiveRecord. So for folks that would love a refresher, so count, that's a method that's always going to perform a SQL count query. So even if the collection has already been loaded, then calling count is always going to execute a database query. So this is the one that's just like, watch out, avoid it. You're always going to hit your database when you use this one.
And then next is length. And so, length loads the whole collection into memory and then returns that length to the number of items in that collection. If the collection has been loaded, then it's not going to issue a database call. And then it's just still going to use...it's going to delegate to that Ruby length method and let you know how many records are in that collection. So that one is a little bit better because then that way, if it's already loaded, at least you're not going to have a database call.
And then next is the size method, which is the one that's more highly recommended that you use because this one has a nice safety net built in: first, it's going to check whether we need to perform a database call, whether the records have been loaded or not. So if the collection has not been loaded, so we haven't executed a database query and stored the result, then size is going to perform a database query. Specifically, it's using that SQL count under the hood. And if the collection has been loaded, then a database call is not issued, and it's going to use the Ruby length method to then return the number of records.
So it just helps you prevent unnecessary database calls. And it's the reason that that one is recommended over using count, which is going to always issue a call. And then also to avoid length where you can because it's going to load the whole collection into memory, and we want to avoid that. So it was a nice refresher. I'll be sure to include a link in the show notes.
But yeah, I find that I myself often forget about the difference in count and size. And so if I'm just in the console and I just want to know something, that I still reach for count. It is still a default for mine. But then, if I'm writing production code, then I will be more considered as to which one I'm using.
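A quick console-style illustration of the difference Steph just walked through; the model and the active column are hypothetical:

users = User.where(active: true)  # relation built; no query has run yet

users.count   # SELECT COUNT(*) ... -- always hits the database
users.size    # records aren't loaded yet, so this also issues a COUNT query

users.length  # loads every matching row into memory, then counts them in Ruby

users.count   # issues yet another COUNT query, even though the rows are now loaded
users.size    # the collection is loaded, so this is just Ruby's length -- no query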
CHRIS: I feel like this is one of those that I've struggled to lock into my head, but as you're describing it right now, I think I've got, again, another mnemonic device that we can lock on to. So I know that SQL uses the keyword count, so count that's SQL definitely. Length I know that because I use that on other stuff. And so it's size that is different and therefore special. That all seems good. Cool, locking that in my brain along with Fn-Delete. I have two things that are now firmly locked in.
So you were just mentioning being in the console and working with this. And one of the things that I've noticed a lot with folks that are newer to ActiveRecord and the idea of relations and the fact that they're lazy, is that that concept is very hard to grasp when working in a console because at the console, they don't seem lazy.
The minute you type out user.where some clause, and the minute you type that and hit enter in the console, Ruby is going to do its normal thing, which is like, okay, cool, I want to...I forget what it is that IRB or any of the REPLs are going to do, but it's either inspect or to_s or something like that. But it's looking for a representation that it can display in the console. And ActiveRecord relations will typically say like, "Oh, cool, you need the records now because you want to show it like an array because that's what inspect is doing under the hood."
So at the console, it looks like ActiveRecord is eager and will evaluate the query the minute you type it, but that's not true. And this is a critical thing that if you can think about it in that way and the fact that ActiveRecord relations are lazy and then take advantage of it, you can chain queries, you can build them up, you can break that apart. You can compose them together. There's really magical stuff that falls out of that.
But it's interesting because it's sort of like a Heisenberg situation, where the minute you go to look at it in the REPL, it's like, oh, it is not lazy; it is eager. It evaluates it the minute I type the query. But that's not true; that's actually the REPL tricking you. I will often just throw a semicolon at the end of it because I'm like, I don't want to see all that noise. Just give me the relation. I want the relation, not the results of executing that query. So if you tack a semicolon at the end of the line, that tells Ruby not to print the thing, and then you're good to go from there.
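In a console session, the behavior Chris is describing looks something like this (User.where and the columns are stand-ins):

# In a rails console:

User.where(admin: true)
# IRB calls #inspect on the relation, which loads the records, so a SELECT
# runs immediately and the relation *looks* eager.

relation = User.where(admin: true);
# The trailing semicolon suppresses IRB's printing, so nothing loads yet.

relation = relation.where("created_at > ?", 1.week.ago);
# Still lazy; we're just composing the query.

relation.to_a
# Only now does the SELECT actually run, with both conditions combined.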
STEPH: That's a great pro-tip. Yeah, I've forgotten about the semicolon. And I haven't been using that in my workflow as much. So I'm so glad you mentioned that. Yeah, I'm sure that's part of the thing that's added to my confusion around this, too, or something that has just taken me a while to lock it in as to which approach I want to use for when I'm querying data or for when I need to get a particular count, or length, or size. And by using all three, I'm just confusing myself more. So I should really just stick to using size.
There's also a fabulous article by Nate Berkopec that's titled Three ActiveRecord Mistakes That Slow Down Rails Apps. And he does a fabulous job of also talking about the differences of when to use size and then some of the benefits of when you might use count. The short version is that you can use count if you truly don't care about using any of those records. Like, you're not going to do anything with them. You don't need to load them, like; you truly just want to get a count. Then sure, because then you're issuing a database query, but then you're not going to then, in a view, very soon issue another database query to collect those records again. So he has some really great examples, and I'll be sure to include a link to his article as well.
Speaking of Ruby tidbits and kind of how this particular article about count, length, and size came across my view earlier today, Ruby Weekly is a wonderful newsletter. And I feel like I don't know if I've given them a shout-out. They do a wonderful job. So if you haven't yet checked out Ruby Weekly, I highly recommend it.
There are just always really great, interesting articles either about stuff that's a little bit more like cutting edge or things that are being released with newer versions, or they might be just really helpful tips around something that someone learned, like the difference between count, length, and size, and I really enjoy it. So I'll also be sure to include a link in the show notes for anyone that wants to check that out.
They also do something that I really appreciate where when you go to their website, you have the option to subscribe, but I am terrible about subscribing to stuff. So you can still click and check out the latest issue, which I really appreciate because then, that way, I don't feel obligated to subscribe, but I can still see the content.
CHRIS: Oh yeah. Ruby Weekly is fantastic. In fact, I think Peter Cooper is the person behind it, or Cooperpress as the company goes. And there is a whole slew of newsletters that they produce. So there's JavaScript Weekly, there's Ruby Weekly, there's Node Weekly, Golang Weekly, React Status, Postgres Weekly. There's a whole bunch of them. They're all equally fantastic, the same level of curation and intentional content and all those wonderful things. So I'm a big fan. I'm subscribed to a handful of them.
And just because I can't go an episode without mentioning inbox zero, if you are the sort of person that likes to defend the pristine nature of your email inbox, I highly recommend Feedbin and their ability to set up a special email address that you can use to then turn it into an RSS feed because that's magical. Actually, these ones might already have an RSS feed under the hood. But yeah, RSS is still alive. It's still out there. I love it. It's great. And that ends my thoughts on that matter.
STEPH: I have what I feel is a developer confession. I don't think I really appreciate RSS feeds. I know they're out there in the ether, and people love them. And I just have no emotion, no opinion attached to them. So one day, I think I need to enjoy the enrichment that is RSS feeds, or maybe I'll hate it. Who knows? I'm reserving judgment. Either way, I don't think I will. [laughs] But I don't want to box future Stephanie in.
CHRIS: Gotta maintain that freedom.
STEPH: On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at hosts@bikeshed.fm via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeee!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.