

The One Where We Geek Out on AI with Jennifer Moore of InfluxData
About our guest:
Jennifer Moore (she/her) is a staff software engineer at InfluxData with extensive experience in software development, DevOps, and testing.
Find our guest on:
Find us on:
- All of our social channels are on bento.me/geekingout
- Adriana’s X (Twitter)
- Adriana’s Mastodon
- Adriana’s LinkedIn
- Adriana’s Instagram
- Adriana’s Bluesky
Show Links:
- TI BASIC
- Quality Assurance (QA)
- GW BASIC
- Argo
- Firestore
- Database schema
- OpenTelemetry
- Bull Queue NodeJS
- OTel Collector
- Datadog agent
- Large Language Model (LLM)
- Artificial Intelligence (AI)
- N+1 query
- SkyNet
- Self-Driving Planes
- Self-Driving Cars
- Government legislation around AI
- Fediverse
- Letterbook Social
- Mastodon
- Ruby
- C-sharp
Transcript:
ADRIANA: Hey, y'all. Welcome to Geeking Out, the podcast about all geeky aspects of software delivery, DevOps, Observability, Reliability, and everything in between. I'm your host, Adriana Villela, coming to you from Toronto, Canada.
And geeking out with me today is Jennifer Moore. Welcome, Jennifer.
JENNIFER: Yeah, hi, thank you for having me.
ADRIANA: Yeah, super excited to have you on. So where are you calling from today?
JENNIFER: I am in the Dallas, Texas area.
ADRIANA: Oh, awesome. I think you're our first person from the South.
JENNIFER: Yeah, there are programmers outside of San Francisco.
ADRIANA: Nice. So to get warmed up, I want to get started with some lightning round questions that I like to ask all of my guests. So it's about six questions. It'll be fast and painless.
Okay, first question, are you a lefty or a righty?
JENNIFER: I am right-handed.
ADRIANA: All right. iPhone or Android?
JENNIFER: Android.
ADRIANA: Mac, Linux, or Windows?
JENNIFER: I guess, Windows? All of the above?
ADRIANA: All right, I think you're our first Windows person. Cool. Favorite programming language?
JENNIFER: C#.
ADRIANA: Cool. Dev or Ops?
JENNIFER: Was that Dev or Ops? Is DevOps the right answer? Because that's what I'm going to choose.
ADRIANA: Yeah, no wrong answers. So DevOps works. Yeah. When I asked this one to Hazel Weakly, she said, "Yes," so that counts too. Love it. Final question is, do you prefer to consume content through video or text?
JENNIFER: Oh, text usually.
ADRIANA: All right, cool. Well, that was it. Short and painless. All right, so let's get into the meaty bits. And I always like to ask my guests how they got started in tech. So what was your foray into tech?
JENNIFER: So tech is always what I liked as a child. I liked computers and I took some programming classes at a summer camp school thing. That was a weird dynamic, but I did that a little bit, and then my high school offered some programming classes. I took those, and then I went to university for software engineering. And after I dropped out of university, I got a job in QA and moved very promptly into QA automation.
ADRIANA: Oh, that's awesome. So out of curiosity, what was your first programming language?
JENNIFER: My very first programming language...I don't know...If we're going back as far as I can, then like, TI Basic.
ADRIANA: Oh, nice.
JENNIFER: Yeah, I think the first one I spent any real time with was C.
ADRIANA: Yeah, my OG language was GW Basic, but I don't recall doing anything super damaging with it. So my real real one was QBasic. Good times. Good times. Cool. Okay, so more on the QA thing. What brought you on the path of QA initially in your career?
JENNIFER: Honestly, it was the job that I could get. It was 2009, so not like the best time to be job-hunting. And I had been looking for things for a while and I got an offer for QA role, so I took it.
ADRIANA: Nice. I got my professional start in QA as well. It was not the QA automation stuff. It was the clickety click, fill out Excel sheets, and sit and wait for the developers to fix bugs. So, yeah, I can definitely relate. And then after a while, I kind of begged my manager, "Please let me write code!"
So was QA automation kind of the natural thing for you? Because obviously, you're software-minded. Is that what led you to it from QA?
JENNIFER: Yeah. I have some experience with software. I had done some part-time programming and obviously I was going to school for software engineering and my hiring manager at the time had a manual QA process, whatever, but was looking to set up an automation, I guess function for it. And so I think she saw my showing up as an opportunity and went from there.
ADRIANA: Oh, that's awesome. Yeah. I feel like QA automation is so developer-minded, because we're so tired of repetition. I think developers are lazy, and I see that as a perfectly awesome thing, right? We do not want to repeat things over and over and over again if there's a way that we can shortcut it through code. So I think it's so awesome that you took advantage of that opportunity and put the developer laziness to good use, right? Very cool. Now what kind of work are you doing now?
JENNIFER: So now I work at Influx Data. I'm a staff software engineer on what we call the deployments team, which is a little bit of an unusual charter but basically DevOps, like platform engineering work, and helping...The thing that we most definitely are responsible for is like our CI/CD pipelines, maintaining the good health of Argo and the automation that generates our Kubernetes manifests that we're going to deploy, and things like that.
As well as a lot of things around development tooling and some infrastructure work and a lot of whatever else comes up.
ADRIANA: So that sounds like a pretty good breadth of responsibilities. Because usually you see like, it feels like the description of your work is like a combination of what you would see in a DevOps team these days, plus an SRE team, plus a platform team all rolled into one.
JENNIFER: Yeah, and so we do have an SRE team and they take more of the infrastructure than we do, but we work very closely together because we're using a lot of the same tools and using those tools on a lot of the same things. And so that's a blurry line.
ADRIANA: Yeah. Cool. And so as part of your job, do you get to be on-call?
JENNIFER: I haven't been much in this role. I expect that I will eventually have some on-call responsibilities but the team is kind of in a rebuilding phase and so there's just been an understanding that we don't have enough people to support an on-call rotation and so we'll get to things like during our working hours.
ADRIANA: Got you. Now, how about in previous roles? Have you had a chance? Have you been on-call in previous roles? And what was that like?
JENNIFER: Yeah. So in my previous role...this was at a company called Screencastify. They make a screen recording product, and I was leading the DevOps team there, and I spent a lot of time on-call there. I think in particular, we were building kind of a v2 of the platform, which included migrating data from Firestore into a grown-up database with schemas, and that was going about the way these data migrations usually go. And then we had some staffing disruptions, where several people who were very senior and critical to that project resigned somewhat in protest of some management behavior, and then the whole thing kind of collapsed and I was on-call to support that.
ADRIANA: Oh, yikes.
JENNIFER: That was a rough month.
ADRIANA: Oh my goodness.
JENNIFER: But it was just for a month. Um, I understand that the team is still dealing with, you know, like, the after-effects of that, but I'm not a part of it.
ADRIANA: Oh my goodness. That's got to be so super stressful in that situation. How do you deal with that? Because it takes its toll on your mental health eventually, if not right away, given not just being on-call, but the stresses of changes in your team. So how did you cope during that time?
JENNIFER: I feel like I handled it pretty well. I had a sort of active, ongoing incident that I had to continuously respond to for a long time there, and so I kind of just had to do that. And I think being able to do things in a self-directed way, things that were obviously important and necessary that I could just do without having to go through the planning process you would do for future work, was actually kind of helpful for me in that regard. I could just put out fires, and I didn't need to worry about the politics that had led up to that situation.
ADRIANA: Right, so you're kind of shielded or at least you're able to work kind of in a little bubble to shield yourself from some of the crap so that you could focus on the task at hand.
JENNIFER: Yeah. And there was stuff that I had been wanting to do and now there was this emergency and I took advantage of it and I put in a lot more tracing and monitoring and made some application changes to make the whole thing more observable in general. And that was nice. Getting to just do Observability and reliability work as its own dedicated priority was a really nice side effect of that otherwise unpleasant situation.
ADRIANA: Oh, that's so cool. Yeah, it's very interesting, because I think a lot of times organizations, when they're embroiled in the royal dumpster fire of production shit storms, are in such a reactive mode. I think it's so cool to be able to take advantage of a shitty situation and basically say, "No, I gotta do this so that we can improve the overall reliability of the system." That's really cool, because I think many organizations would almost not be in support of that, in spite of the fact that that's probably exactly what they would need to do.
JENNIFER: It's like unpleasant in the moment, but it is very powerful to be able to say that if you want the system to work at all, then we have to make it work reliably, because right now it just doesn't and it isn't. And that's the problem.
ADRIANA: And so how did folks like management and whatnot react once did they...Did they start to see the benefits, I hope, of all the wonderful things that you did around improving reliability and Observability?
JENNIFER: Um, I think so. My direct management chain had been pretty on board with the notion of improving our Observability and making reliability a priority, so I didn't have to fight very hard with my manager or my director. But once you left the engineering organization, that was where it sort of broke down. The rest of the executive team was very focused on features and deadlines and just delivering things that they could sell to customers. And they didn't view a reliable product as being on that list for some reason, which is always an extremely weird view to me, because if the product doesn't work, then even if you can sell it to someone, they're not going to keep paying for it. And so, why? I don't know.
ADRIANA: Yeah, I totally agree with you. It's kind of shitty that you ended up in a situation where, afterwards, management didn't really see the value in what you were doing. But from a hindsight perspective, it's interesting to see, at the same organization, what happens with leadership changes when you have leadership that's fully supporting this idea of, "Hey, let's make sure our systems are reliable."
So, supporting an Observability and reliability culture versus, within the same organization, a change in leadership saying, "No, that's not our priority." It's an interesting experiment, and aside from the obvious things that we would think, like, yeah, that's obviously not a great idea, getting to experience firsthand what that was like must have been a very interesting and unique vantage point.
JENNIFER: I think it was definitely interesting. I very definitely learned a lot from it. I'm still kind of, I don't know, like synthesizing what that is so it would be hard to teach those lessons to someone else but I certainly came away with a lot of experience.
ADRIANA: Silver lining, then.
JENNIFER: Yeah.
ADRIANA: So out of curiosity, when you're talking about bringing Observability into the picture, what did you do in terms of Observability?
JENNIFER: Yeah, so in that specific case, there were a couple of things. The system had a main sort of application web server that would handle the bulk of talking to clients and ingesting video and things like that, and then a task system that would do some processing and analysis of video and things like that. And I could not see at all what was happening in that task system, and I really needed to, in order to understand what was going on with the whole system. And so I basically just stopped doing other things for a few days and wrote up an OTel instrumentation for the library that powered it, which is Bull Queue. So now there is one of those, and I wrote it, and I did so in anger.
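For readers curious what that kind of instrumentation involves: at its core, it wraps each queue processor in a span. The TypeScript sketch below is illustrative only. It uses a hand-rolled stand-in for the tracer so it's self-contained; a real instrumentation would use the `@opentelemetry/api` package instead, and the attribute names follow OTel's messaging semantic conventions.

```typescript
// Minimal stand-in for an OTel-style tracer, so this sketch runs on its own.
// A real instrumentation would use @opentelemetry/api instead.
interface Span {
  name: string;
  attributes: Record<string, string>;
  error?: string;
}

const recordedSpans: Span[] = [];

async function withSpan<T>(
  name: string,
  attributes: Record<string, string>,
  fn: () => Promise<T>,
): Promise<T> {
  const span: Span = { name, attributes };
  try {
    return await fn();
  } catch (err) {
    span.error = String(err); // record the failure on the span
    throw err;
  } finally {
    recordedSpans.push(span); // a real tracer would export this span
  }
}

// A Bull-style job: instrumenting the queue means wrapping every
// processor call in a span carrying the queue and job identity.
interface Job {
  id: string;
  queueName: string;
  data: unknown;
}

function instrumentProcessor(
  process: (job: Job) => Promise<void>,
): (job: Job) => Promise<void> {
  return (job: Job) =>
    withSpan(
      `${job.queueName} process`,
      { "messaging.system": "bull", "messaging.message.id": job.id },
      () => process(job),
    );
}
```

With a wrapper like this around the task workers' processors, every job execution shows up as a span, including failures, which is exactly the visibility that was missing from the task system.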
ADRIANA: Angry coding. Awesome.
JENNIFER: Yeah. And that actually was very helpful, probably in ways that my management did not appreciate, because it illuminated a lot of areas where the problems were not occurring. The problem was basically database-related.
The question was like what is causing all the stress on the database? And the other thing I did was split up all of the database access into multiple different accounts so that I could actually tell the difference between whether traffic was coming from the web service or the task workers or some migration jobs or whatever else was happening. And between those two things I was able to develop a basic understanding of what the problem was.
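The per-component database accounts described here can be sketched roughly as follows. Every name below is hypothetical; the idea is simply that each backend connects as its own database user, so the database's per-user metrics attribute load to the right component.

```typescript
// Hypothetical per-component database accounts: all names here are invented
// for illustration. Each backend connects as its own user, so traffic from
// the web service, task workers, and migrations can be told apart.
type Component = "web" | "task-worker" | "migration";

interface ConnectionOptions {
  host: string;
  database: string;
  user: string;
  password: string;
}

function connectionOptions(component: Component): ConnectionOptions {
  const suffix = component.replace(/-/g, "_");
  return {
    host: "db.internal",   // placeholder host
    database: "app",       // placeholder database name
    user: `app_${suffix}`, // e.g. app_web, app_task_worker
    password: `<secret for app_${suffix}>`, // fetched from a secrets store in practice
  };
}
```

Each account would be granted only the access its component needs, and the database's own accounting then answers "who is stressing the database" directly.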
ADRIANA: And then what did you use for visualizing your observability data? Did you guys use something that was like a SaaS product or just something that was hosted internally?
JENNIFER: Yeah, they had been using Datadog and so kept using it. That was a decision that was made before I joined the company and so that just was the one that we stuck with.
ADRIANA: Fair enough, but it did the job well enough, I guess, with the data that you were receiving.
JENNIFER: Yeah, I mean, we could definitely answer the questions that we were trying to answer once we started sending them the data that they would need to do that with.
ADRIANA: That's awesome. Do you know if anyone is still taking advantage of the Observability setup that you put in place?
JENNIFER: I understand that they are. When I left, one of the things that I had been wanting to do, or was actually starting to do, was move to the OpenTelemetry Collector rather than Datadog's proprietary agent, so that we could experiment with different backends and things. And it seems like that work continued, because last I heard, they had moved to Honeycomb and off of Datadog.
ADRIANA: Interesting. That is very cool that the stuff you put in place continued. I'm sure you feel really great about that, having a little legacy.
JENNIFER: Yeah. I wish it was a happier legacy, but I am glad that it's helping the people who are still there.
ADRIANA: Well, switching gears a bit, I know that when we were chatting earlier, you mentioned you had some thoughts around how engineers learn. So I was wondering if you could share a little bit more on that.
JENNIFER: Yeah, sure. So this thinking I've been doing kind of comes out of a lot of the public conversation that's happening around AI and LLMs and their use as developer tools. And I think one of the areas that doesn't get talked about enough in this regard is that the only things those kinds of AI development tools are really good at doing are the same tasks that humans need to do in order to learn about programming and the complex systems that they work on. And so it's a little bit of technology eating its own seed corn when we push these tools. Because it might be convenient for senior people who already have all of that knowledge and those skills, but the next generation of engineers, who we should be looking out for, are losing all of these opportunities to do really good basic learning work to computers that can't even really learn from it.
ADRIANA: Interesting. So you're saying that these AI tools are almost like hindering how we learn as a result?
JENNIFER: Um, yeah, kind of. I mean, I think there is a lot of danger that those AI tools take all of the good learning tasks, and, I guess, the jobs and roles that people would learn from.
ADRIANA: Right. Yeah, I feel like there's a fine line between leaning too hard into AI and using it as an aid, right? For example, I've had a couple of cases where I've written something down and needed to summarize it, but it needed to be like 300 words and I'm at 350, and I'm like, shit. Sometimes it can be really tricky, right? So popping it into ChatGPT and saying, "Hey, can you just make this fit into 300 words?" You've put the time and effort into writing it, and then ChatGPT just takes it that extra little bit to get you across the finish line.
I feel like that's all right, versus, "Oh, ChatGPT, write me an entire story," where you don't really have to think, research, whatever. It's kind of like a lost opportunity for learning, because you're relying on it to do basically the whole thing for you, and maybe you'll verify it, maybe not, depending on what kind of person is using the tool.
JENNIFER: Yeah, I think that is one of the actual uses of these kinds of LLM tools that makes sense to have the author of a document use it to produce a summary that they can verify the correctness of or to do some style transfer to make it sound like a business email instead of something you wrote at 4:00pm when you're trying to just leave the office. But that's not really how it gets...That's not the limit of how people advocate that they be used. And it instead gets posed as, like, an ops tool. And people are proposing that you should do AIOps and you'll have machines that will just scale your systems for you or whatever or tell you that you're running out of database connections and jump right to an answer which may or may not be the correct one.
And it does so without letting any people go through the path of discovering what happened there. If you're running out of connections because you have an N+1 query that is opening and closing new connections all the time, that's a very different problem than, say, a runaway serverless system that is overloading your database and causing some sort of thundering herd problem. And the AI probably can't tell you the difference. It probably doesn't know that there is a difference, but it takes away the opportunity for people to do that investigation and learn what those things mean and what to do about it.
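The N+1 pattern mentioned here is easy to see in miniature. The sketch below uses a fake in-memory store with a query counter (no real database or particular ORM is assumed): the naive version issues one query for the list plus one per row, while the batched version issues two queries no matter how many rows there are.

```typescript
// A fake data store that counts queries, to make the N+1 pattern visible.
const users = [{ id: 1 }, { id: 2 }, { id: 3 }];
const postsByUser: Record<number, string[]> = { 1: ["a"], 2: ["b", "c"], 3: [] };

let queryCount = 0;

function queryUsers() {
  queryCount++;
  return users;
}
function queryPostsForUser(id: number): string[] {
  queryCount++;
  return postsByUser[id] ?? [];
}
function queryPostsForUsers(ids: number[]): string[] {
  queryCount++; // one batched query, e.g. WHERE user_id IN (...)
  return ids.flatMap((id) => postsByUser[id] ?? []);
}

// N+1: one query for the list, then one per row. N+1 queries total.
function loadPostsNaive(): string[] {
  return queryUsers().flatMap((u) => queryPostsForUser(u.id));
}

// Batched: two queries, regardless of how many users there are.
function loadPostsBatched(): string[] {
  const ids = queryUsers().map((u) => u.id);
  return queryPostsForUsers(ids);
}
```

Both functions return the same posts, but the naive one makes four queries for three users, and its query count grows with the table, which is exactly the kind of load that shows up as mysterious database stress.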
ADRIANA: Yeah, that's a really good point. And I think, to a certain extent, for those of us who have not, quote unquote, grown up around ChatGPT or similar tools, because of how we grew up in tech, it provides a guardrail for us, where we're still probably more inclined to do the research. Trust but verify, not take it at face value. But for folks who are coming up in this era of these various AI tools, I think it becomes a lot more difficult, because they're not as encouraged to put in that extra thought or that bit of creativity before handing it off to the AI tool; that's not what they've been brought up with. And I feel like that can be very dangerous to the younger generation as a result.
JENNIFER: Yeah, exactly. And so when you're an experienced programmer and you're using ChatGPT to remind yourself what the syntax is for array map, that's a very different dynamic than an inexperienced programmer who's using ChatGPT to create, from whole cloth, a function that will take one array, do something to it, and give them a different one.
Superficially, these look like the same thing, but there's just so much experience and context that the senior person brings, and I think that we all forget that not everyone has that. And you had to develop it somehow, and it wasn't by having a computer just hand you an answer.
ADRIANA: Yeah, exactly. It's those late-night sessions with, like, StackOverflow open on different tabs, trying to figure out, what is the right question that I should be asking? And please let someone have had that same problem, so maybe I know what's going on.
JENNIFER: For someone who hasn't even internalized what a callback is and how it works and how you should think about it, they're in such a different position, they can't even really do anything with the result other than just paste it into their editor and see if it works.
ADRIANA: Yeah, it's basically blind programming at that point. I'm curious to see how things are going to pan out, because there have been calls for legislation around blocking or limiting these AI tools to a certain extent. So I'm wondering how that's going to go. What are your thoughts?
JENNIFER: I hope that something happens, because yes. I also have thoughts on the way that these tools get built: all of the harvesting of data without permission that goes into the training sets, how opaque they are, what the results are, how they get used, and by who and on who. These are all big questions. So yes, I do think that having some sort of government regulation and oversight is going to be important, because the way these AI models are built involves a lot of harvesting of people's work, uncompensated, from the internet. It's a very extractive thing. And then they get turned into these computer systems that people use to make decisions, and you can't really inspect those decisions, and people don't really understand what they're doing and how they work. And then who uses them, and to do what, and who benefits and who suffers for that... I don't want to say those are open questions, because the answer generally is going to be the same as it usually is: people with privilege will benefit from them, and people without it will suffer. That's not great for us, but that is where things are going.
ADRIANA: And what I find interesting about this whole thing, too, is that even some of the folks who are responsible for the creation of these technologies are sort of like, whoa, chill out. We gotta take a step back on this thing before it blows out of proportion, which I think is quite interesting.
Now, I don't want to sound alarmist, but every time I see how advanced things have been getting with AI, I can't help but get Terminator vibes. I don't think it'll be quite so drastic, but I'm like, man, Skynet might not be too far away.
JENNIFER: I think Skynet is maybe the wrong thing to be concerned about, though. I think it's important to note that what the creators of all of these systems are advocating for is that other people should step back. They don't want governments to tell them to stop doing what they're doing. They just want governments to prevent other people from doing the same things, and that's a different thing. And then you look at other AI systems that we have in the world, like self-driving cars. They get to a point where they can do simple things fairly reliably in controlled settings, and then you unleash them on the real world and they're constantly going the wrong way down one-way roads, stopping for obstacles that don't exist, and completely ignoring ones that do.
I don't think that language models are going to be all that different. They don't have a real understanding of what's happening around them. They're just doing pattern matching and there's going to be patterns that they haven't encountered before. And what does it do in that case? Who knows?
ADRIANA: And that's for self-driving cars, which can be scary enough if things go south. But then there are self-driving planes out there as well, which, yeah, I feel like is a whole other level of self-driving. That could be interesting.
JENNIFER: Yeah. And it's easy to see the danger in self-driving vehicles and why you would want to be careful about that. But then you turn to language models, and it's just "the algorithm" doing things, and what you get is, like, self-approving mortgages. And that's not going to be different. That's still going to hurt people.
ADRIANA: Yeah, absolutely. Wow. Damn. Yeah. The possibilities are endless, and not in a good way either. Cool. Well, I know we don't have too much time left, but I did also want to touch upon something you mentioned when we were chatting earlier: you've got a little project that you've been working on around the Fediverse. I was wondering if you could tell us a little bit about that.
JENNIFER: Yeah. So I have recently started a Fediverse project. I call it Letterbook Social. It's a Mastodon-like microblogging service. And the thing that makes it different, other than just not being Mastodon, is that I'm trying to optimize for the needs of the operators. In talking to people who run Mastodon servers now, and I've met some of them in the last, I don't know, eight months, I've learned that's actually not a great experience. Mastodon doesn't have very good Observability, it is hard to scale, it's hard to deploy, and the admin and operator tools, and the moderator tools for that matter, are very primitive. And so what I want to do is solve for that. I want to make it easy to set up, easy to scale, and easy to understand what the system is doing, and to be able to oversee it as the human being running it. I think that's particularly important in this case, because these things are very frequently overseen by one single person, which can be very stressful, to say the least.
ADRIANA: That is very cool. So are you building it on top of existing Mastodon code, or are you starting from scratch?
JENNIFER: This is from scratch, since it seems like getting changes into Mastodon is sort of an uphill climb. And so I decided that, since I don't particularly know or like Ruby anyway, I'll just do my own thing. And so now I have a C# project. Getting back to that lightning round question, I get to work with C#. I've been doing a lot of that, and it's going to be a while before it's a usable thing. But it's getting to a point where, in the near future, I'll have something that stands up and operates, and I can start exchanging messages with other services.
ADRIANA: That is so cool. Really look forward to hearing more about that.
JENNIFER: Yeah. Well, I'm sure I will be talking about it a lot on the Fediverse once I have something a little bit more concrete to talk about.
ADRIANA: Very cool. Do you have anything on it right now, like any documentation? Or are you still at such initial stages that it's just the code?
JENNIFER: Yeah. So I'm doing this as open source. I would be happy to have people help and contribute. It's on GitHub. Letterbook is, I think, a pretty easy word to search for, and if you want a URL, there's letterbookhq.com.
ADRIANA: Send me the URL and I'll include it in the show notes. Very cool.
JENNIFER: I haven't had the opportunity to focus on the kinds of open source project maturity that would make it easy for people to jump in and start contributing. But if somebody is feeling adventuresome, I would love to have more help, and I would be more than happy to talk through how things are structured and what contributions people can make.
ADRIANA: Very awesome. So all you C# lovers out there, cool opportunity. Very cool. Well, we are coming up at time. Well, thank you so much, Jennifer, for joining today.
JENNIFER: This was lots of fun.
ADRIANA: Totally loved talking about AI stuff, your Observability endeavors, and your new little Fediverse project. Thank you so much for geeking out with me today. Don't forget to subscribe. Be sure to check out the show notes for additional resources and to connect with us and our guests on social media. Until next time...
JENNIFER: Peace out, geek out.
ADRIANA: Geeking Out is hosted and produced by me, Adriana Villela. I also compose and perform the theme music on my trusty clarinet. Geeking Out is also produced by my daughter, Hannah Maxwell, who, incidentally, designed all of the cool graphics. Be sure to follow us on all the socials by going to bento.me/geekingout.