

Arrested DevOps
Matt Stratton, Trevor Hess, Jessica Kerr, and Bridget Kromhout
Arrested DevOps is the podcast that helps you achieve understanding, develop good practices, and operate your team and organization for maximum DevOps awesomeness.
Episodes

Oct 1, 2025 • 40min
How AI Is Changing the SDLC With Hannah Foxwell and Robert Werner
The Trust Problem Returns
Hannah Foxwell, who has spent over a decade in DevOps and platform engineering, draws a striking parallel to earlier transformations: “It used to be that testers didn’t trust developers and ops didn’t trust testers and there were all these silos. Now we’re putting AI agents in the mix. Can we trust them? Should we trust them?”
This isn’t just déjà vu—it’s a fundamental challenge that resurfaces with every major shift in how we build software. As Robert Werner points out, management had to give up control and push trust to the edges of organizations during the agile transformation. With cloud adoption came self-service and automation. Now, with AI, we’re dealing with non-deterministic black boxes that we need to trust to be “right often enough.”
The Fluency Gap
One of the biggest challenges isn’t the technology itself—it’s the lack of shared understanding. Hannah launched “AI for the Rest of Us,” a community now with over 1,000 members, after realizing that AI fluency is essential for making good decisions about where and how to use these tools.
“I went to a talk at a conference thinking I’d learn about AI in one talk and become an expert by tomorrow,” Hannah recalls. “It just didn’t happen like that. There’s a whole new domain with new vocabulary, new concepts, new techniques.”
The community focuses on making AI accessible without dumbing it down—providing talks and content that explain complex concepts in simple language so more people can participate in the conversation about AI’s role in software development.
The Speed-Responsibility Paradox
The technology is evolving so rapidly that best practices barely have time to solidify before they’re obsolete. Robert describes how hiring strategies at startups are changing every few weeks as new capabilities emerge. “Things that weren’t feasible last week are suddenly possible,” he notes.
But this speed creates a dangerous tension. Organizations are pushing hard for AI adoption while the guardrails, workflows, and cultural practices needed to use it safely are still being figured out. As Matty observes, this leads to perverse incentives—developers required to “use AI” who find ways to tick the box without actually deriving value, just like teams that once added meaningless tests to meet sprint requirements.
Who Owns the Code?
A critical question emerges: if AI generates the code, who owns it? Who’s responsible when something goes wrong?
Hannah frames it in familiar DevOps terms: “Does anybody really want to own a service if they didn’t write it and they don’t understand how it works? It’s the ops challenge again—AI throwing code over the wall to us.”
Robert’s answer is pragmatic and honest: humans will need to take responsibility for validating AI-generated code, even if it’s tedious work most developers won’t enjoy. His company, Leap, is building tools specifically to make that verification process as convenient and enjoyable as possible, because he believes there’s simply no other way to do it safely.
The Documentation Double-Bind
There’s an ironic twist in how AI agents work best: they need excellent documentation. Organizations improving their documentation to support AI-powered development are inadvertently following DevOps best practices that benefit human developers too.
But as Matty discovered building his own project, AI-generated documentation can be dangerously unreliable. The tools will confidently document features that don’t exist, pulling from incomplete PRDs or speculative notes in the codebase. Great documentation trains better agents, but agents shouldn’t write that documentation—creating a challenge that requires human judgment and oversight.
Lessons from Past Transformations
The parallels to earlier shifts are instructive. Hannah remembers enterprise clients who insisted continuous delivery would “never work here.” Now it’s common practice. The same resistance appeared with cloud adoption and agile methodologies.
What worked then still matters now:
Guardrails enable freedom: Constraints and safety nets let people explore confidently
Make the right way the easy way: Transformations succeed when good practices are more convenient than bad ones
Community and shared learning: Success stories and failures shared openly help everyone navigate change faster
Start with good practices: Teams with solid engineering fundamentals—blue-green deployments, A/B testing, safe-to-fail production environments—are better positioned to benefit from AI-assisted development
Practical Advice for Explorers
For developers and teams trying to navigate this transformation, Hannah and Robert offer grounded guidance:
Keep your eyes open: Watch for patterns of success and failure. Who’s making this work, and what do they have in common?
Build community: Find or create spaces where people can share honestly about what’s working and what isn’t, without the pressure to pretend everything’s perfect.
Be selective about information sources: With so much noise and hype, focus on quality outlets. Ignore things for a few weeks, and if they keep coming up, that’s when to invest your time.
Practice regularly: The technology evolves so fast that hands-on experience goes stale quickly. Even if it’s not your main job, refresh your skills every few months.
Be specific and constrained: AI coding assistants work best with clear, narrow requests. Frustration comes from asking too much or being too vague.
The Future We’re Building
We’re in the Nokia phone stage of AI-assisted development, as Robert puts it—the technology will look completely different in just a few years. But unlike waiting passively for that future to arrive, developers and teams are actively creating it through the choices they make today about how to integrate these tools.
The question isn’t whether AI will transform software development—it already is. The question is whether we’ll learn from past transformations to build better practices, stronger safety nets, and more trustworthy systems. Or whether we’ll repeat old mistakes at unprecedented speed.
As Hannah emphasizes, having more people with AI fluency means better conversations and better decisions at a pivotal moment in history. The rollercoaster is moving whether we’re ready or not. The best approach is to keep your eyes open, stay connected to community, and remain thoughtfully critical about what works and what doesn’t.
Learn more about Hannah’s work at AI for the Rest of Us. Use code ADO20 for 20% off tickets to their London conference on October 15-16, 2025.

Aug 25, 2025 • 29min
Digging Into Security With Kat Cosgrove
Security: the one topic that’s guaranteed to turn any DevOps conversation into a mix of fear, eye rolls, and nervous laughter. In this episode of Arrested DevOps, Matty welcomes back Kat Cosgrove to talk about the “never not hot” world of security and why it’s always lurking just over your shoulder (like that one compliance auditor who swears they’re just “observing”).
Kat and Matty cover:
Why vulnerabilities never seem to stop showing up in your containers (spoiler: they don’t).
How teams can respond without spiraling into full-blown panic.
The realities of securing Kubernetes and containerized environments (without pretending there’s a magic “easy button”).
Why security culture matters as much as the tools you’re using.
Along the way, expect the usual mix of snark, sarcasm, and the occasional tangent about how everything in tech eventually becomes a security problem.
If you’ve ever patched the same vulnerability three times in a week, or found yourself yelling at a CVE like it’s a personal enemy, this one’s for you.

Jun 3, 2025 • 40min
AI, Ethics, and Empathy With Kat Morgan
We’ve all been there: burning out on volatile tech jobs, tangled in impossible systems, and wondering what our work actually means. On this episode of Arrested DevOps, Matty Stratton sits down with Kat Morgan for a heartfelt, funny, and sharply observant conversation about AI: what it helps with, what it hurts, and how we navigate all of that as humans in tech.
They dive deep into how large language models (LLMs) both assist and frustrate us, the ethics of working with machines trained on the labor of others, and why staying kind—to the robots and to ourselves—might be one of the most important practices we have.
“We actually have to respect our own presence enough to appreciate that what we put out in the world will also change ourselves.” – Kat Morgan
Topics
Why strong opinions about AI often miss the nuance
Using LLMs to support neurodivergent workflows (executive function as a service!)
Treating agents like colleagues and the surprising benefits of that mindset
Code hygiene, documentation, and collaborating with AI in GitHub issues
Building private, local dev environments to reduce risk and improve trust
Ethical tensions: intellectual property, environmental impact, and the AI value chain
Why we should be polite to our agents—and what that says about how we treat people
Key Takeaways
AI isn’t magic, but it can be a helpful colleague. Kat shares how she uses LLMs to stay on task, avoid executive dysfunction, and manage complex projects with greater ease.
Good context design matters. When working with AI, things like encapsulated code, clean interfaces, and checklists aren’t just best practices. They’re vital for productive collaboration.
Skepticism is healthy. Kat reminds us that while AI can be useful, it also messes up. A lot. And without guardrails and critical thinking, it can become more of a liability than a partner.
Build humane systems. From privacy risks to climate concerns, this episode underscores that responsible AI use requires ethical intent, which starts with practitioners.

Feb 1, 2024 • 35min
Open Communities With Andrew Zigler
Openness plays a significant role in propelling DevOps and organizational processes forward. This is not to imply that everything must be open, but the default should be openness unless a valid reason indicates otherwise.
Andrew Zigler, developer advocate at Mattermost, and Matty from Arrested DevOps recently shared insights on this subject. They discussed creating impactful developer advocates, managing community writing programs, and dealing with the challenges of open source communities.
The Importance of Open Source in Communities
Andrew emphasizes that the loudest and most prolific voices in open source projects are usually the paid internal staff. However, he champions setting up pathways in the community to validate the experience of all contributors and reward them, whether with thought leadership opportunities, a platform, or even swag. The key is to engage individuals at all levels and ensure they feel ownership of what they contribute.
One challenge he identified is over-influence, which often stems from the fact that paid staff are the ones driving the open source project. This imbalance can drown out the voices of other contributors, particularly those who can’t dedicate as much time and energy to the project as the paid staff can.
Andrew suggests a solution: the company can create more developer advocates through a multiplier effect. This means ensuring that everyone across the board understands the importance of the open source community and empowering them to contribute. The more developers contribute, the larger and more diverse the community becomes, leading to better outcomes and solutions.
The Critical Role of Leadership in Open Source Communities
Matty highlights how vital leadership is in these initiatives. By allocating resources, prioritizing open source community engagement, and maintaining a strategic focus, leaders can do much to foster a healthy open-source community. Successful leaders understand that engagement levels differ, so they create opportunities for different levels of contributors to partake and contribute to the community.
To ensure the project remains harmonious and aligned with company goals, the leadership should give equal weight to both staff and contributors’ voices. In the end, everyone involved in the project is part of the community.
Engineering Blogs: The Balance of Output
The conversation took an interesting turn when they started discussing engineering blogs, a tricky subject for many organizations. Matty points out that these blogs tend to publish sporadically, alternating between long content droughts and sudden floods of posts.
Such inconsistency happens when the contributors, mostly engineers, write when they can spare the time. Balancing this dynamic is crucial, and one suggested solution is to involve people whose primary job is creating content. They can collaborate with subject matter experts to create consistent, relevant content.
Conclusion
Operating with a default-open posture for your projects does not mean that everything has to be open. Nevertheless, transparency and openness should be the norm unless there’s a valid reason otherwise. If the community deals with the occasional echo chamber and accepts that contributions will always ebb and flow, it will thrive and keep moving forward.
In line with the open source spirit, scaling advocacy is crucial in DevOps. It involves not only the individuals whose title is developer advocate but everyone within the company. By creating more advocates and amplifying community efforts, the DevOps movement keeps its momentum.
Links
Matty’s blog post about SharePoint (including broken images!)
Community Pulse podcast

Jan 18, 2024 • 48min
Machine Learning Ops With Chelsea Troy
Chelsea Troy, a writer at Mozilla specializing in machine learning ops, discusses transitioning to ML operations and the challenges of deploying models. Topics include evaluating operationalization products, balancing metrics in decision making, and navigating the complexities of ML operations.

Jan 4, 2024 • 1h 49min
It's Been Ten Years of ADO, Charlie Brown
Every ADO Cold Open Ever
“Episode 0” of ADO
“Old Geeks Yell At Cloud” video

Dec 22, 2023 • 48min
So You’re in Charge Now… With Ben Greenberg
The First 90 Days
“It’s not a promotion - it’s a career change” (Lindsay Holmwood)
“Not All Leaders Are Managers” (Aaron Bassett)

Dec 7, 2023 • 30min
DevOps Isn’t a Department With Jeremy Duvall
John Willis’s talk at DevOpsDays Atlanta 2016 on Burnout
https://platformengineering.org/talks-library/internal-platform-enterprise-courtney-kissler
ADO - How to Eff Up Devops with Pete Cheslock, Nathen Harvey, and Randi Harper

Nov 23, 2023 • 39min
Runtime Analysis With Brian Kelly
OWASP Top 10
Stripe: The Developer Coefficient (quantifies the annual cost of bad code to companies at $59B)
Facebook: FAUSTA: Scaling Dynamic Analysis with Traffic Generation (how runtime analysis was used at WhatsApp to catch design flaws before they reached production)
Dragan Stepanović - Async code reviews are choking your company’s throughput (from LAS 2022, a talk which highlights the systemic problems with developers trying to do manual code reviews of large PRs)
AppMap, the runtime analysis company which Brian works for
Cloud Native Security with Michael Isbitski ADO Episode

Nov 9, 2023 • 47min
Complexity With Michael Stahnke
Michael Stahnke, a DevOps expert currently at Phlox, delves into the complexities of modern technology and operational efficiency. He questions whether our systems truly need to be as complicated as they have become. The discussion highlights the challenges of Kubernetes adoption and emphasizes the need for simple, context-driven solutions in development environments. Stahnke also reflects on tech evolution, advocating for a focus on business needs and the importance of recognizing individual contributions within organizations.