In this episode, software engineer Dan Cross joins Bryan Cantrill and Adam Leventhal for a deep dive into debugging. After some humorous asides about parenting mishaps and being chased by a rooster, the group reflects on the evolution of debugging practice, critiques existing tools and terminology, considers the emotional toll of debugging distributed systems, and discusses how advances in languages like Rust ease the memory-management problems engineers commonly face.
Quick takeaways
Social platforms like Twitter Spaces can create unexpected time commitments: users drop in to observe and end up staying well past their initial interest.
Debuggers have historically been written by and for compiler experts; better tools need to address the problems system developers actually face.
Diagnosing performance and reliability problems in real-world systems requires a blend of traditional debugging techniques and modern observability frameworks.
Deep dives
The Compulsion to Engage with Twitter Spaces
Many people feel an irresistible urge to join Twitter Spaces regardless of the topic's relevance, a lapse in self-control that leads to 'space shopping': browsing and entering discussions that may not pertain to their interests at all. Once inside, incremental engagement creates a dilemma, since participants feel a social obligation to stay involved even when they only intended to observe. The behavior shows how social platforms can create unforeseen time commitments and distract from other responsibilities.
The Complexity of Debugging Tools
The conversation traces the evolution and shortcomings of debuggers, now essential tools in software development. The participants recount building debuggers of their own and argue that existing ones too often reflect the priorities and experiences of compiler experts rather than system developers, leaving real-world debugging problems poorly served. What is needed, they contend, is a deeper understanding of software behavior and tools that extend beyond traditional debugging methods.
The Need for Introspection Amidst Complexity
As software architectures grow more sophisticated, introspective tools become essential for effective debugging. The participants discuss how tools like DTrace have transformed their ability to observe and diagnose complex issues in real time, while noting the limits of relying on traditional debugging methods alone. They make a strong case for routinely preserving and analyzing dead processes, which can reveal underlying bugs that would otherwise stay hidden, and argue for debugging practices that interrogate software behavior more holistically.
Human-Machine Interaction in Debugging
The discussion turns to the barriers developers face when adopting debugging tools. Engineers often fall back on print debugging because it is straightforward and universally applicable, while more powerful debuggers carry a steep learning curve. That reluctance can block deep root-cause analysis, especially under pressure to ship a quick fix, so improving the user experience of these tools is essential to their broader adoption and effective use.
Observability vs. Traditional Debugging
There is a tension between traditional debugging techniques and the rise of observability frameworks in modern software development. The participants note that while unit tests and CI/CD practices catch many problems, they cannot cover every pathology a system exhibits in production. The future of debugging, they argue, lies in a synthesis of observability and traditional debugging that improves engineers' ability to diagnose performance and reliability issues; getting there will take deliberate work to fold these methodologies into standard practice.
Episode notes
We’ve been holding a Twitter Space weekly on Mondays at 5p for about an hour. Even though it’s not (yet?) a feature of Twitter Spaces, we have been recording them all; here is the recording for our Twitter Space for June 21, 2021.
MDB Modular Debugger
> Adam: I think people are using cargo-cult debugging, rather than getting to the root cause of these things, or being satisfied until they get to the root cause.
> Bryan: I think with software systems, it’s really hard to know what they’re actually doing.
“Runtime Performance Analysis of the M-to-N Scheduling Model” (pdf) 1996 undergrad thesis (Brown CS dept website)
[@6:29](https://youtu.be/UOucW3F7nCg?t=389) Threadmon website and 1997 paper (a retooling of the ’96 paper)
> When I built that tooling, it revealed this thing is not doing at all what anyone thought it was doing.
TNF Trace Normal Form
> Part of the problem with debuggers… debuggers are historically written by compiler folks, and not system folks. As a result, debuggers are designed to debug the problem that compiler folks have the most familiarity with, and that’s a compiler. Debuggers are designed for reproducible problems, way too frequently.
> I view in situ breakpoint debugging as one sliver of debugging that’s useful for one particular and somewhat unusual class of bugs. That’s actually not the kind of debugger I want to use most of the time.
[@11:59](https://youtu.be/UOucW3F7nCg?t=719)
> libdis was my intern project in 2000. The idea was to take the program text, and interpret it in some structural form, and try to infer different things about the program.
Volatility: the memory forensics framework Adam couldn’t quite remember.
[@14:59](https://youtu.be/UOucW3F7nCg?t=899)
> I meant this question earnestly: what is a debugger?
The first bug
> The term is somewhat regrettable… It implies a problem, when there may not be a problem. It may just be I want to understand how the system is operating, independent of whether it’s doing it badly.
Oxide’s embedded OS and companion debugger: Hubris and Humility
[@19:01](https://youtu.be/UOucW3F7nCg?t=1141) Using DTrace to help customers understand their systems.
> If you strings the DTrace binary, you’re not gonna find any mention of raincoats.
[@22:13](https://youtu.be/UOucW3F7nCg?t=1333) Cardinal rule of debuggers: Don’t kill the patient! (see also: Do No Harm)
> Not killing the patient is really important, this was always an Ur principle for us.
> The notion that the debugger has now become load bearing in the execution of the program is a pretty grave responsibility.
[@26:54](https://youtu.be/UOucW3F7nCg?t=1614) Post-mortem debugging
> It is a tragedy of our domain that we do not debug post-mortem, routinely.
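One prerequisite for routine post-mortem debugging is that a dying process actually leaves a corpse to examine. As a minimal sketch (assuming the environment allows core dumps, e.g. via `ulimit -c unlimited` or coreadm), a Rust program can install a panic hook that aborts instead of unwinding, so the OS writes a core file for later inspection:

```rust
use std::panic;

fn main() {
    // On panic, record the failure and abort instead of unwinding.
    // Aborting raises SIGABRT, which lets the OS write a core dump
    // preserving the full state of the dead process.
    panic::set_hook(Box::new(|info| {
        eprintln!("fatal: {info}");
        std::process::abort();
    }));

    // Hypothetical bug, for illustration: an out-of-bounds index.
    let v = vec![1, 2, 3];
    println!("{}", v[10]);
}
```

The payoff is that a post-mortem debugger can then interrogate the entire state of the process at the moment of death, rather than the single line a panic message would have shown.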
Heisenbug (when the act of observing the problem hides the problem)
[@31:11](https://youtu.be/UOucW3F7nCg?t=1871)
> What’s going on in the system? It’s not crashing, there’s no core dump. But the system is behaving in a way I didn’t expect it to, and I want to know why.
[@33:51](https://youtu.be/UOucW3F7nCg?t=2031) Pre-production reliability techniques
> All of our pre-production work has gotten way better than it was, and I think that’s compensation for the fact we can’t understand these systems when we deploy them.
[@37:58](https://youtu.be/UOucW3F7nCg?t=2278)
> The move to testing has in fact obviated some of the need for what we consider traditional debuggers.
(Bryan audibly cringes)
[@39:08](https://youtu.be/UOucW3F7nCg?t=2348) Automated and Algorithmic Debugging conference AADEBUG 2003
HOPL History of Programming Languages
> There was a test suite of excellence when it comes to automated program debugging. And it was some pile of C programs with known bugs, and you would throw your new paper at it, and it would find 84% of the bugs, and there would be a lot of slapping each other on the back on that. Really focused on the simplest of simple bugs.
[@43:15](https://youtu.be/UOucW3F7nCg?t=2595) Bryan’s Postmortem Object Type Identification paper
> Who is my neighbor in memory? Because my neighbor just burned down my house basically.
mdb’s ::kgrep
> I need to pause you there because it’s so crazy, and I want to emphasize that he means what he’s saying. We look for the 64 bit value, and see where we find it. This is a game of bingo across the entire address space.
> We can follow the pointers and propagate types.
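To make the game of bingo concrete, here is a hedged sketch (not mdb’s actual implementation) of a `::kgrep`-style scan: walk a raw memory image and report every aligned 8-byte word that matches the value being hunted, typically a pointer. The `dump.bin` filename and target value are made up for illustration:

```rust
use std::fs;

/// Report the offset of every aligned 64-bit word in `image` equal to
/// `target`: the brute-force hunt for a value across an address space.
fn kgrep(image: &[u8], target: u64) -> Vec<usize> {
    image
        .chunks_exact(8) // aligned 8-byte words
        .enumerate()
        // Assumes a little-endian memory image.
        .filter(|&(_, w)| u64::from_le_bytes(w.try_into().unwrap()) == target)
        .map(|(i, _)| i * 8)
        .collect()
}

fn main() -> std::io::Result<()> {
    // Hypothetical dump file; any raw memory image would do.
    let image = fs::read("dump.bin")?;
    for off in kgrep(&image, 0xdead_beef_dead_beef) {
        println!("match at offset {off:#x}");
    }
    Ok(())
}
```

A real kernel debugger also has to cope with address translation and sparse mappings, but the core move is the same: find every word holding the value, then use type information to decide which hits are the pointer you care about, which is exactly where following pointers and propagating types comes in.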
[@48:49](https://youtu.be/UOucW3F7nCg?t=2929) printf/println debugging – everyone’s doing it
> I think it’s a mistake for people to denigrate printf debugging. If you’ve got a situation that you ca...
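In that spirit, a little discipline makes println debugging far more useful. A minimal sketch (the worker loop is hypothetical): tag every line with elapsed time and a thread id, so ad hoc prints become a crude, greppable event trace:

```rust
use std::thread;
use std::time::Instant;

fn main() {
    let t0 = Instant::now();
    let workers: Vec<_> = (0..4)
        .map(|id| {
            thread::spawn(move || {
                // Timestamp + thread id turns scattered prints into an
                // ordered trace of what happened, and when.
                eprintln!("[{:>9}us] [worker-{id}] starting", t0.elapsed().as_micros());
                // ... the work being debugged would go here ...
                eprintln!("[{:>9}us] [worker-{id}] done", t0.elapsed().as_micros());
            })
        })
        .collect();
    for w in workers {
        w.join().unwrap();
    }
}
```

Writing to stderr keeps the trace out of the program’s normal output, and the monotonic timestamps make it possible to reason about interleaving after the fact.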