Introduction
The Safety of Work podcast, Episode 87. Today we're asking the question: what exactly is systems thinking? We'll be looking at foundational papers from some of the more popular authors in safety. If you haven't listened to Episode 86 yet, go back and listen to it first.
We will review each section of Leveson’s paper and discuss how she sets each section up by stating a general assumption and then proceeding to break that assumption down. We will discuss her analysis of each one.
Discussion Points:
Quotes:
“Leveson says, ‘If we can get it right some of the time, why can’t we get it right all of the time?’” - Dr. David Provan
“Leveson says, ‘the more complex your system gets, that sort of local autonomy becomes dangerous because the accidents don’t happen at that local level.’” - Dr. Drew Rae
“In linear systems, if you try to model things as chains of events, you just end up in circles.” - Dr. Drew Rae
“Never buy the first model of a new series [of new cars], wait for the subsequent models where the engineers had a chance to iron out all the bugs of that first model!” - Dr. David Provan
“Leveson says the reason systemic factors don’t show up in accident reports is just because it’s so hard to draw a causal link.” - Dr. Drew Rae
“A lot of what Leveson is doing is drawing on a deep well of cybernetics theory.” - Dr. Drew Rae
Resources:
Applying Systems Thinking Paper by Leveson
Nancy Leveson– Full List of Publications