
The Safety of Work

Latest episodes

Apr 30, 2023 • 59min

Ep. 109 Do safety performance indicators mean the same thing to different stakeholders?

Discussion Points:
- The varying interpretations of safety performance indicators among stakeholders
- The significance of safety indicators in shaping organizations
- The challenges of measuring availability in air ambulance services
- The importance of questioning assumptions in procurement and the gig economy
- The concept of boundary objects and their impact on perceptions of availability
- Conflicting meanings of availability and safety in air ambulance services
- The tension between safety and operational deliverables in various industries
Apr 9, 2023 • 55min

Ep. 108 Could a 4 day work week improve employee well-being?

This report details the full findings of the world’s largest four-day working week trial to date, comprising 61 companies and around 2,900 workers, that took place in the UK from June to December 2022. The design of the trial involved two months of preparation for participants, with workshops, coaching, mentoring and peer support, drawing on the experience of companies who had already moved to a shorter working week, as well as leading research and consultancy organisations. The report results draw on administrative data from companies and survey data from employees, alongside a range of interviews conducted over the pilot period, providing measurement points at the beginning, middle, and end of the trial.

Discussion Points:
- Background on the five-day workweek
- Two central claims to prove or review: reduced hours with the same productivity, and benefits to employees from the reduced hours
- Digging into the Autonomy organisation and the researchers and authors
- The report says “trial”, but it’s more like a pilot program
- 61 companies, June to December 2022
- Issues with methodology: companies will change over six months coming out of Covid; a controlled trial would have been better
- The pilot only includes white-collar jobs - no physical, operational, or high-hazard businesses
- The revenue numbers
- Analysing the staff numbers: how many filled out the survey, and what positions did the respondents hold in the company?
- Who experienced positive vs. negative changes in individual results
- Interviews from the “shop floor” were actually with CEOs and office staff
- Eliminating wasted time from the five-day week
- What different companies preferred employees to do with their “extra time”
- Assumption 1: there is a business use case benefit - not true
- Assumption 2: benefits for staff - mixed results

Takeaways:
- Don’t use averages
- Finding shared goals can be good for everyone
- Be aware of burden-shifting
- The answer to our episode’s question: it’s a promising idea, but results are mixed, and it requires more controlled trial research

Quotes:
- “It’s important to note that this is a pre-Covid idea, this isn’t a response to Covid.” - Drew
- “...there's a reason why we like to do controlled trials. That reason is that things change in any company over six months.” - Drew
- “...a lot of the qualitative data sample is very tiny. Only a third of the companies got spoken to, and only one senior representative, who was already motivated to participate in the trial, would like to think that anything that their company does is successful.” - David
- “I'm pretty sure if you picked any company, you're taking into account things like government subsidies for Covid, grants, and things like that. Everyone had very different business in 2021-2022.” - Drew
- “We're not trying to accelerate the pace of work, we're trying to remove all of the unnecessary work.” - Drew
- “I think people who plan the battle don't battle the plan. I like collaborative decision-making in general, but I really like it in relation to goal setting and how to achieve those goals.” - David

Resources:
- Link to the Pilot Study
- Autonomy
- The Harwood Experiment Episode
- The Safety of Work Podcast
- The Safety of Work on LinkedIn
- Feedback@safetyofwork
Mar 12, 2023 • 47min

Ep. 107 What research is needed to implement the Safe Work Australia WHS strategy?

Summary: The purpose of the Australian Work Health and Safety (WHS) Strategy 2023–2033 (the Strategy) is to outline a national vision for WHS — Safe and healthy work for all — and set the platform for delivering on key WHS improvements. To do this, the Strategy articulates a primary goal supported by national targets, and the enablers, actions and system-wide shifts required to achieve this goal over the next ten years. This Strategy guides the work of Safe Work Australia and its Members, including representatives of governments, employers and workers – but should also contribute to the work and understanding of all in the WHS system, including researchers, experts and practitioners who play a role in owning, contributing to and realising the national vision.

Discussion Points:
- Background on Safe Work Australia
- The Strategy includes six goals for reduction:
  - Worker fatalities caused by traumatic injuries, by 30%
  - The frequency rate of serious claims resulting in one or more weeks off work, by 20%
  - The frequency rate of claims resulting in permanent impairment, by 15%
  - The overall incidence of work-related injury or illness among workers, to below 3.5%
  - The frequency rate of work-related respiratory disease, by 20%
  - No new cases of accelerated silicosis by 2033
- The Strategy is a great opportunity to set a direction for research and education
- Five actions covered by the Strategy: information and raising awareness; national coordination; data and intelligence gathering; health and safety leadership; compliance and enforcement
- When regulators fund research, they demand tangible results quickly
- Many safety documents and corporate safety systems never reach the most vulnerable workers, who don’t have ‘regular’ long-term jobs
- Standardization can increase unnecessary work
- When and where do organizations access safety information?
- Data: AI use for the future
- The Strategy lacks milestones within the ten-year span
- Enforcement: we don’t have evidence-based data on the effects

Takeaways:
- The idea of a national strategy? Good.
- Balancing safety with innovation and evidence
- Answering our episode question: we need research into specific workforces, and into the evidence behind specific industry issues. “Lots of research is needed!”

Quotes:
- “The fact is, that in Australia, traumatic injury fatalities - which are the main ones that they are counting - are really quite rare, even if you add the entire country together.” - Drew
- “I really see no point in these targets. They are not tangible, they’re not achievable, they’re not even measurable, with the exception of respiratory disease…” - Drew
- “These documents are not only an opportunity to set out a strategic direction for research and policy, and industry activity, but also an opportunity to educate.” - David
- “When regulators fund research, they tend to demand solutions. They want research that’s going to produce tangible results very quickly.” - Drew
- “I would have loved a concrete target for improving education and training - that is something that is really easy to quantify.” - Drew

Resources:
- Link to the strategy document
- The Safety of Work Podcast
- The Safety of Work on LinkedIn
- Feedback@safetyofwork
Feb 19, 2023 • 55min

Ep. 106 Is it possible to teach critical thinking?

Baron's work focuses primarily on judgment and decision-making, a multi-disciplinary area that applies psychology to problems of ethical decisions and resource allocation in economics, law, business, and public policy.

The paper’s summary: Recent efforts to teach thinking could be unproductive without a theory of what needs to be taught and why. Analysis of where thinking goes wrong suggests that emphasis is needed on ‘actively open-minded thinking’, including the effort to search for reasons why an initial conclusion might be wrong, and on reflection about rules of inference, such as heuristics used for making decisions and judgments. Such instruction has two functions. First, it helps students to think on their own. Second, it helps them to understand the nature of expert knowledge, and, more generally, the nature of academic disciplines. The second function, largely neglected in discussions of thinking instruction, can serve as the basis for thinking instruction in the disciplines. Students should learn how knowledge is obtained through actively open-minded thinking. Such learning will also teach students to recognize false claims to systematic knowledge.

Discussion Points:
- Critical thinking and chat AI
- Teaching knowledge vs. critical thinking
- Section One: Introduction - critical thinking is a stated goal of many teaching institutions
- Section Two: The current rationale - what is thinking? Reading about thinking is quite difficult!
- Baron’s “myside bias” is today’s confirmation or selection bias
- Reflective learning - does it help with learning?
- Section Three: Abuses - misapplying thinking in schools and business
- Breaking down learning into sub-sections
- Section Four: The growth of knowledge, beginning in Medieval times
- Section Five: The basis of expertise - what is an ‘expert’? Every field has its own self-critiques
- Drew’s brain is hurting just getting through this discussion
- Section Six: What the educated person should know
- Studying accidents in safety science - student assignments

Takeaways:
- Good thinking means being able to make good decisions about experts
- Precision is required around what is necessary for learning
- Well-informed self-criticism is necessary
- Answering our episode question: can we teach critical thinking? It was never answered in this paper, but it gave us a lot to think about

Quotes:
- “It’s a real stereotype that old high schools were all about rote learning. I don’t think that was ever the case. The best teachers have always tried to inspire their students to do more than just learn the material.” - Drew
- “Part of the point he’s making is that not everyone who holds themself out to be an expert IS an expert… that’s when we have to have good thinking tools… who IS an expert and how do we know who to trust?” - Drew
- “Baron also says that even good thinking processes won’t necessarily help much when specific knowledge is lacking…” - David
- “The smarter students are, the better they are at using knowledge about cognitive biases to criticize other people’s beliefs, rather than to help themselves think more critically.” - Drew
- “Different fields advance by different sorts of criticism… to understand expertise in a field you need to understand how that field does its internal critique.” - Drew

Resources:
- Link to the paper
- The Safety of Work Podcast
- The Safety of Work on LinkedIn
- Feedback@safetyofwork
Feb 5, 2023 • 44min

Ep. 105 How can organisations learn faster?

You’ll hear a little about Schein’s early career at Harvard and MIT, including his Ph.D. work – a paper on the experience of POWs during wartime, contrasted against the indoctrination of individuals joining an organization for employment. We also discuss some of Schein’s 30-year-old concepts that are now common practice and theory in organizations, such as “psychological safety”.

Discussion Points:
- A brief overview of Schein’s career at Harvard and MIT’s School of Management, and his fascinating Ph.D. on POWs during the Korean War
- A bit about the book Humble Inquiry
- Digging into the paper
- Three types of learning: knowledge acquisition and insight learning; habits and skills; emotional conditioning and learned anxiety
- Practical examples and the metaphor of Pavlov’s dog
- Countering Anxiety I with Anxiety II
- Three processes of ‘unfreezing’ an organization or individual to change: disconfirmation; creation of guilt or anxiety; psychological safety
- Mistakes in organizations and how they respond
- There are so many useful nuggets in this paper
- Schein’s solutions: steering committees, change teams, and groups to lead the organization and manage each other’s anxiety

Takeaways:
- How an organization deals with mistakes will determine how change happens
- Assessing levels of fear and anxiety
- Know what stands in your way if you want progress
- Answering our episode question: how can organizations learn faster? 1) Don’t make people afraid to enter the green room. 2) Or make them more afraid to stand on the black platform.

Quotes:
- “...a lot of people credit [Schein] with being the granddaddy of organizational culture.” - Drew
- “[Schein] says… in order to learn skills, you've got to be willing to be temporarily incompetent, which is great if you're learning soccer and not so good if you're learning to run a nuclear power plant.” - Drew
- “Schein says quite clearly that punishment is very effective in eliminating certain kinds of behavior, but it's also very effective in inducing anxiety when in the presence of the person or the environment that taught you that lesson.” - Drew
- “We've said before that we think sometimes in safety, we're about three or four decades behind some of the other fields, and this might be another example of that.” - David
- “Though curiosity and innovation are values that are praised in our society, within organizations and particularly large organizations, they're not actually rewarded.” - Drew

Resources:
- Link to the paper
- Humble Inquiry by Edgar Schein
- The Safety of Work Podcast
- The Safety of Work on LinkedIn
- Feedback@safetyofwork
Jan 22, 2023 • 46min

Ep. 104 How can we get better at using measurement?

You’ll hear some dismaying statistics around the validity of research papers in general, some comments regarding the peer review process, and then we’ll dissect each of six questions that should be asked BEFORE you design your research.

The paper’s abstract reads: In this article, we define questionable measurement practices (QMPs) as decisions researchers make that raise doubts about the validity of the measures, and ultimately the validity of study conclusions. Doubts arise for a host of reasons, including a lack of transparency, ignorance, negligence, or misrepresentation of the evidence. We describe the scope of the problem and focus on how transparency is a part of the solution. A lack of measurement transparency makes it impossible to evaluate potential threats to internal, external, statistical-conclusion, and construct validity. We demonstrate that psychology is plagued by a measurement schmeasurement attitude: QMPs are common, hide a stunning source of researcher degrees of freedom, and pose a serious threat to cumulative psychological science, but are largely ignored. We address these challenges by providing a set of questions that researchers and consumers of scientific research can consider to identify and avoid QMPs. Transparent answers to these measurement questions promote rigorous research, allow for thorough evaluations of a study’s inferences, and are necessary for meaningful replication studies.

Discussion Points:
- The appeal of the foundational question, “are we measuring what we think we’re measuring?”
- Citations of studies: 40-93% of studies lack evidence that the measurement is valid
- Psychological research and its failure to define what measures are used, the validity of their measurement, etc.
- The peer review process: it helps, but can’t stop bad research being published
- Why care about this issue? With a lack of validity, the research answer may be the opposite
- Designing research is like choosing different paths through a garden
- The six main questions to ask to avoid questionable measurement practices (QMPs):
  - What is your construct?
  - Why/how did you select your measure?
  - What measure did you use to operationalize the construct?
  - How did you quantify your measure?
  - Did you modify the scale? How and why?
  - Did you create a measure on the fly?

Takeaways:
- Expand your methods section in research papers
- Ask these questions before you design your research
- As research consumers, we can’t take results at face value
- Answering our episode question: how can we get better? Transparency is the starting point.

Resources:
- Link to the paper
- The Safety of Work Podcast
- The Safety of Work on LinkedIn
- Feedback@safetyofwork
Dec 4, 2022 • 1h 1min

Ep. 103 Should we be happy when our people speak out about safety?

In concert with the paper, we’ll focus on two major separate but related Boeing 737 MAX accidents:
- Lion Air Flight 610, October 2018: the plane took off from Jakarta and crashed 13 minutes later, with one of the highest death tolls ever for a 737 crash - 189 souls.
- Ethiopian Airlines Flight 302, March 2019: the plane took off from Addis Ababa and crashed minutes after takeoff, killing 157.

The paper’s abstract reads: Following other contributions about the MAX accidents to this journal, this paper explores the role of betrayal and moral injury in safety engineering related to the U.S. federal regulator’s role in approving the Boeing 737MAX—a plane involved in two crashes that together killed 346 people. It discusses the tension between humility and hubris when engineers are faced with complex systems that create ambiguity, uncertain judgements, and equivocal test results from unstructured situations. It considers the relationship between moral injury, principled outrage and rebuke when the technology ends up involved in disasters. It examines the corporate backdrop against which calls for enhanced employee voice are typically made, and argues that when engineers need to rely on various protections and moral inducements to ‘speak up,’ then the ethical essence of engineering—skepticism, testing, checking, and questioning—has already failed.

Discussion Points:
- Two separate but related air disasters
- The angle of attack sensor and MCAS (Maneuvering Characteristics Augmentation System) on the Boeing 737
- Criticality rankings
- The article: Joe Jacobsen, an engineer/whistleblower who came forward
- The claim that engineers need more moral courage/conviction and training in ethics
- Defining moral injury
- Engineers: the Challenger accident, the Hyatt collapse
- Disaster literacy - check out the old Disastercast podcast
- Humility and hubris
- Regulatory bodies and their issues
- Solutions and remedies
- Risk assessments
- Other examples outside of Boeing

Takeaways:
- Profit vs. risk, technical debt
- Don’t romanticize ethics
- Internal emails can be your downfall
- Rewards, accountability, incentives
- Look into the engineering resources
- Answering our episode question: in this paper, it’s a sign that things are bad.

Quotes:
- “When you develop a new system for an aircraft, one of the first safety things you do is you classify them according to their criticality.” - Drew
- “Just like we tend to blame accidents on human error, there’s a tendency to push ethics down to that front line.” - Drew
- “There’s this lasting psychological, biological, behavioral, social or even spiritual impact of either perpetrating, or failing to prevent, or bearing witness to, these acts that transgress our deeply held moral beliefs and expectations.” - David
- “Engineers are sort of taught to think in these binaries, instead of complex tradeoffs, particularly when it comes to ethics.” - Drew
- “Whenever you have this whistleblower protection, you’re admitting that whistleblowers are vulnerable.” - Drew
- “Engineers see themselves as belonging to a company, not to a profession, when they’re working.” - Drew

Resources:
- Link to the paper
- The Safety of Work Podcast
- The Safety of Work on LinkedIn
- Feedback@safetyofwork
Nov 15, 2022 • 42min

Ep. 102 What's the right strategy when we can't manage safety as well as we'd like to?

The paper’s abstract reads: Healthcare systems are under stress as never before. An aging population, increasing complexity and comorbidities, continual innovation, the ambition to allow unfettered access to care, and the demands on professionals contrast sharply with the limited capacity of healthcare systems and the realities of financial austerity. This tension inevitably brings new and potentially serious hazards for patients and means that the overall quality of care frequently falls short of the standard expected by both patients and professionals. The early ambition of achieving consistently safe and high-quality care for all has not been realised and patients continue to be placed at risk. In this paper, we ask what strategies we might adopt to protect patients when healthcare systems and organisations are under stress and simply cannot provide the standard of care they aspire to.

Discussion Points:
- Extrapolating from the healthcare focus to other businesses
- This paper was published pre-pandemic
- Adaptations during times of extreme stress or lack of resources: team responses will vary
- People under pressure adapt, and sometimes the new conditions become the new normal
- Guided adaptability to maintain safety
- Substandard care in French hospitals in the study
- Dynamic adjustment for times of crisis vs. long-term solutions
- Short-term adaptations can impede the development of long-term solutions
- Four basic principles in the paper: giving up hope of returning to normal; we can never eliminate all risks and threats; the principal focus should be on expected problems; management of risk requires engagement and action at all managerial levels
- Griffith University’s rules on asking for an extension: expected surprises
- Middle management liaising between frontlines and executives
- Managing operations in “degraded mode” and minimum equipment lists
- Absolute safety: we can’t aim for 100%, so we need to write in what “second best” covers

Takeaways:
- Most industries are facing more pressure today than in the past; focus on the current risks
- All industries have constant risks and tradeoffs; address them at each level
- Understand the pressures teams are facing, and which adaptations are acceptable in the short and long term
- For expected conditions and hazards, what does “second best” look like?
- Research is needed around “degraded operations”
- Answering our episode question: the wrong answer is to rely only on the highest standards, which may not be achievable in degraded operations

Quotes:
- “I think it’s a good reflection for professionals and organisations to say, ‘Oh, okay - what if the current state of stress is the new normal, or what if things become more stressed? Is what we’re doing now the right thing to be doing?’” - David
- “There is also the moral injury when people in a ‘caring’ profession can’t provide the standard of care that they believe to be the right standard.” - Drew
- “None of these authors share how often these improvised solutions have been successful or unsuccessful, and these short-term fixes often impede the development of longer-term solutions.” - David
- “We tend to set safety up almost as a standard of perfection that we don’t expect people to achieve all the time, but we expect those deviations to be rare and correctable.” - Drew

Resources:
- The Safety of Work Podcast
- The Safety of Work on LinkedIn
- Feedback@safetyofwork
Oct 30, 2022 • 1h 1min

Ep. 101 When should incidents cause us to question risk assessments?

The paper’s abstract reads: This paper reflects on the credibility of nuclear risk assessment in the wake of the 2011 Fukushima meltdown. In democratic states, policymaking around nuclear energy has long been premised on an understanding that experts can objectively and accurately calculate the probability of catastrophic accidents. Yet the Fukushima disaster lends credence to the substantial body of social science research that suggests such calculations are fundamentally unworkable. Nevertheless, the credibility of these assessments appears to have survived the disaster, just as it has resisted the evidence of previous nuclear accidents. This paper looks at why. It argues that public narratives of the Fukushima disaster invariably frame it in ways that allow risk-assessment experts to “disown” it. It concludes that although these narratives are both rhetorically compelling and highly consequential to the governance of nuclear power, they are not entirely credible.

Discussion Points:
- Following up on a topic from episode 100: nuclear safety and risk assessment
- The narrative around planes, trains, cars and nuclear: risks vs. safety
- Planning for disaster when you’ve promised there’s never going to be a nuclear disaster
- The 1975 WASH-1400 studies
- Japanese disasters over the last 100 years
- Four tenets of Downer’s paper:
  - The risk assessments themselves did not fail
  - Relevance Defense: the failure of one assessment is not relevant to the other assessments
  - Compliance Defense: the assessments were sound, but people did not behave the way they were supposed to / did not obey the rules
  - Redemption Defense: the assessments were flawed, but we fixed them
- Theories such as: Fukushima did happen, but it wasn’t an actual ‘accident/meltdown’; the plant basically withstood a tsunami when the country was flattened
- Residents of Fukushima were told the plant was ‘safe’
- The relevance defense, Chernobyl, and Three Mile Island
- Boeing disasters, their risk assessments, and blame
- At the time of Fukushima, Japanese regulation and engineering was regarded as superior
- This was not a Japanese reactor! It’s a U.S. design
- The compliance defense and human error
- The redemption defense: regulatory bodies taking all Fukushima elements into account
- Downer quotes Peanuts comics in the paper: lessons - Lucy can’t be trusted!
- This paper is not about what’s wrong with risk assessments; it’s about how we defend what we do

Takeaways:
- Uncertainty is always present in risk assessments
- You can never identify all failure modes
- Three things are always missing: anticipating mistakes, anticipating how complex technology is always changing, and anticipating all of the little plastic connectors that can break
- Be wary of assumptions; check all the what-if scenarios
- Just because a regulator declares something safe doesn’t mean it is
- Answering our episode question: you must question risk assessments CONSTANTLY

Quotes:
- “It’s a little bit surprising we don’t scrutinize the ‘control’ every time it fails.” - Drew
- “In the case of nuclear power, we’re in this awkward situation where, in order to prepare emergency plans, we have to contradict ourselves.” - Drew
- “If systems have got billions of potential ’billion to one’ accidents then it’s only expected that we’re going to see accidents from time to time.” - David
- “As the world gets more and more complex, then our parameters for these assessments need to become equally as complex.” - David
- “The mistakes that people make in these [risk assessments] are really quite consistent.” - Drew

Resources:
- Disowning Fukushima paper by John Downer
- WASH-1400 Studies
- The Safety of Work Podcast
- The Safety of Work on LinkedIn
- Feedback@safetyofwork
Oct 9, 2022 • 1h 3min

Ep. 100 Can major accidents be prevented?

This episode explores Charles B. Perrow’s argument about the inevitability of catastrophic accidents in complex systems. Key points include how failures combine in unpredictable ways, perceptions of bias against nuclear power, and the importance of operator response in disasters. Perrow’s theory predicts that multiple failures will combine into a ‘perfect storm’ that is hard to prevent, highlighting the limitations of better technology in averting major accidents.
