
The Safety of Work

Latest episodes

Feb 5, 2023 • 44min

Ep. 105 How can organisations learn faster?

You'll hear a little about Schein's early career at Harvard and MIT, including his Ph.D. work – a paper on the experience of POWs during wartime contrasted against the indoctrination of individuals joining an organization for employment. You'll also hear about some of Schein's 30-year-old concepts that are now common practice and theory in organizations, such as "psychological safety".

Discussion Points:
- A brief overview of Schein's career at Harvard and MIT's School of Management, and his fascinating Ph.D. on POWs during the Korean War
- A bit about the book, Humble Inquiry
- Digging into the paper
- Three types of learning:
  - Knowledge acquisition and insight learning
  - Habits and skills
  - Emotional conditioning and learned anxiety
- Practical examples and the metaphor of Pavlov's dog
- Countering Anxiety I with Anxiety II
- Three processes of 'unfreezing' an organization or individual to change:
  - Disconfirmation
  - Creation of guilt or anxiety
  - Psychological safety
- Mistakes in organizations and how they respond
- There are so many useful nuggets in this paper
- Schein's solutions: steering committees/change teams/groups to lead the organizations and manage each other's anxiety

Takeaways:
- How an organization deals with mistakes will determine how change happens
- Assessing levels of fear and anxiety
- Know what stands in your way if you want progress
- Answering our episode question: How can organizations learn faster? 1) Don't make people afraid to enter the green room. 2) Or make them more afraid to stand on the black platform.

Quotes:
"...a lot of people credit [Schein] with being the granddaddy of organizational culture." - Drew
"[Schein] says... in order to learn skills, you've got to be willing to be temporarily incompetent, which is great if you're learning soccer and not so good if you're learning to run a nuclear power plant." - Drew
"Schein says quite clearly that punishment is very effective in eliminating certain kinds of behavior, but it's also very effective in inducing anxiety when in the presence of the person or the environment that taught you that lesson." - Drew
"We've said before that we think sometimes in safety, we're about three or four decades behind some of the other fields, and this might be another example of that." - David
"Though curiosity and innovation are values that are praised in our society, within organizations and particularly large organizations, they're not actually rewarded." - Drew

Resources:
Link to the paper
Humble Inquiry by Edgar Schein
The Safety of Work Podcast
The Safety of Work on LinkedIn
Feedback@safetyofwork
Jan 22, 2023 • 46min

Ep. 104 How can we get better at using measurement?

You'll hear some dismaying statistics around the validity of research papers in general, some comments regarding the peer review process, and then we'll dissect each of six questions that should be asked BEFORE you design your research.

The paper's abstract reads:
In this article, we define questionable measurement practices (QMPs) as decisions researchers make that raise doubts about the validity of the measures, and ultimately the validity of study conclusions. Doubts arise for a host of reasons, including a lack of transparency, ignorance, negligence, or misrepresentation of the evidence. We describe the scope of the problem and focus on how transparency is a part of the solution. A lack of measurement transparency makes it impossible to evaluate potential threats to internal, external, statistical-conclusion, and construct validity. We demonstrate that psychology is plagued by a measurement schmeasurement attitude: QMPs are common, hide a stunning source of researcher degrees of freedom, and pose a serious threat to cumulative psychological science, but are largely ignored. We address these challenges by providing a set of questions that researchers and consumers of scientific research can consider to identify and avoid QMPs. Transparent answers to these measurement questions promote rigorous research, allow for thorough evaluations of a study's inferences, and are necessary for meaningful replication studies.

Discussion Points:
- The appeal of the foundational question, "are we measuring what we think we're measuring?"
- Citations of studies - 40-93% of studies lack evidence that the measurement is valid
- Psychological research and its lack of defining what measures are used, the validity of their measurement, etc.
- The peer review process - it helps, but can't stop bad research being published
- Why care about this issue? Lack of validity - the research answer may be the opposite
- Designing research - like choosing different paths through a garden
- The six main questions to avoid questionable measurement practices (QMPs):
  1. What is your construct?
  2. Why/how did you select your measure?
  3. What measure did you use to operationalize the construct?
  4. How did you quantify your measure?
  5. Did you modify the scale? How and why?
  6. Did you create a measure on the fly?

Takeaways:
- Expand your methods section in research papers
- Ask these questions before you design your research
- As research consumers, we can't take results at face value
- Answering our episode question: How can we get better? Transparency is the starting point.

Resources:
Link to the paper
The Safety of Work Podcast
The Safety of Work on LinkedIn
Feedback@safetyofwork
Dec 4, 2022 • 1h 1min

Ep. 103 Should we be happy when our people speak out about safety?

In concert with the paper, we'll focus on two major separate but related Boeing 737 MAX accidents:
- Lion Air Flight 610, October 2018 - The plane took off from Jakarta and crashed 13 minutes later, with one of the highest death tolls ever for a 737 crash - 189 souls.
- Ethiopian Airlines Flight 302, March 2019 - The plane took off from Addis Ababa and crashed minutes after takeoff, killing 157.

The paper's abstract reads:
Following other contributions about the MAX accidents to this journal, this paper explores the role of betrayal and moral injury in safety engineering related to the U.S. federal regulator's role in approving the Boeing 737MAX—a plane involved in two crashes that together killed 346 people. It discusses the tension between humility and hubris when engineers are faced with complex systems that create ambiguity, uncertain judgements, and equivocal test results from unstructured situations. It considers the relationship between moral injury, principled outrage and rebuke when the technology ends up involved in disasters. It examines the corporate backdrop against which calls for enhanced employee voice are typically made, and argues that when engineers need to rely on various protections and moral inducements to 'speak up,' then the ethical essence of engineering—skepticism, testing, checking, and questioning—has already failed.

Discussion Points:
- Two separate but related air disasters
- The angle-of-attack sensor and MCAS (Maneuvering Characteristics Augmentation System) on the Boeing 737
- Criticality rankings
- The article - Joe Jacobsen, an engineer/whistleblower who came forward
- The claim is that engineers need more moral courage/convictions and training in ethics
- Defining moral injury
- Engineers - the Challenger accident, the Hyatt collapse
- Disaster literacy – check out the old Disastercast podcast
- Humility and hubris
- Regulatory bodies and their issues
- Solutions and remedies
- Risk assessments
- Other examples outside of Boeing

Takeaways:
- Profit vs. risk, technical debt
- Don't romanticize ethics
- Internal emails can be your downfall
- Rewards, accountability, incentives
- Look into the engineering resources
- Answering our episode question: In this paper, it's a sign that things are bad.

Quotes:
"When you develop a new system for an aircraft, one of the first safety things you do is you classify them according to their criticality." - Drew
"Just like we tend to blame accidents on human error, there's a tendency to push ethics down to that front line." - Drew
"There's this lasting psychological/biological behavioral, social or even spiritual impact of either perpetrating, or failing to prevent, or bearing witness to, these acts that transgress our deeply held moral beliefs and expectations." - David
"Engineers are sort of taught to think in these binaries, instead of complex tradeoffs, particularly when it comes to ethics." - Drew
"Whenever you have this whistleblower protection, you're admitting that whistleblowers are vulnerable." - Drew
"Engineers see themselves as belonging to a company, not to a profession, when they're working." - Drew

Resources:
Link to the paper
The Safety of Work Podcast
The Safety of Work on LinkedIn
Feedback@safetyofwork
Nov 15, 2022 • 42min

Ep. 102 What's the right strategy when we can't manage safety as well as we'd like to?

The paper's abstract reads:
Healthcare systems are under stress as never before. An aging population, increasing complexity and comorbidities, continual innovation, the ambition to allow unfettered access to care, and the demands on professionals contrast sharply with the limited capacity of healthcare systems and the realities of financial austerity. This tension inevitably brings new and potentially serious hazards for patients and means that the overall quality of care frequently falls short of the standard expected by both patients and professionals. The early ambition of achieving consistently safe and high-quality care for all has not been realised and patients continue to be placed at risk. In this paper, we ask what strategies we might adopt to protect patients when healthcare systems and organisations are under stress and simply cannot provide the standard of care they aspire to.

Discussion Points:
- Extrapolating out from the healthcare focus to other businesses
- This paper was published pre-pandemic
- Adaptations during times of extreme stress or lack of resources - team responses will vary
- People under pressure adapt, and sometimes the new conditions become the new normal
- Guided adaptability to maintain safety
- Substandard care in French hospitals in the study
- The dynamic adjustment for times of crisis vs. long-term solutions
- Short-term adaptations can impede development of long-term solutions
- Four basic principles in the paper:
  - Giving up hope of returning to normal
  - We can never eliminate all risks and threats
  - The principal focus should be on expected problems
  - Management of risk requires engagement and action at all managerial levels
- Griffith University's rules on asking for an extension…expected surprises
- Middle management liaising between frontlines and executives
- Managing operations in "degraded mode" and minimum equipment lists
- Absolute safety - we can't aim for 100% - we need to write in what "second best" covers

Takeaways:
- Most industries are facing more pressure today than in the past; focus on the current risks
- All industries have constant risks and tradeoffs - how to address them at each level
- Understand how pressures are being faced by teams - what adaptations are acceptable for the short and long term?
- For expected conditions and hazards, what does "second best" look like?
- Research is needed around "degraded operations"
- Answering our episode question: The wrong answer is to rely only on the highest standards, which may not be achievable in degraded operations

Quotes:
"I think it's a good reflection for professionals and organisations to say, 'Oh, okay - what if the current state of stress is the new normal, or what if things become more stressed? Is what we're doing now the right thing to be doing?'" - David
"There is also the moral injury when people who are in a 'caring' profession can't provide the standard of care that they believe to be the right standard." - Drew
"None of these authors share how often these improvised solutions have been successful or unsuccessful, and these short-term fixes often impede the development of longer-term solutions." - David
"We tend to set safety up almost as a standard of perfection that we don't expect people to achieve all the time, but we expect those deviations to be rare and correctable." - Drew

Resources:
The Safety of Work Podcast
The Safety of Work on LinkedIn
Feedback@safetyofwork
Oct 30, 2022 • 1h 1min

Ep. 101 When should incidents cause us to question risk assessments?

The paper's abstract reads:
This paper reflects on the credibility of nuclear risk assessment in the wake of the 2011 Fukushima meltdown. In democratic states, policymaking around nuclear energy has long been premised on an understanding that experts can objectively and accurately calculate the probability of catastrophic accidents. Yet the Fukushima disaster lends credence to the substantial body of social science research that suggests such calculations are fundamentally unworkable. Nevertheless, the credibility of these assessments appears to have survived the disaster, just as it has resisted the evidence of previous nuclear accidents. This paper looks at why. It argues that public narratives of the Fukushima disaster invariably frame it in ways that allow risk-assessment experts to "disown" it. It concludes that although these narratives are both rhetorically compelling and highly consequential to the governance of nuclear power, they are not entirely credible.

Discussion Points:
- Following up on a topic in episode 100 - nuclear safety and risk assessment
- The narrative around planes, trains, cars and nuclear - risks vs. safety
- Planning for disaster when you've promised there's never going to be a nuclear disaster
- The 1975 WASH-1400 studies
- Japanese disasters in the last 100 years
- Four tenets of Downer's paper:
  - The risk assessments themselves did not fail
  - Relevance Defense: The failure of one assessment is not relevant to the other assessments
  - Compliance Defense: The assessments were sound, but people did not behave the way they were supposed to/did not obey the rules
  - Redemption Defense: The assessments were flawed, but we fixed them
- Theories such as: Fukushima did happen - but not an actual 'accident/meltdown' - it basically withstood a tsunami when the country was flattened
- Residents of Fukushima - they were told the plant was 'safe'
- The relevance defense, Chernobyl, and Three Mile Island
- Boeing disasters, their risk assessments, and blame
- At the time of Fukushima, Japanese regulation and engineering was regarded as superior
- This was not a Japanese reactor! It's a U.S. design
- The compliance defense, human error
- The redemption defense, regulatory bodies taking all Fukushima elements into account
- Downer quotes Peanuts comics in the paper - lessons - Lucy can't be trusted!
- This paper is not about what's wrong with risk assessments - it's about how we defend what we do

Takeaways:
- Uncertainty is always present in risk assessments
- You can never identify all failure modes
- Three things are always missing: anticipating mistakes, anticipating how complex tech is always changing, anticipating all of the little plastic connectors that can break
- Assumptions - be wary, check all the what-if scenarios
- Just because a regulator declares something safe, doesn't mean it is
- Answering our episode question: You must question risk assessments CONSTANTLY

Quotes:
"It's a little bit surprising we don't scrutinize the 'control' every time it fails." - Drew
"In the case of nuclear power, we're in this awkward situation where, in order to prepare emergency plans, we have to contradict ourselves." - Drew
"If systems have got billions of potential 'billion to one' accidents then it's only expected that we're going to see accidents from time to time." - David
"As the world gets more and more complex, then our parameters for these assessments need to become equally as complex." - David
"The mistakes that people make in these [risk assessments] are really quite consistent." - Drew

Resources:
Disowning Fukushima Paper by John Downer
WASH-1400 Studies
The Safety of Work Podcast
The Safety of Work on LinkedIn
Feedback@safetyofwork
Oct 9, 2022 • 1h 3min

Ep. 100 Can major accidents be prevented?

This episode examines Charles B. Perrow's argument about the inevitability of catastrophic accidents in complex systems. Key points include how failures combine in unpredictable ways, Perrow's bias against nuclear power, and the importance of operator response in disasters. Perrow's theory predicts multiple interacting failures producing a 'perfect storm' that is hard to prevent, highlighting the limitations of better technology in averting major accidents.
Sep 18, 2022 • 48min

Ep.99 When is dropping tools the right thing to do for safety?

The paper's abstract reads:
The failure of 27 wildland firefighters to follow orders to drop their heavy tools so they could move faster and outrun an exploding fire led to their death within sight of safe areas. Possible explanations for this puzzling behavior are developed using guidelines proposed by James D. Thompson, the first editor of the Administrative Science Quarterly. These explanations are then used to show that scholars of organizations are in analogous threatened positions, and they too seem to be keeping their heavy tools and falling behind. ASQ's 40th anniversary provides a pretext to reexamine this potentially dysfunctional tendency and to modify it by reaffirming an updated version of Thompson's original guidelines.

The Mann Gulch fire was a wildfire in Montana where 15 smokejumpers approached the fire to begin fighting it, and unexpected high winds caused the fire to suddenly expand. This "blow-up" of the fire covered 3,000 acres (1,200 ha) in ten minutes, claiming the lives of 13 firefighters, including 12 of the smokejumpers. Only three of the smokejumpers survived. The South Canyon Fire was a 1994 wildfire that took the lives of 14 wildland firefighters on Storm King Mountain, near Glenwood Springs, Colorado, on July 6, 1994. It is often also referred to as the "Storm King" fire.

Discussion Points:
- Some details of the Mann Gulch fire deaths due to refusal to drop their tools
- Weick lays out ten reasons why these firefighters may have refused to drop their tools:
  1. Couldn't hear the order
  2. Lack of explanation for the order - unusual, counterintuitive
  3. You don't trust the leader
  4. Control - if you lose your tools, you lose capability; you're not a firefighter
  5. Skill at dropping tools - i.e. the survivor who leaned a shovel against a tree instead of dropping it
  6. Skill with the replacement activity - it's an unfamiliar situation
  7. Failure - to drop your tools, as a firefighter, is to fail
  8. Social dynamics - why would I do it if others are not
  9. Consequences - if people believe it won't make a difference, they won't drop; these men should have been shown the difference it would make
  10. Identity - being a firefighter; without tools they are throwing away their identity. This was also shortly after WWII, where you are a coward if you throw away your weapons, and would be alienated from your group
- Thompson had four principles necessary for research in his publication:
  1. Administrative science should focus on relationships - you can't understand without structures and people and variables
  2. Abstract concepts - not single concrete ideas, but theories that apply to the field
  3. Development of operational definitions that bridge concepts and raw experience - not vague fluffy things with confirmation bias - sadly, we still don't have all the definitions today
  4. Value of the problem - what do they mean? What is the service researchers are trying to provide?
- How Weick applies these principles to the ten reasons, then looks at what it means for researchers
- Weick's list of ten - they are multiple, interdependent reasons; they can all be true at the same time
- Thompson's list of four, relating them to Weick's ten, in today's organizations
- What are the heavy tools that we should get rid of? Weick links the heaviest tools with identity
- Drew's thought - getting rid of risk assessments would let us move faster, but people won't drop them, relating to the ten reasons above

Takeaways:
1) Emotional vs. cognitive reasons - cognitive (did I hear that, do I know what to do) and emotional (trust, failure, etc.) - in individuals and teams
2) Understanding group dynamics - it often takes a first person to act before others follow: the pilot diversion story, the Piper Alpha oil rig jumpers, the first firefighter who drops tools

Next week is episode 100 - we've got a plan!

Quotes:
"Our attachment to our tools is not a simple, rational thing." - Drew
"It's really hard to recognize that you're well past that point where success is not an option at all." - Drew
"These firefighters were several years since they'd been in a really raging, high-risk fire situation…" - David
"I encourage anyone to read Weick's papers, they're always well-written." - David
"Well, I think according to Weick, the moment you begin to think that dropping your tools is impossible and unthinkable, that might be the moment you actually have to start wondering why you're not dropping your tools." - Drew
"The heavier the tool is, the harder it is to drop." - Drew

Resources:
Karl Weick - Drop Your Tools Paper
The Safety of Work Podcast
The Safety of Work on LinkedIn
Feedback@safetyofwork
Sep 4, 2022 • 59min

Ep.98 What can we learn from the Harwood experiments?

In 1939, Alfred Marrow, the managing director of the Harwood Manufacturing Corporation factory in Virginia, invited Kurt Lewin (a German-American psychologist, known as one of the modern pioneers of social, organizational, and applied psychology in the U.S.) to come to the textile factory to discuss significant problems with productivity and turnover of employees. The Harwood study is considered the first experiment of group decision-making and self-management in industry and the first example of applied organizational psychology. The Harwood Experiment was part of Lewin's continuing exploration of participatory action research.

In this episode David and Drew discuss the main areas covered by this research:
- Group decision-making
- Self-management
- Leadership training
- Changing people's thoughts about stereotypes
- Overcoming resistance to change

It turns out that yes, Lewin identified many areas of the work environment that could be improved and changed with the participation of management and members of the workforce communicating with each other about their needs and wants. This was novel stuff in 1939, but proved to be extremely insightful, and organizations now utilize many of this experiment's tenets 80 years later.

Discussion Points:
- Similarities in this study compared to the Chicago Western Electric "Hawthorne experiments"
- Organizational science – Lewin's approach
- How Lewin came to be invited to the Virginia factory and the problems they needed to solve
- Autocratic vs. democratic - studies of school children's performance
- The setup of the experiment - 30-minute discussions several times a week with four cohorts
- The criticisms and nitpicks around the study participants
- Group decision making
- Self-management and field theory
- Harwood leaders were appointed for technical knowledge, not people skills
- The experiment held "clinics" where leaders could bring up their issues to discuss
- Changing stereotypes - the factory refused to hire women over 30, but experimented by hiring a group for this study
- Presenting data does not work to change beliefs, but stories and discussions do
- Resistance to change - changing workers' tasks without consulting them on the changes created bitterness and lack of confidence
- The illusion of choice lowers resistance
- The four cohorts:
  - The control group received changes as they normally would - just 'being told'
  - A second group received more detail about the changes, with members asked to represent the group with management
  - Groups C and D participated in voting for the changes; their productivity was the only one that increased - by 15%
- This was an atypical factory/workforce to begin with, one that already had a somewhat participatory approach

Takeaways:
- Involvement in the discussion of change vs. no involvement
- Self-management - setting own goals
- Leadership needs more than technical competence
- Stereotypes - give people space to express views; they may join the group majority in voting the other way
- Resistance to change - if people can contribute and participate, confidence is increased
- Focus on group modifications, not individuals
- More collaborative, less autocratic
- Doing this kind of research is not that difficult; you don't need university-trained researchers, just people with a good mind for research ideas and methods

Quotes:
"The experiments themselves were a series of applied research studies done in a single manufacturing facility in the U.S., starting in 1939." - David
"Lewin's principle for these studies was…'no research without action, and no action without research,' and that's where the idea of action research came from…each study is going to lead to a change in the plant." - Drew
"It became clear that the same job was done very differently by different people." - David
"This is just a lesson we need to learn over and over and over again in our organizations, which is that you don't get very far by telling your workers what to do without listening to them." - Drew
"With 80 years of hindsight it's really hard to untangle the different explanations for what was actually going on here." - Drew
"Their theory was that when you include workers in the design of new methods…it increases their confidence…it works by making them feel like they're experts…they feel more confident in the change." - Drew

Resources:
The Practical Theorist: Life and Work of Kurt Lewin by Alfred Marrow
The Safety of Work Podcast
The Safety of Work on LinkedIn
Feedback@safetyofwork
Aug 21, 2022 • 53min

Episode 97: Should we link safety performance to bonus pay?

This was very in-depth research within a single organization, and the survey questions it used were well-structured. With 48 interviews to pull from, it definitely generated enough solid data to inform the paper's results and make it a valuable study.

We'll be discussing the pros and cons of linking safety performance to monetary bonuses, which can often lead to misreporting, recategorizing, or other "perverse" behaviors regarding safety reporting and metrics, in order to capture that year-end dollar amount, especially among mid-level and senior management.

Discussion Points:
- Do these bonuses work as intended?
- Oftentimes profit sharing within a company only targets senior management teams, at the expense of the front-line employees
- If safety and other measures are tied monetarily to bonuses, organizations need to spend more than a few minutes determining what is being measured
- Bonuses – do they really support safety? They don't prevent accidents
- "What gets measured gets managed" OR "What gets measured gets manipulated"
- Supervisors and front-line survey respondents did not understand how metrics were used for bonuses
- 87% replied that the safety measures had limited or negative effect
- Nearly half said the bonus structure tied to safety showed that the organization felt safety was a priority
- Nothing negative was recorded by the respondents in senior management - did they believe this is a useful tool?
- Most organizations have only 5% or less of performance pay tied to safety
- David keeps giving examples in the hopes that Drew will agree that at least one of them is a good idea
- Drew has "too much faith in humanity" around reporting and measuring safety in these organizations
- Try this type of survey in your own organization and see what you find

Quotes:
"I'm really mixed, because I sort of agree on principle, but I disagree on any practical form." - Drew
"I think there's a challenge between the ideals here and the practicalities." - David
"I think sometimes we can really put pretty high stakes on pretty poorly thought out things; we oversimplify what we're going to measure and reward." - Drew
"If you look at the general literature on performance bonuses, you see that they cause trouble across the board…they don't achieve their purposes…they cause senior executives to do behaviors that are quite perverse." - Drew
"I don't like the way they've written up the analysis. I think that there's some lost opportunity due to a misguided desire to be too statistically methodical about something that doesn't lend itself to the statistical analysis." - Drew
"If you are rewarding anything, then my view is that you've got to have safety alongside that if you want to signal an importance there." - David

Resources:
Link to the Paper
The Safety of Work Podcast
The Safety of Work on LinkedIn
Feedback@safetyofwork
Jul 31, 2022 • 1h 1min

Episode 96: Why should we be cautious about too much clarity?

Just because concepts, theories, and opinions are useful and make people feel comfortable, doesn't mean they are correct. No one so far has come up with an answer in the field of safety that proves, "this is the way we should do it," and in the work of safety, we must constantly evaluate and update our practices, rules, and recommendations. This of course means we can never feel completely comfortable – and humans don't like that feeling. We'll dig into why we should be careful about feeling a sense of "clarity" and mental ease when we think that we understand things completely – because what happens if someone is deliberately making us feel that a problem is "solved"...?

The paper we're discussing deals with a number of interesting psychological constructs and theories. The abstract reads:
The feeling of clarity can be dangerously seductive. It is the feeling associated with understanding things. And we use that feeling, in the rough-and-tumble of daily life, as a signal that we have investigated a matter sufficiently. The sense of clarity functions as a thought-terminating heuristic. In that case, our use of clarity creates significant cognitive vulnerability, which hostile forces can try to exploit. If an epistemic manipulator can imbue a belief system with an exaggerated sense of clarity, then they can induce us to terminate our inquiries too early — before we spot the flaws in the system. How might the sense of clarity be faked? Let's first consider the object of imitation: genuine understanding. Genuine understanding grants cognitive facility. When we understand something, we categorize its aspects more easily; we see more connections between its disparate elements; we can generate new explanations; and we can communicate our understanding. In order to encourage us to accept a system of thought, then, an epistemic manipulator will want the system to provide its users with an exaggerated sensation of cognitive facility. The system should provide its users with the feeling that they can easily and powerfully create categorizations, generate explanations, and communicate their understanding. And manipulators have a significant advantage in imbuing their systems with a pleasurable sense of clarity, since they are freed from the burdens of accuracy and reliability. I offer two case studies of seductively clear systems: conspiracy theories; and the standardized, quantified value systems of bureaucracies.

Discussion Points:
- This has been our longest break from the podcast
- David traveled to the US
- Uncertainty can make us risk-averse
- Organizations strive for more certainty in the workplace
- Scimago for evaluating research papers
- A well-written paper, but not peer-evaluated by psychologists
- Focus on conspiracy theories and bureaucracy
- The Studio C comedy sketch - bank robbers meet a philosopher
- Academic evaluations - white men vs. minorities/women
- Puzzles and pleasure spikes
- Clarity as a thought terminator
- Epistemic intimidation and epistemic seduction
- Cognitive fluency, insight, and cognitive facility
- Although fascinating, there is no evidence to support the paper's claims
- Echo chambers and thought bubbles
- Rush Limbaugh and Fox News - buying into the belief system
- Numbers, graphs, charts, grades, tables – all make us feel comfort and control

Takeaways:
- Just because it's useful, doesn't mean it's correct
- The world is not supposed to make sense; it's important to live with some cognitive discomfort
- Be cautious about feeling safe and comfortable
- Constant evaluation of safety practices must be the norm

Resources:
Link to the Paper
The Safety of Work Podcast
The Safety of Work on LinkedIn
Feedback@safetyofwork
