Part I:
Ready, Aim, Judge
As a new patient rolls past the nurses’ station and into the intensive care unit, the transportation crew drops a thick packet of paper onto the counter with a sharp slap. The physician on duty eyeballs the new arrival while critical care nurses swap out monitor leads, check IVs, and document the patient’s every scar and blemish on an endless flow sheet. Seeing no urgent concerns, the physician turns his attention to the mound of paper that taunts him from the counter.
He sits at his workstation and flips through the packet like it’s the world’s longest restaurant menu, and he has no appetite. Every few seconds he pauses, rolls his eyes, and scoffs like he is trying to expel a nonexistent hairball. He finally pushes the stack of paper away, having reached his conclusion: those fools at the transferring hospital sent us a mess.
“Thank God this poor guy is now at a hospital with real doctors,” he says, and those within earshot cluck their agreement. After all, anyone looking over the patient’s weeks-long hospital course can see the mistakes. And those errors would never have occurred if the patient were in this ICU, right?
In healthcare, we often subject our colleagues to these ad hoc trials. After all, retrospection is the foundation of our training. We take a patient’s history; we piece together clues from the past to illuminate the present. And when we aim that laser focus at a patient who has already received medical care, we inevitably unearth the flaws of our colleagues.
But is our interpretation of those prior actions an accurate assessment of the past? Or is it a cognitive trap, an illusion created by a minefield of biases?
And if hindsight itself is flawed, what undue damage are we causing in the name of a faulty principle? Have we created an environment of punitive collaboration? Have we opened the door to professional mistrust and toxic communication? Or something even worse?
Let’s look at two factors that set up the retrospective firing squad we so often convene in medicine: a chronic overconfidence in our own predictive abilities and a cause-and-effect mentality that lends itself to high levels of hindsight bias.
It’s not me, it’s you…
Any time we look back at a decision and think, “Wow, that person is an idiot,” we are really saying, “I would have known better.” Of course, in order to think this, we first must make an assumption about the difficulty of the decision we are judging.
So, how good are we at predicting the outcomes of clinical events? And how much does our confidence in our predictions correlate with their accuracy?
It turns out: not very well at all, on either count.
In a 1993 study, researchers asked a group of physicians to predict hemodynamic measurements (pulmonary capillary wedge pressure, cardiac index, systemic vascular resistance) in real patients prior to right heart catheterization. The physicians also rated their confidence in each guess before learning the real values.
Overall, the physicians did a poor job of predicting the right answers but, more importantly, their level of confidence in their guesses had no correlation with their accuracy—they felt just as good about their correct guesses as their incorrect ones. Experienced physicians were no better at estimating values than their rookie colleagues, but the experienced doctors were, of course, more confident in their predictions.
A 2013 study in JAMA found similar results when it presented physicians with clinical vignettes and asked them for a diagnosis along with a rating of their confidence in that diagnosis. As you might expect, diagnostic accuracy was much lower for the more difficult clinical cases. But the physicians’ confidence was uniform regardless of case difficulty; they felt just as confident when they were wrong as when they were right.
Nurses seem to face the same problem. In one study, nurses and nursing students were presented with real clinical vignettes and asked to predict the likelihood of a dangerous event in each situation. As in our first example, the experienced nurses reported much higher confidence in their answers, but they were no better at predicting adverse events than their inexperienced counterparts.
This overconfidence across the spectrum of healthcare workers—especially experienced ones—marks a poor starting point from which to judge the behavior of others. It’s the first misstep in the logic trap of retrospective judgment.
But it isn’t the only mental trickery at play.
Now that you mention it…
Have you ever watched a movie with a friend and heard him say something like, “Man, that smudge on the screen is really bothering me”? Even if you had never noticed it before, the imperfection your friend pointed out is now impossible to ignore. No matter how you try to tell yourself you don’t see it, you can’t scrub the knowledge from your brain. It might ruin the entire film.
Knowledge of the present does the same thing when we try to evaluate events in hindsight. It is impossible to know how a situation looked at the time because we can’t scrub the present from our minds.
What does that look like in a clinical context?
A 2010 study tested this concept in a group of radiologists. The physicians were shown brain CT scans of a group of patients who presented with ischemic stroke. The radiologists were told to interpret the images without a medical history or any subsequent diagnostic studies. They were specifically asked to estimate the chances an acute stroke was present and, if so, to identify the location of the stroke.
After a washout period, the studies were shuffled and shown to the same radiologists again. This time, however, they were told the presenting symptoms and given a follow-up MRI to read. How do you think they did? The radiologists were, of course, far better at finding strokes on CT scans when they had additional information and knew the outcome of a follow-up MRI.
But before you dismiss that as an obvious outcome, ask yourself how this scenario is any different from a clinician comparing the judgment of a colleague to his own judgment an hour, or a day, or a week later. The passage of time inevitably yields additional information that, in turn, increases the likelihood of a “good decision.”
Combine this advantage with the fact that we chronically overestimate our own predictive ability, and we have created a playing field that, in retrospect, could not possibly do justice to our colleagues of the past.
But is there yet another cognitive effect tilting the scales against our judgments of yesterday? And does the psychology behind it call into question one of our oldest institutions?
Part II:
Trials and Manipulations
If you asked a room full of clinicians to write down all of their fears, you would probably find an odd and variable assortment of horrors. But on nearly every list you would find some consequence of being wrong: disdain from colleagues, harm to a patient, or litigation in a malpractice lawsuit. These are the lurking specters that fill healthcare nightmares.
The fear of punitive consequences doesn’t just affect our actions; it has spawned an entire medical malpractice industry. But is the very concept of these judicial systems based on a fallacy?
In Part I of “Hindsight is Blind,” we discussed the concept of retrospective judgment in medicine. We showed how it is fueled by a chronic overconfidence in our (largely inaccurate) predictive ability and a lack of appreciation for how new information enhances our decision-making. These factors often (mis)guide us to the determination that our colleagues made a mistake, one that we would never make ourselves.
But even after we have reached this (often flawed) conclusion, cognitive pitfalls continue to hound us.
But did you die?
Imagine you are watching the same movie with your friend from Part I. Having forgiven him for the maddening “smudge” comment, you’re enjoying an action-packed cinematic adventure. But just as the film is about to reach its epic conclusion, your friend stands up and unplugs the television. The screen zaps to black. Adventure over.
After your inevitable popcorn-throwing tirade, think back to the actions of the film’s protagonist. Were they righteous and rational? Were they the logical steps in the context of the situation?
Without seeing the outcome, you might be hesitant to judge the decision to jump a muscle car off an impromptu ramp on a busy city street. Or to give up a promotion to pursue a love interest. Or to charge headlong into a gang leader’s lair.
Knowing if the plan was a spectacular success or a blood-spattered failure might change how you feel about it. And this tendency to let the final result determine how we judge a decision has a name: outcome bias.
So how does this translate from the silver screen to the hospital wards? I bet you guessed we have a study for that.
A 1991 experiment presented a group of 112 practicing anesthesiologists with two sets of clinical cases. The sets were identical except for the outcome: some cases, the anesthesiologists were told, resulted in a temporary injury to the patient while the others resulted in a permanent injury. With the outcome in mind, the anesthesiologists were asked to evaluate the appropriateness of the care each patient received.
For the same clinical cases, anesthesiologists stated the care was appropriate much more often when they thought the clinical outcome was only a temporary injury. The opposite was also true; the physicians were much less likely to approve of care when they thought it had resulted in a permanent injury. Since the actual care itself never differed in these situations, it seems knowledge of the end result played a heavy role in how physicians felt about their colleagues’ care.
To not err is human
This sliding scale of judging medical error seemed to rub Dr. Robert McNutt the wrong way. McNutt is an oncologist and the former Associate Director of Medical Informatics and Patient Safety Research at Rush University. Back in 2005, he decided to examine cases from something called the WebM&M, an anonymous, online morbidity and mortality conference presented by the Agency for Healthcare Research and Quality (AHRQ).
Unlike prior reviewers of these cases, however, McNutt and his team blinded themselves to both the diagnosis and the patient’s outcome.
“Cases always look different after an adverse event has occurred,” he said in an interview with Today’s Hospitalist. “All the criteria for defining errors and mistakes in medicine are hindsight-biased.”
His results resembled those of our outcome-focused anesthesiologists: for nearly all of the cases, removing the outcome (and the predetermination of error) led the evaluating physicians to either agree with the care performed or to determine no medical error had actually occurred. His work led him to the conclusion that the quest to root out medical errors itself may have been misguided.
“Applying these terms (like ‘mistake’ or ‘error’) is based on judgment,” he said in a follow-up commentary. “And we believe that these judgments are often flawed.”
This whole court is out of order!
We’ve already explored the cornucopia of cognitive biases that affect retrospective judgment at every turn. But far more variables and biases come into play when we throw these concepts into another setting: the courtroom.
The mere presence of a defendant in a courtroom implies a negative outcome and establishes an outcome bias. After all of the facts are divulged in the discovery process, everyone is like those radiologists with their MRI results in hand: hindsight-biased. And since most lawyers, judges, witnesses, and juries are human, they start from the same misguided overconfidence, overestimating their own ability to have seen the outcome coming.
Now throw in a commissioning bias as paid witnesses subconsciously attempt to support the side that hired them. Then add all of the implicit biases that affect how we feel about other people: their occupation, race, gender, age, and appearance.
The sum of these effects suggests that the social construct of medical negligence litigation is, at best, founded on the shakiest of psychological ground and, at worst, akin to judicial voodoo.
There are, of course, ways to buffer the effects of retrospective biases. The first, and perhaps most important, is the widespread acknowledgement that they exist. Only then can we hope to protect the past from the mistakes of the present.