A look behind headlines about medical errors as 3rd leading cause of death

[Chart: Google Trends search interest in “medical error.” Data source: Google Trends]

The recent spate of stories calling medical errors the third leading U.S. cause of death — triggered by an article from Johns Hopkins researchers — shows both the power and challenges of tracking patient safety. Our research director, Paul Karner, explains why such estimates vary widely — and may still understate the true dimensions of the problem.

In a recent article published in the journal The BMJ, Dr. Martin Makary and Michael Daniel of Johns Hopkins University School of Medicine characterized medical error as the third leading cause of death in the U.S., with more than a quarter-million deaths annually.

The article attracted substantial attention, providing both a boost to awareness of patient safety (as reflected in the number of internet searches for the term “medical error”; see chart above) and also a glimpse into the challenges of measuring harm.

As you may have read, the Makary/Daniel estimate was not actually based on new primary research. Instead, the authors extrapolated results from four earlier studies to all 35.4 million U.S. hospital admissions in 2013.

Four vastly different estimates

That yielded four different estimates of the number of deaths due to medical error in U.S. hospitals, ranging from 135,000 to 400,000. Makary and Daniel then averaged these four numbers, leading to a very precise-seeming estimate of 251,454 deaths from in-hospital errors annually. If accurate, they noted, this would place medical error at number three on the Centers for Disease Control and Prevention’s (CDC) list of the leading causes of death, behind only heart disease and cancer.
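
For readers who want to see the arithmetic, the sketch below illustrates the extrapolate-and-average approach. Only the 35.4 million admissions figure and the 135,000–400,000 range come from the article; the per-study rates (and the "Study A"–"Study D" labels) are hypothetical placeholders chosen to fall within that range, so the output demonstrates the method rather than reproducing the published 251,454 figure.

```python
# Illustrative sketch of the extrapolate-and-average method described above.
# The per-study death rates are hypothetical placeholders, NOT the figures
# reported in the four underlying studies.

ADMISSIONS_2013 = 35_400_000  # total U.S. hospital admissions in 2013

# Hypothetical deaths-from-error per admission, chosen so the extrapolated
# totals span roughly 135,000 to 400,000.
hypothetical_rates = {
    "Study A": 0.0038,
    "Study B": 0.0060,
    "Study C": 0.0085,
    "Study D": 0.0113,
}

# Extrapolate each study's rate to all 2013 admissions.
extrapolated = {name: rate * ADMISSIONS_2013 for name, rate in hypothetical_rates.items()}
for name, deaths in extrapolated.items():
    print(f"{name}: {deaths:,.0f} estimated deaths")

# Average the four extrapolated totals, as Makary and Daniel did.
average = sum(extrapolated.values()) / len(extrapolated)
print(f"Average across the four studies: {average:,.0f}")
```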

So far, so good. But why, an intelligent reader might ask, do such numbers vary so widely?


The table below summarizes some of the factors. As you can see, the reports differed significantly in their methods, data sources, populations, and time periods, and each had its limitations. For example:

  • The HealthGrades study identified errors by applying Agency for Healthcare Research and Quality Patient Safety Indicators (PSIs) to Medicare data, but had to exclude several PSIs due to potential variations in hospital coding practices. Some authors have argued that PSIs yield too many false positives, but the question remains open — particularly with respect to lethal errors. And alternative methods, such as voluntary reporting or the Institute for Healthcare Improvement’s Global Trigger Tool (used by the other three studies), have also been criticized as prone to under-reporting or use of clinical judgment that varies among reviewers.
  • All the studies relied on medical records or billing data to identify errors. But the absence of evidence does not mean there was no error. Indeed, other research suggests that errors noted during real-time observation or through expanded clinical and administrative data are often not documented in the medical record.
  • Two of the four reports (HealthGrades and the Inspector General) focused on just one patient population, Medicare beneficiaries. Adults 65 and older account for about 75% of inpatient hospital deaths. Extrapolating from their experience to all patients could introduce bias if the two groups (Medicare beneficiaries vs. other patients) differ, as illustrated in the sketch after this list.
  • A similar issue arises in the Classen study, which focused on errors at three institutions selected in part because they were large, tertiary care teaching hospitals with “well-established operational patient safety programs.” These criteria could make the hospitals’ experience different from that of other hospitals, which would again skew the resulting estimate when applied to all admissions.

These and other problems illustrate why extrapolating these studies to all U.S. hospital admissions produces such a wide range of numbers. The variation is inherent in the limitations of our data on patient safety.
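
To make the extrapolation concern concrete, here is a minimal sketch in which every rate and population share is hypothetical and chosen purely for illustration. It shows how applying a rate measured in one patient population (such as Medicare beneficiaries) to all admissions skews the total whenever the two groups' underlying rates differ; in this hypothetical the Medicare rate is higher, so the naive extrapolation overstates the total, and reversing the rates would understate it.

```python
# Hypothetical illustration of extrapolation bias. None of these rates or
# shares come from the studies discussed above.

total_admissions = 35_400_000  # 2013 U.S. hospital admissions (from the article)

# Suppose Medicare beneficiaries are a minority of admissions but have a
# higher error-related death rate than other patients (both rates invented).
medicare_share = 0.40
medicare_rate = 0.010   # hypothetical deaths per admission
other_rate = 0.004      # hypothetical deaths per admission

# "True" total: mix the two groups according to their shares.
true_total = total_admissions * (medicare_share * medicare_rate
                                 + (1 - medicare_share) * other_rate)

# Naive extrapolation: apply the Medicare rate to every admission.
naive_total = total_admissions * medicare_rate

print(f"True total:  {true_total:,.0f}")
print(f"Naive total: {naive_total:,.0f}  (overstated because the groups differ)")
```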

[Table: Why the four estimates of error-related deaths differed]

But that doesn’t mean Makary and Daniel’s estimate should be ignored or discounted. On the contrary, there are reasons why their number may actually underestimate the impact of medical error.

For example, while Makary and Daniel focused on deaths among hospital inpatients, an even greater number of people die elsewhere, including at home, in long-term care facilities, or in the emergency room. The CDC says deaths in a hospital make up less than one-third of all deaths (about 715,000 out of 2.5 million in 2010). So the actual number of error-related deaths across all locations could well be higher too.
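
As a quick check on that proportion, the snippet below uses only the 2010 CDC figures cited above; the point is simply that far more people die outside hospitals than in them.

```python
# Checking the CDC figures cited above (2010 data).
hospital_deaths = 715_000
all_deaths = 2_500_000

share_in_hospital = hospital_deaths / all_deaths
deaths_elsewhere = all_deaths - hospital_deaths

print(f"Share of deaths in hospitals: {share_in_hospital:.1%}")  # ~28.6%, under one-third
print(f"Deaths outside hospitals:     {deaths_elsewhere:,}")     # ~1.8 million
```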

Few errors result in death

The Makary/Daniel data are also confined to errors that proved fatal, whereas most medical mistakes are not lethal. Another of the four reports, Landrigan et al., found that only 2.4% of all harms identified in that study led to a death.

Yet many non-fatal errors cause serious harm. In that study, more than half of non-lethal harms resulted in prolonged hospitalization or worse, based on discharge records. And a 2014 survey of Massachusetts residents, conducted by the Harvard School of Public Health for the Betsy Lehman Center, pointed in the same direction: nearly one in four respondents (23 percent) said they or someone close to them had experienced a preventable medical error during the previous five years, and 59 percent of those errors resulted in serious harm.

Thus, both medical records and patients’ impressions point to a large number of non-fatal errors that, while not the focus of the Makary/Daniel article, nonetheless cause serious harm.

So while it’s helpful to understand the limitations of these estimates, all of this work underscores what the Institute of Medicine highlighted in its 1999 report, “To Err Is Human”: the number of serious medical errors is too high, and much remains to be done to reduce patient harm in the health care system.


We want to hear from you!

Email us your feedback and comments: patientsafetybeat@state.ma.us