
The NNT: An Overhyped and Confusing Statistic

— Some pitfalls of summing up evidence with Numbers Needed to Treat.

Last Updated March 12, 2015
MedicalToday

The Number Needed to Treat (NNT) is advocated with so much enthusiasm that you would think it was beyond criticism. A few weeks ago, a post in the New York Times even promoted it to a general audience.

But I believe the NNT really isn't suitable for communicating with patients. And that's not just because it takes data about patients, inverts it into a "treater" perspective, and then requires patients to go through cognitive gymnastics to get back to their own point of view. That's a big part -- but not all -- of the problem.

The NNT was invented in the belief that it would be easy to interpret and would sum up the usefulness of a treatment in a single number. It tells you that in a particular group of patients, assigned an intervention in a particular way, the average number of patients who benefited was 1 in x.

Here are some alternative ways of presenting this kind of data:

  1. Natural frequency (and event rate): 12 of 100 patients in a control group had heart attacks, but only 6 of 100 in a treatment group. (Heart attacks were prevented for 6% of patients.)
  2. Relative risk reduction (RRR) (6/12): The risk of heart attacks was cut by 50%.
  3. Absolute risk reduction (ARR) (12-6): The heart attack rate dropped from 12% to 6% (or 6% fewer patients had heart attacks).
  4. Number needed to treat (NNT) (inversion of the ARR: 100 ÷ 6): The number needed to treat to prevent 1 heart attack was 17.
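The arithmetic behind these four presentations can be sketched in a few lines. This is only an illustration: the function name is my own, and the event counts are the ones from the example above.

```python
def risk_summaries(control_events, treated_events, n=100):
    """Compute the four presentations of the same trial result."""
    cer = control_events / n   # control event rate (12% here)
    eer = treated_events / n   # treatment event rate (6% here)
    arr = cer - eer            # absolute risk reduction
    rrr = arr / cer            # relative risk reduction
    nnt = 1 / arr              # number needed to treat
    return arr, rrr, nnt

arr, rrr, nnt = risk_summaries(12, 6)
print(f"ARR: {arr:.0%}, RRR: {rrr:.0%}, NNT: {round(nnt)}")
# prints: ARR: 6%, RRR: 50%, NNT: 17
```

Note that all four numbers are algebraic rearrangements of the same two event rates -- the differences between them are entirely about presentation, which is the point of this post.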

Which of these is the hardest for patients and clinicians to understand? The evidence points to the NNT being the hardest. That's one of the conclusions of systematic reviews of research on communicating risk. There isn't a powerful justification here for manipulating absolute risk data this way. And there should be no surprise, either, when a survey suggests that less than half of clinicians consider themselves confident with the NNT.

Research on the effectiveness of communication isn't straightforward, though. So it leaves lots of wiggle room in interpreting it -- and room for people to discount it, too. Which is probably why so many supporters of evidence-based medicine can remain in favor of a form of communication that itself is, let's face it, pretty contrary to the evidence.

I used to spend a considerable part of my time working with research on the effectiveness of communication. From 1997 to 2001 I was the foundation Coordinating Editor of a group reviewing that research. I left that field, though, because the research wasn't as useful for me in practice as I needed it to be.

Communication interventions tend to be complex, with lots of confounders. And communication effectiveness research has a lot of studies done in unrepresentative people, making hypothetical decisions, in artificial circumstances. I, on the other hand, had to communicate with patients who didn't feel as obliged to keep paying attention to information as they probably did in a study.

Effectiveness research in communication has to be placed in context with knowledge gained elsewhere -- including knowledge of the basics of literacy and numeracy. And it's the basics of numeracy that most strongly inform my prior assumptions when I look at this kind of research.

One of the basic findings on weaker numeracy is that people struggle with converting between numerical formats. Moving between numbers where the denominators shift is a kiss of death to accurate comprehension.

Yet shifting the frame of reference is the essential nature of the NNT. It flips the constant from the denominator (as in a percentage, which is always out of 100) to the numerator -- the 1-in-x construct. That's more than people can generally manage. And you have to remember which way is up, because that switches too: a low NNT is good, whereas a low NNH (Number Needed to Harm) is bad.

With that in mind, look at an example taken from the NNT website that has been promoted recently. For a single intervention there's 1 in 83, 1 in 28, 1 in 43, and more. The corresponding percentages for those? 1.2%, 3.6%, and 2.3%.
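As a rough sketch of the mental arithmetic a reader has to do to compare those figures -- dividing 100 by each NNT to get back to a percentage (the values are the ones quoted above):

```python
# Converting "1 in x" figures back to percentages: the conversion a
# reader must do mentally to compare NNTs with each other or with
# event rates. The denominators come from the example in the text.
nnts = [83, 28, 43]
percentages = [100 / x for x in nnts]
for x, p in zip(nnts, percentages):
    print(f"1 in {x}  ->  {p:.1f}%")
# prints: 1 in 83 -> 1.2%, 1 in 28 -> 3.6%, 1 in 43 -> 2.3%
```

Each conversion is trivial on its own; the trouble is that readers are asked to do it repeatedly, across shifting denominators, while also keeping track of which direction is "good."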

Natural frequencies, event rates, and absolute risk differences don't make everything simple, of course -- especially when the differences are very small. But they are more straightforward, and being better understood may mean people can make decisions with closer fidelity to their values. Relative risks on their own, without people's baseline risks in view, lead to overdramatic impressions of results; they're essential for applying results to individuals, though. NNTs lead to somewhat underplayed impressions, and they're not so essential for interpreting results when relative and absolute risks are provided.

Statistics aren't value-free, even though they are more objective than many other ways of distilling information. They're difficult to translate for people who don't already "get" them very well in their original form. You need lots of context and several statistics to get a handle on the results of most clinical studies anyway. I can understand the desire to invent, or grasp onto, something new. But innovation isn't always progress, is it?

If you'd like to know how this sausage was made, check out my tally of trials of NNTs, covering just under 27,000 participants.

The cartoon in this post is my own.

Hilda Bastian is a senior clinical research scientist. She works at the National Institutes of Health as editor for the clinical effectiveness resource PubMed Health and as editor of PubMed's scientific publication commenting system. She is also an academic editor at PLOS Medicine. The thoughts Hilda Bastian expresses here at Third Opinion are personal, and do not necessarily reflect the views of the National Institutes of Health or the U.S. Department of Health and Human Services.

Check out this response from David H. Newman, MD, founder of theNNT.com, about how physicians should use the NNT and soothe angry birds' ruffled feathers.