
Dump 'Statistical significance'

Critics argue that statistical significance can be misleading because it sets an arbitrary threshold on the level of uncertainty science should be willing to accept

Ariel Procaccia | Bloomberg 


Did you know that gorging on dark chocolate accelerates weight loss? A study published in 2015 found that a group of subjects who followed a low-carbohydrate diet and ate a bar of dark chocolate daily lost more weight than a group that followed the same diet sans chocolate. This discovery was heralded in some quarters as a scientific breakthrough.

If you’re still hesitant about raiding the supermarket chocolate aisle, rest assured: The study’s results are statistically significant. In theory, this means that the results would be improbable if chocolate did not contribute to weight loss, and therefore we can conclude that it does. A successful test of statistical significance has long been the admission ticket into the halls of scientific knowledge.

But not anymore, if statisticians have their way. In a coordinated assault last week, which included a special issue of the American Statistician and commentary in Nature (supported by 800 signatories), some of the discipline’s luminaries urged scientists to ditch the notion of statistical significance.

Critics argue that statistical significance can be misleading because it sets an arbitrary threshold on the level of uncertainty science should be willing to accept. Roughly speaking, uncertainty is expressed as the likelihood of observing an experimental result by chance, assuming the effect being tested doesn’t actually exist. In statistical lingo this likelihood is known as the p-value. Statistical significance typically requires a p-value of less than 5 per cent, or 0.05. A p-value of 0.049 is under the 5 per cent threshold; thus results returning that value are considered “significant.” If p=0.051, by contrast, the results are “not significant,” despite the tiny difference between the two values.
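To see how mechanical the verdict is, here is a minimal sketch in Python (with invented weight-loss numbers, not the chocolate study’s data) of computing a p-value with a standard two-sample t-test and applying the 0.05 cutoff:

```python
# Illustrative sketch only: two invented groups of weight-loss
# figures (kg), not data from the chocolate study.
from scipy import stats

diet_only = [1.2, 0.8, 1.5, 0.9, 1.1, 1.3, 0.7]
diet_plus_chocolate = [1.9, 1.4, 2.2, 1.6, 1.8, 2.0, 1.5]

# A two-sample t-test asks: if both groups really came from the same
# population, how likely is a difference at least this large by chance?
t_stat, p_value = stats.ttest_ind(diet_plus_chocolate, diet_only)
print(f"p = {p_value:.3f}")

# The conventional verdict hangs on a hard cutoff:
# p = 0.049 is "significant", p = 0.051 is not.
print("significant" if p_value < 0.05 else "not significant")
```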

This has led to myriad problems. One is that there’s a perceived crisis of reproducibility in science, in part because the p-value itself is uncertain: Flawlessly repeating the same experiment can produce different values, crossing the magical significance threshold in either direction. Another problem is the practice of (often innocently) testing many hypotheses and reporting only those that give statistically significant results.
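A quick simulation makes the first point vivid. In this sketch (my construction, not taken from any study), the effect is genuinely real, yet identical replications of the experiment land on both sides of the threshold:

```python
# Sketch: ten flawless replications of the same experiment. The
# effect is real (means differ by half a standard deviation), yet
# the p-value wanders across the 0.05 line from run to run.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
for run in range(1, 11):
    control = rng.normal(0.0, 1.0, size=30)
    treated = rng.normal(0.5, 1.0, size=30)  # genuine effect of 0.5 SD
    _, p = stats.ttest_ind(treated, control)
    print(f"run {run}: p = {p:.3f}",
          "(significant)" if p < 0.05 else "(not significant)")
```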

The latter issue is nicely illustrated by the chocolate study, which was nothing but a sting operation designed to show how easy it is to draw international media attention to flashy results even when the underlying science is cringe-worthy. The experiment was real, but it had only 15 subjects. Worse, 18 different hypotheses were tested, including “chocolate reduces cholesterol” and “chocolate contributes to quality of sleep.” Life may very well be like a box of chocolates, but if you roll the dice enough times, you know exactly what you’re going to get: results that are both statistically significant and fallacious.
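The arithmetic behind the sting is worth spelling out. If the 18 tests were independent and every hypothesis false, the chance of at least one “significant” result at the 0.05 level would be 1 − 0.95^18, about 60 per cent. The sketch below checks this both analytically and by simulation:

```python
# Sketch: chance of at least one false positive when testing 18
# hypotheses at the 0.05 level, all of which are actually false.
import numpy as np

print(f"analytic: {1 - 0.95 ** 18:.3f}")  # about 0.603

# Monte Carlo check: under a true null, a p-value is uniform on
# [0, 1]; draw 18 per trial and count trials with any p < 0.05.
rng = np.random.default_rng(1)
p_values = rng.uniform(size=(100_000, 18))
print(f"simulated: {(p_values < 0.05).any(axis=1).mean():.3f}")
```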

I agree that the term “statistical significance” is part of the problem; abandoning it is the right thing to do. In its place, statisticians advocate a more nuanced view of uncertainty. For example, scientists can report a range of possible conclusions that are compatible (to different degrees) with the data.
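One concrete form such reporting can take is an interval estimate rather than a verdict. A sketch, reusing the invented numbers from the earlier example:

```python
# Sketch: report a 95% confidence interval for the difference in
# mean weight loss instead of a bare significant/not-significant
# verdict. Same invented groups as above.
import numpy as np
from scipy import stats

diet_only = np.array([1.2, 0.8, 1.5, 0.9, 1.1, 1.3, 0.7])
diet_plus_chocolate = np.array([1.9, 1.4, 2.2, 1.6, 1.8, 2.0, 1.5])

n1, n2 = len(diet_only), len(diet_plus_chocolate)
diff = diet_plus_chocolate.mean() - diet_only.mean()

# Pooled-variance standard error, matching the equal-variance t-test.
sp2 = ((n1 - 1) * diet_only.var(ddof=1)
       + (n2 - 1) * diet_plus_chocolate.var(ddof=1)) / (n1 + n2 - 2)
margin = stats.t.ppf(0.975, n1 + n2 - 2) * np.sqrt(sp2 * (1 / n1 + 1 / n2))

print(f"difference: {diff:.2f} kg, "
      f"95% CI [{diff - margin:.2f}, {diff + margin:.2f}]")
```

An interval conveys both the size of the effect and how precisely it was measured, information a binary verdict throws away.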

But the problem runs deeper. The broader issue is that the choice of a career in medicine, the life sciences or the social sciences (with some exceptions, like economics) isn’t typically indicative of a passion, or even an aptitude, for mathematics. Yet these sciences are thoroughly infused with statistics, and a shallow understanding of its principles gives rise to numerous fallacies.

In a 1994 editorial in the BMJ, the late English statistician Douglas Altman wrote that many medical researchers “are not ashamed (and some seem proud) to admit that they don’t know anything about statistics.” It does appear to be a sociological phenomenon.

To find examples of ignorance, one doesn’t even need to look for statistical subtleties. I was amused to read a few years ago of the “one in 48 million baby” who was born in Australia on the same date as her mother and her father. Under the assumptions presumably made by the good doctor who announced the miracle, the odds are actually 1 in 133,225 (1 in 365 squared, not 1 in 365 cubed). The same thing is likely to happen on any given day, somewhere in the world.
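For the record, the correction takes two lines to verify. Under the doctor’s own idealised assumptions (365 equally likely birth dates, independence, no leap years), the baby’s date is given; only the two parents must match it:

```python
# Sketch: the baby's birth date is whatever it is; the question is
# whether mother and father each independently share it.
print(f"1 in {365 ** 2:,}")  # 133,225 -- the correct odds
print(f"1 in {365 ** 3:,}")  # 48,627,125 -- the "one in 48 million" error
```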

These anecdotes don’t amount to statistically significant (oops) evidence, but there are plenty of surveys showing widespread misuse of statistics.

First Published: Sat, March 30 2019. 00:58 IST