Choosing the Right Data to Analyze

How do you know you're selecting the right data samples to investigate? The lucky and unlucky subjects of your study won't stay exceptional forever, a phenomenon known as regression to the mean. Alex Reinhart, a Carnegie Mellon University statistics instructor, explains the impact this observation has on your analytics results.

[Editor’s note: “Regression to the Mean” is excerpted with permission from the publisher, No Starch Press. From Statistics Done Wrong by Alex Reinhart. Copyright © 2015. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher. A complete list of citations can be found in the book.]

Imagine tracking some quantity over time: the performance of a business, a patient's blood pressure, or anything else that varies gradually with time. Now pick a date and select all the subjects that stand out: the businesses with the highest revenues, the patients with the highest blood pressures, and so on. What happens to those subjects the next time we measure them?

Well, we've selected all the top-performing businesses and patients with chronically high blood pressure. But we've also selected businesses having an unusually lucky quarter and patients having a particularly stressful week. These lucky and unlucky subjects won't stay exceptional forever; measure them again in a few months, and they'll be back to their usual performance.

This phenomenon, called regression to the mean, isn't some special property of blood pressures or businesses. It's just the observation that luck doesn't last forever. On average, everyone's luck is average.
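The effect is easy to see in a quick simulation. The sketch below is illustrative, not from the book: it assumes some made-up numbers (10,000 subjects, a stable level around 100, and day-to-day noise of the same size), selects the top 5% on one noisy measurement, and measures them again.

```python
import random

random.seed(42)

# Each subject has a stable "true" level plus day-to-day luck (noise).
true_levels = [random.gauss(100, 10) for _ in range(10_000)]

def measure(levels):
    # One noisy measurement per subject.
    return [t + random.gauss(0, 10) for t in levels]

first = measure(true_levels)
second = measure(true_levels)

# Select the top 5% on the first measurement...
cutoff = sorted(first)[int(0.95 * len(first))]
top = [i for i, x in enumerate(first) if x >= cutoff]

avg_first = sum(first[i] for i in top) / len(top)
avg_second = sum(second[i] for i in top) / len(top)

print(f"top group, first measurement:  {avg_first:.1f}")
print(f"top group, second measurement: {avg_second:.1f}")
```

The selected group's second average falls back toward the overall mean of 100: the stable part of their scores persists, but the luck that got many of them into the top group does not repeat.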

Francis Galton observed this phenomenon as early as 1869. While tracing the family trees of famous and eminent people, he noticed that the descendants of famous people tended to be less famous. Their children may have inherited the great musical or intellectual genes that made their parents so famous, but they were rarely as eminent as their parents. Later investigation revealed the same behavior for heights: unusually tall parents had children closer to average height, and unusually short parents had children taller than they were.

Returning to the blood pressure example, suppose I pick out patients with high blood pressure to test an experimental drug. There are several reasons their blood pressure might be high, such as bad genes, a bad diet, a bad day, or even measurement error. Though genes and diet are fairly constant, the other factors can cause someone's measured blood pressure to vary from day to day. When I pick out patients with high blood pressure, many of them are probably just having a bad day or their blood pressure cuff was calibrated incorrectly.

And while your genes stay with you your entire life, a poorly calibrated blood pressure cuff does not. For those unlucky patients, their luck will improve soon enough, whether I treat them or not. My experiment is biased toward finding an effect, purely by virtue of the criterion I used to select my subjects. To correctly estimate the effect of the medication, I need to randomly split my sample into treatment and control groups. I can claim the medication works only if the treatment group's average blood pressure improvement is substantially better than the control group's.
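A simulation makes the bias concrete. This sketch uses invented numbers (20,000 patients, stable pressure around 120, transient swings of similar size, an enrollment cutoff of 140) and gives no one any drug at all, yet the enrolled group still "improves" on re-measurement.

```python
import random

random.seed(0)

N = 20_000
# Measured pressure = stable component + transient component (stress,
# a bad day, a miscalibrated cuff). Nobody here receives any treatment.
stable = [random.gauss(120, 10) for _ in range(N)]

def measure(i):
    return stable[i] + random.gauss(0, 10)

screening = [measure(i) for i in range(N)]

# Enroll everyone whose screening reading exceeds 140.
enrolled = [i for i in range(N) if screening[i] > 140]

before = sum(screening[i] for i in enrolled) / len(enrolled)
after = sum(measure(i) for i in enrolled) / len(enrolled)

print(f"enrolled patients at screening: {before:.1f}")
print(f"same patients, re-measured:     {after:.1f}")
```

The readings drop with no intervention whatsoever. This is why the control group matters: both groups regress toward the mean by the same amount, so the difference between them isolates the drug's real effect.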

Another example of regression to the mean is test scores. In the chapter on statistical power, I discussed how random variation is greater in smaller schools, where the luck of an individual student has a greater effect on the school's average results. This also means that if we pick out the best-performing schools—those that have a combination of good students, good teachers, and good luck—we can expect them to perform less well next year simply because good luck is fleeting. As is bad luck: the worst schools can expect to do better next year—which might convince administrators that their interventions worked, even though it was really only regression to the mean.
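The small-school effect can also be simulated. In this sketch (with made-up sizes and scores, not data from the book), every student is drawn from the same score distribution, so no school is genuinely better than any other; the small schools nonetheless crowd the top of the rankings.

```python
import random

random.seed(1)

# 200 small schools and 200 large ones; all students are drawn from the
# SAME distribution, so differences in school averages are pure luck.
sizes = [20] * 200 + [500] * 200

def school_average(n):
    return sum(random.gauss(70, 12) for _ in range(n)) / n

averages = [(school_average(n), n) for n in sizes]
averages.sort(reverse=True)

top20 = [n for _, n in averages[:20]]
small_in_top = sum(1 for n in top20 if n == 20)
print(f"small schools among the top 20: {small_in_top} of 20")
```

A school of 20 students has an average that swings far more from year to year than a school of 500, so the best (and worst) rankings are dominated by small schools having a lucky (or unlucky) year.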

A final, famous example dates back to 1933, when the field of mathematical statistics was in its infancy. Horace Secrist, a statistics professor at Northwestern University, published The Triumph of Mediocrity in Business, which argued that unusually successful businesses tend to become less successful and unsuccessful businesses tend to become more successful: proof that businesses trend toward mediocrity. This was not a statistical artifact, he argued, but a result of competitive market forces. Secrist supported his argument with reams of data, numerous charts and graphs, and even citations of Galton's work on regression to the mean. Evidently, Secrist did not understand Galton's point.

Secrist's book was reviewed by Harold Hotelling, an influential mathematical statistician, for the Journal of the American Statistical Association. Hotelling pointed out the fallacy and noted that one could easily use the same data to prove that businesses trend away from mediocrity: instead of picking the best businesses and following their decline over time, track their progress from before they became the best. You will invariably find that they improve. Secrist's arguments, Hotelling wrote, "really prove nothing more than that the ratios in question have a tendency to wander about."
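Hotelling's reversal is easy to demonstrate. The sketch below invents three years of performance for 5,000 firms (stable level plus year-to-year luck), selects the top 5% in the middle year, and then looks both forward and backward in time.

```python
import random

random.seed(7)

# Each firm's yearly performance wanders around its own stable level.
N = 5_000
base = [random.gauss(50, 5) for _ in range(N)]
years = [[b + random.gauss(0, 5) for b in base] for _ in range(3)]

# Pick the top firms in the MIDDLE year, then look in both directions.
cutoff = sorted(years[1])[int(0.95 * N)]
top = [i for i in range(N) if years[1][i] >= cutoff]

def avg(t):
    return sum(years[t][i] for i in top) / len(top)

print(f"year before selection: {avg(0):.1f}")
print(f"selection year:        {avg(1):.1f}")
print(f"year after:            {avg(2):.1f}")
```

The same firms look like they are declining if you follow them forward and improving if you trace them backward. Neither trend is real; both are artifacts of selecting on a lucky year.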

Alex Reinhart is a statistics instructor and Ph.D. student at Carnegie Mellon University. He received his BS in physics at the University of Texas at Austin and does research on locating radioactive devices using statistics and physics.
