Outliers are one of those realities of data analysis that no one can avoid.
Those pesky extreme values cause biased parameter estimates, non-normality in otherwise beautifully normal variables, and inflated variances.
Everyone agrees that outliers cause trouble with parametric analyses. But not everyone agrees that they’re always a problem, or what to do about them even if they are.
Sometimes a nonparametric or robust alternative is available — and sometimes not.
There are a number of approaches in statistical analysis for dealing with outliers and the problems they create. It’s common for committee members or Reviewer #2 to have very strong opinions that there is one and only one good approach.
Two approaches that I’ve commonly seen are: 1) delete outliers from the sample, or 2) winsorize them (i.e., replace the outlier value with one that is less extreme).
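To make those two approaches concrete, here is a minimal sketch in Python (not from the original post). The 3-standard-deviation cutoff and the 5% winsorizing limits are arbitrary illustrations, not recommendations:

```python
import numpy as np
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(0)
# 99 well-behaved scores plus one extreme value
scores = np.append(rng.normal(loc=100, scale=15, size=99), 300)

# Approach 1: delete anything more than 3 SD from the mean
z = (scores - scores.mean()) / scores.std()
trimmed = scores[np.abs(z) <= 3]

# Approach 2: winsorize, pulling the most extreme 5% in each tail
# in to the value at the cutoff percentile
winsorized = winsorize(scores, limits=(0.05, 0.05))

print(len(scores), len(trimmed))        # deletion shrinks the sample
print(scores.max(), winsorized.max())   # winsorizing pulls the extreme value in
```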
The problem with both of these “solutions” is that they also cause problems — biased parameter estimates and underweighted or eliminated valid values.
Here’s the thing: not all outliers are the same. Some have strong influence, but not all. Some are valid and important data values. Others are simply errors.
So rather than give you a blanket rule or recommendation, I suggest you take some time to figure out two things:
1. Are the outliers actually causing any problems with influence or assumptions? (A sketch of these checks follows the list.)
2. Where did the outliers come from? You can’t always tell, but considering different possibilities can help inform the best way to proceed.
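To make the first question concrete, here is a minimal sketch, assuming a simple regression of y on x in a pandas DataFrame (all names here are hypothetical). It asks whether any point is actually influential (Cook’s distance) and whether the residuals are badly non-normal, rather than reacting to the raw value of the outlier itself:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

# Hypothetical data with one planted extreme point
rng = np.random.default_rng(1)
df = pd.DataFrame({'x': rng.normal(size=50)})
df['y'] = 2 * df['x'] + rng.normal(size=50)
df.loc[49, ['x', 'y']] = [4.0, -10.0]

model = sm.OLS(df['y'], sm.add_constant(df['x'])).fit()

# Influence: which points actually move the estimates?
cooks_d, _ = model.get_influence().cooks_distance
print(df[cooks_d > 4 / len(df)])   # a common, but arbitrary, flagging threshold

# Assumptions: are the residuals badly non-normal?
print(stats.shapiro(model.resid))
```

If Cook’s distance is small and the residuals look fine, the outlier may not be causing any problem worth fixing.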
Possible causes of outliers
There are really two basic origins of outliers: either they are errors, or they are genuine but extreme values. Errors can occur in measurement, in data entry, or in sampling.
I assume the first two errors are pretty self-explanatory, so let’s talk a little about sampling errors.
The basic idea here is that the outlier is not from the same population as the rest of the sample.
For example, as a consultant I once worked with a data set of English reading scores for bilingual first graders. One student had a very low score, but it turned out that the child wasn’t actually bilingual. He spoke another language at home and had not yet learned English. In other words, this student was not from the bilingual first grader population, even though he was somehow included in the study.
Here’s another example of a sampling error that’s a little less obvious. Cognitive psychology and linguistics often use reaction times as dependent variables.
Reaction times are typically skewed to the right, but even so, there are often high or low outliers beyond the range of times that are reasonable for someone who is actually trying to perform the task at hand.
For example, participants may be told to indicate whether a presented string of letters is an actual word or not, and their time to answer is recorded under different cognitive loads. Reaction times so fast that the person couldn’t possibly have read the string indicate that the response is not part of the population of reaction times to the task. Rather, it’s part of the population of times that result from holding down the space bar for every trial in order to get the experiment over with.
A reaction time that is very slow may indicate a score from the population of reaction times that occur when participants are not paying attention to the task at hand.
Dropping errors like these is entirely reasonable because they’re not from the population you’re trying to measure.
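As a sketch of what that can look like in practice, suppose the reaction times sit in a pandas DataFrame with a column rt_ms (a hypothetical name), and that for this particular task anything under about 200 ms is physically impossible while anything over about 3,000 ms merely deserves a closer look. Those cutoffs are illustrative assumptions, not general rules:

```python
import pandas as pd

# Hypothetical trial-level data; 'rt_ms' is reaction time in milliseconds
trials = pd.DataFrame({'subject': [1, 1, 1, 2, 2, 2],
                       'rt_ms':   [612, 75, 843, 954, 701, 9870]})

# Too fast to have read the string at all: not from the task's population, so drop them
cleaned = trials[trials['rt_ms'] >= 200]

# Very slow: flag for inspection only -- some of these may be genuine responses
print(cleaned[cleaned['rt_ms'] > 3000])
```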
In contrast, it’s not reasonable to assume that all long reaction times are errors, and it’s not reasonable to “fix” genuine data points.
My advice? Take the time to investigate each one rather than using a simple rule like “delete or winsorize all outliers over 3 standard deviations from the mean.”
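If you do screen with a rule like that, one gentler option is to use it only to flag cases for review rather than to change them automatically. A minimal sketch (hypothetical column names):

```python
import numpy as np
import pandas as pd

# Hypothetical data; 'id' lets you trace each flagged case back to its source
rng = np.random.default_rng(2)
df = pd.DataFrame({'id': range(1, 101),
                   'score': rng.normal(loc=100, scale=15, size=100)})
df.loc[99, 'score'] = 250   # one extreme value

# Flag -- don't delete or winsorize -- anything beyond 3 SD, then investigate each case
z = (df['score'] - df['score'].mean()) / df['score'].std()
print(df[np.abs(z) > 3])
```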