The first real data set I ever analyzed was from my senior honors thesis as an undergraduate psychology major. I had taken both intro stats and an ANOVA class, and I applied all my new skills with gusto, analyzing the data every which way.
It wasn’t too many years into graduate school that I realized those analyses were a bit haphazard and not at all well thought out. Twenty years of data analysis experience later, I realize that’s just a symptom of being an inexperienced data analyst.
But even experienced data analysts can get off track, especially with large data sets with many variables. It’s just so easy to try one thing, then another, and pretty soon you’ve spent weeks getting nowhere.
(more…)
Two methods for dealing with missing data, vast improvements over traditional approaches, have become available in mainstream statistical software in the last few years.
Both of the methods discussed here require that the data are missing at random: the probability that a value is missing can depend on observed data, but not on the missing values themselves. If this assumption holds, the resulting estimates (i.e., regression coefficients and standard errors) will be unbiased, with far less loss of power than traditional approaches like listwise deletion.
The first method is Multiple Imputation (MI). Just like the old-fashioned imputation (more…)
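For readers who work in Python, here is a minimal sketch of what multiple imputation can look like in practice, using statsmodels’ chained-equations (MICE) implementation. The simulated data, variable names, and regression formula are placeholders, not part of the original example:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

# Simulated data with about 20% of x1 missing (illustrative only)
rng = np.random.default_rng(42)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1 + 0.5 * x1 - 0.3 * x2 + rng.normal(size=n)
df = pd.DataFrame({"y": y, "x1": x1, "x2": x2})
df.loc[rng.random(n) < 0.2, "x1"] = np.nan

imp_data = mice.MICEData(df)                           # chained-equations imputation model
mi_model = mice.MICE("y ~ x1 + x2", sm.OLS, imp_data)  # analysis model fit to each completed data set
results = mi_model.fit(n_burnin=10, n_imputations=20)  # pools estimates across imputations
print(results.summary())                               # pooled coefficients and standard errors
```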
Q: Do most high impact journals require authors to state which method has been used on missing data?
I don’t usually get far enough in the publishing process to read journal requirements.
But based on my conversations with researchers who both review articles for journals and who deal with reviewers’ comments, I can offer this response.
I would be shocked if journal editors at top journals didn’t want information about the missing data technique. If you leave it out, they’ll assume either that you didn’t have missing data or that you used a default like listwise deletion. (more…)
Sure. One of the big advantages of multiple imputation is that you can use it for any analysis.
It’s one of the reasons large data libraries use it: no matter how researchers use the data, the missing data are handled the same way, and handled well.
I say this with two caveats. (more…)
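To make the “use it for any analysis” point concrete, here is a rough sketch of the general recipe: create several imputed data sets, run whatever analysis you like on each one, then pool the results with Rubin’s rules. The estimates and variances below are made-up numbers, purely for illustration:

```python
import numpy as np

def pool_estimates(estimates, variances):
    """Combine per-imputation estimates with Rubin's rules.

    estimates, variances: one value per imputed data set.
    Returns the pooled estimate and its total variance.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    q_bar = estimates.mean()                 # pooled point estimate
    u_bar = variances.mean()                 # within-imputation variance
    b = estimates.var(ddof=1)                # between-imputation variance
    total_var = u_bar + (1 + 1 / m) * b      # Rubin's total variance
    return q_bar, total_var

# Example: pooling one coefficient estimated from 5 imputed data sets
est, var = pool_estimates([0.42, 0.38, 0.45, 0.40, 0.44],
                          [0.010, 0.011, 0.009, 0.010, 0.012])
print(f"pooled estimate = {est:.3f}, pooled SE = {var ** 0.5:.3f}")
```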
I recently received this question:
I have a scale on which I want to run Cronbach’s alpha. One response category for all items is ‘not applicable’. I want to run Cronbach’s alpha requiring that at least 50% of the items be answered for the scale to be defined. Where that is the case, I want all missing values on that scale replaced by the average of the non-missing items on that scale. Is this reasonable? How would I do this in SPSS?
My Answer:
In RELIABILITY, the SPSS command for running a Cronbach’s alpha, the only options for Missing Data (more…)
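SPSS aside, here is a rough sketch of the rule the questioner describes (require at least 50% of items answered, then substitute each person’s own mean for their remaining missing items, then compute alpha) in Python with pandas. The item names and response values are made up, and ‘not applicable’ responses are assumed to have been recoded to missing beforehand:

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a complete (no-missing) item matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def person_mean_impute(items: pd.DataFrame, min_prop: float = 0.5) -> pd.DataFrame:
    """Keep respondents who answered at least min_prop of the items,
    then replace their remaining missing values with their own item mean."""
    answered = items.notna().mean(axis=1)
    kept = items[answered >= min_prop]
    return kept.apply(lambda row: row.fillna(row.mean()), axis=1)

# Hypothetical item responses; NaN marks missing / 'not applicable'
items = pd.DataFrame({
    "q1": [4, 3, np.nan, 5, 2],
    "q2": [5, np.nan, np.nan, 4, 2],
    "q3": [4, 3, 2, np.nan, 3],
    "q4": [np.nan, 4, np.nan, 5, 2],
})
completed = person_mean_impute(items)
print(cronbach_alpha(completed))
```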
Do you find quizzes irresistible? I do.
Here’s a little quiz about working with missing data:
True or False?
1. Imputation is really just making up data to artificially inflate results. It’s better to just drop cases with missing data than to impute.
2. I can just impute the mean for any missing data. It won’t affect results, and improves power.
3. Multiple Imputation is fine for the predictor variables in a statistical model, but not for the response variable.
4. Multiple Imputation is always the best way to deal with missing data.
5. When imputing, it’s important that the imputations be plausible data points.
6. Missing data isn’t really a problem if I’m just doing simple statistics, like chi-squares and t-tests.
7. The worst thing that missing data does is lower sample size and reduce power.
Answers: (more…)
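Statement 2 is one you can explore for yourself before reading the answers. A quick simulation (hypothetical data, just a sketch) shows how substituting the mean for missing values shrinks the imputed variable’s variance and attenuates its correlation with other variables:

```python
import numpy as np

# What does single mean imputation do to variance and correlation?
rng = np.random.default_rng(0)
n = 1_000
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(scale=0.8, size=n)

x_missing = x.copy()
x_missing[rng.random(n) < 0.3] = np.nan                                  # ~30% of x missing
x_imputed = np.where(np.isnan(x_missing), np.nanmean(x_missing), x_missing)

print("variance of x:         ", round(np.var(x, ddof=1), 3))
print("variance after imputing:", round(np.var(x_imputed, ddof=1), 3))
print("corr(x, y) original:   ", round(np.corrcoef(x, y)[0, 1], 3))
print("corr(x, y) after impute:", round(np.corrcoef(x_imputed, y)[0, 1], 3))
```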