Just last week, a colleague mentioned that while he does a lot of study design these days, he no longer does much data analysis.
His main reason was that 80% of the work in data analysis is preparing the data for analysis. Data preparation is s-l-o-w and he found that few colleagues and clients understood this.
Consequently, he was running into expectations that he should analyze a raw data set in an hour or so.
You know, by clicking a few buttons.
I see this as well with researchers new to data analysis. While they know it will take longer than an hour, they still have unrealistic expectations about how long it takes.
So I am here to tell you, the time-consuming part is preparing the data. Weeks or months is a realistic time frame. Hours is not.
(Feel free to send this to your colleagues who want instant results.)
There are three parts to preparing data: cleaning it, creating necessary variables, and formatting all variables.
Data Cleaning
Data cleaning means finding and eliminating errors in the data. How you approach it depends on how large the data set is, but the kinds of things you’re looking for are:
- Impossible or otherwise incorrect values for specific variables
- Cases in the data who met exclusion criteria and shouldn’t be in the study
- Duplicate cases
- Missing data and outliers (don’t delete all outliers, but you may need to investigate to see if one is an error)
- Skip-pattern or logic breakdowns
- String values that aren't written consistently (male ≠ Male in most statistical software)
You can’t avoid data cleaning and it always takes a while, but there are ways to make it more efficient. For example, rather than search through the data set for impossible values, print a table of data values outside a normal range, along with subject ids.
This is where learning how to code in your statistical software of choice really helps. You’ll need to subset your data using IF statements to find those impossible values.
But if your data set is anything but small, you can also save yourself a lot of time, code, and errors by incorporating efficiencies like loops and macros so that you can perform some of these checks on many variables at once.
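For instance, here is a minimal sketch of that kind of check in SPSS syntax. The variable names (id, age, q1 to q5) and the cutoffs are hypothetical; substitute your own, and the same idea carries over to SAS or other packages.

    * List subject IDs with out-of-range ages (hypothetical variable names and limits).
    TEMPORARY.
    SELECT IF (age < 18 OR age > 95).
    LIST VARIABLES=id age.

    * Check a whole set of 1-to-5 items at once with a DO REPEAT loop.
    DO REPEAT item = q1 q2 q3 q4 q5
             /flag = flag1 flag2 flag3 flag4 flag5.
      COMPUTE flag = (item < 1 OR item > 5).
    END REPEAT.
    FREQUENCIES VARIABLES=flag1 flag2 flag3 flag4 flag5.

TEMPORARY limits the SELECT IF to the LIST command that follows it, so no cases are actually dropped from the data set while you look at the suspicious values.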
Creating New Variables
Once the data are free of errors, you need to set up the variables that will directly answer your research questions.
It’s a rare data set in which every variable you need is measured directly.
So you may need to do a lot of recoding and computing of variables.
Examples include reverse coding items, computing scale scores from sets of items, collapsing continuous variables into categories, and creating dummy variables.
And of course, part of creating each new variable is double-checking that it worked correctly.
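As a rough illustration, here is what some of that recoding and computing might look like in SPSS syntax, along with a quick check that the recode worked. The variable names and cutoffs are made up.

    * Compute a scale score from individual items
    * (hypothetical item names; TO assumes the items are consecutive in the file).
    COMPUTE depress_mean = MEAN(dep1 TO dep10).

    * Collapse income in dollars into three categories (hypothetical cutoffs).
    RECODE income (LO THRU 24999.99=1) (25000 THRU 74999.99=2) (75000 THRU HI=3) INTO inccat.

    * Double-check: the min and max of income within each category should match the cutoffs.
    MEANS TABLES=income BY inccat
      /CELLS=COUNT MIN MAX.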
Formatting Variables
Both original and newly created variables need to be formatted correctly for two reasons:
First, so your software works with them correctly. Failing to format a missing value code or a dummy variable correctly will have major consequences for your data analysis.
Second, it’s much faster to run the analyses and interpret results if you don’t have to keep looking up which variable Q156 is.
Examples include:
- Setting all missing data codes so missing data are treated as such
- Formatting date variables as dates, numerical variables as numbers, etc.
- Labeling all variables and categorical values so you don’t have to keep looking them up.
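In SPSS syntax, those steps might look something like this. The variable names and missing-data codes here are hypothetical.

    * Declare 999 and -99 as missing-data codes so they aren't treated as real values.
    MISSING VALUES age income (999, -99).

    * Give numeric variables sensible display formats.
    FORMATS income (DOLLAR10.0) age (F3.0).

    * Label variables and their category codes so you never have to look them up.
    VARIABLE LABELS educ 'Highest education completed'.
    VALUE LABELS educ 1 'Less than high school' 2 'High school' 3 'Some college or more'.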
All three of these steps require a solid knowledge of how to manage data in your statistical software. Each package approaches data management a little differently.
It’s also very important to keep track of and be able to easily redo all your steps. Always assume you’ll have to redo something. So use (or record) syntax, not only menus.
Recently I gave a webinar The Steps to Running Any Statistical Model. A few hundred people were live on the webinar. We held a Q&A session at the end, but as you can imagine, we didn’t have time to get through all the questions.
This is the first in a series of written answers to some of those questions. I’ve tried to sort them by the step each is about.
A written list of the steps is available here.
If you missed the webinar, you can view the video here. It’s free.
Questions about Step 1. Write out research questions in theoretical and operational terms
Q: In research designs that use secondary data, have you found that this type of data source affects the research question? That is, should one have a strong understanding of the data to ensure the theoretical concept can be operationalized to fit the data? My research question changes the more I learn.
Yes. There’s no point in asking research questions that the data you have available can’t answer.
So the order of the steps would have to change—you may have to start with a vague idea of the type of research question you want to ask, but only refine it after doing some descriptive statistics, or even running an initial model.
Q: How soon in the process should one start with the first group of steps?
You want to at least start thinking about them as you’re doing the lit review and formulating your research questions.
Think about how you could measure variables, which ones are likely to be collinear or have a lot of missing data. Think about the kind of model you’d have to do for each research question.
Think of a scenario where the same research question could be operationalized such that the dependent variable is measured either as a continuous variable or as ordered categories. An easy example is income, measured either in actual dollars or in income categories.
By all means, if people can answer the question with a real and accurate number, your analysis will be much, much easier. In many situations, they can’t. They won’t know, remember, or tell you their exact income. If so, you may have to use categories to prevent missing data. But these are things to think about early.
Q: Where in the process do you use existing lit/results to shape the research question and modeling?
I would start by putting the literature review before Step 1. You’ll use that to decide on a theoretical research question, as well as ways to operationalize it.
But it will help you in other places as well. For example, it helps the sample size calculations to have variance estimates from other studies. Other studies may also give you an idea of which variables are likely to have missing data or too little variation to include as predictors. They may change your exploratory factor analysis in Step 7 to a confirmatory one.
In fact, just about every step can benefit from a good literature review.
I am reviewing your notes from your workshop on assumptions. You have made it very clear how to analyze normality for regressions, but I could not find how to determine normality for ANOVAs. Do I check for normality for each independent variable separately? Where do I get the residuals? What plots do I run? Thank you!
I received this great question this morning from a past participant in my Assumptions of Linear Models workshop.
It’s one of those quick questions without a quick answer. Or rather, without a quick and useful answer. The quick answer is:
Do it exactly the same way. All of it.
The longer, useful answer is this: (more…)
Knowing the right statistical analysis to use in any data situation, knowing how to run it, and being able to understand the output are all really important skills for statistical analysis. Really important.
But they’re not the only ones.
Another is having a system in place to keep track of the analyses. This is especially important if you have any collaborators (or a statistical consultant!) you’ll be sharing your results with. You may already have an effective work flow, but if you don’t, here are some strategies I use. I hope they’re helpful to you.
1. Always use Syntax Code
All the statistical software packages have come up with some sort of easy-to-use, menu-based approach. And as long as you know what you’re doing, there is nothing wrong with using the menus. While I’m familiar enough with SAS code to just write it, I use menus all the time in SPSS.
But even if you use the menus, paste the syntax for everything you do. There are many reasons for using syntax, but the main one is documentation. Whether you need to communicate to someone else or just remember what you did, syntax is the only way to keep track. (And even though, in the midst of analyses, you believe you’ll remember how you did something, a week and 40 models later, I promise you won’t. I’ve been there too many times. And it really hurts when you can’t replicate something).
In SPSS, there are two things you can do to make this seamlessly easy. First, instead of hitting OK, hit Paste. Second, make sure syntax shows up on the output. This is the default in later versions, but you can turn it on in Edit > Options > Viewer. Make sure “Display Commands in Log” and “Log” are both checked. (Note: the menus may differ slightly across versions).
2. If your data set is large, create smaller data sets that are relevant to each set of analyses.
First, statistical software has to read the entire data set to run most analyses and data manipulations. Since that same software is often a memory hog, running anything on a large data set will s-l-o-w down processing. A lot.
Second, it’s just clutter. It’s harder to find the variables you need if you have an extra 400 variables in the data set.
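In SPSS, one way to do this is to save a trimmed copy that keeps only the ID and the variables a given set of analyses needs. The file paths and variable names below are hypothetical.

    GET FILE='C:\project\full_study.sav'.
    * Keep only the variables this set of analyses uses.
    SAVE OUTFILE='C:\project\aim1_analysis.sav'
      /KEEP=id age sex mar4cat income.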
3. Instead of just opening a data set manually, use commands in your syntax code to open data sets.
Why? Unless you are committing the cardinal sin of overwriting your original data as you create new variables, you have multiple versions of your data set. Having the data set listed right at the top of the analysis commands makes it crystal clear which version of the data you analyzed.
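For example, the top of each SPSS analysis file can open the exact version of the data it relies on (file name hypothetical):

    * Open the specific version of the data this analysis is based on.
    GET FILE='C:\project\aim1_analysis_v3.sav'.
    DATASET NAME aim1.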
4. Label your variables and values right in the data set
I know you remember today that your variable labeled Mar4cat means marital status in 4 categories and that 0 indicates ‘never married.’ It’s so logical, right? Well, it’s not obvious to your collaborators and it won’t be obvious to you in two years, when you try to re-analyze the data after a reviewer doesn’t like your approach.
Even if you have a separate code book, why not put it right in the data? It makes the output so much easier to read, and you don’t have to worry about losing the code book. It may feel like more work upfront, but it will save time in the long run.
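Using the Mar4cat example, the labeling would look something like this in SPSS syntax (the codes other than 0 are invented for illustration):

    VARIABLE LABELS Mar4cat 'Marital status (4 categories)'.
    VALUE LABELS Mar4cat
      0 'Never married'
      1 'Married'
      2 'Divorced or separated'
      3 'Widowed'.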
5. Put data manipulation, descriptive analyses, and models in separate syntax files
When I do data analysis, I follow my Steps approach, which means first I create all the relevant variables, then run univariate and bivariate statistics, then initial models, and finally hone the models.
And I’ve found that if I keep each of these steps in separate program files, it makes it much easier to keep track of everything. If you’re creating new variables in the middle of analyses, it’s going to be harder to find the code so you can remember exactly how you created that variable.
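If you're in SPSS, one way to tie those separate files together is a short master file that runs them in order with INSERT; the file names here are hypothetical.

    * Run the whole project, one step per syntax file.
    INSERT FILE='C:\project\01_create_variables.sps'.
    INSERT FILE='C:\project\02_descriptives.sps'.
    INSERT FILE='C:\project\03_models.sps'.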
6. As you run different versions of models, label them with model numbers
When you’re building models, you’ll often have a progression of different versions. Especially when I have to communicate with a collaborator, I’ve found it invaluable to number these models in my code and print that model number on the output. It makes a huge difference in keeping track of nine different models.
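In SPSS syntax that can be as simple as a TITLE command right before each model, so the model number prints on the output. The model number, variable names, and specification below are hypothetical.

    TITLE 'Model 3: adds the age by sex interaction'.
    REGRESSION
      /DEPENDENT depress
      /METHOD=ENTER age sex age_x_sex.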
7. As you go along with different analyses, keep your syntax clean, even if the output is a mess.
Data analysis is a bit of an iterative process. You try something, discover errors, realize that variable didn’t work, and try something else. Yes, base it on theory and have a clear analysis plan, but even so, the first analyses you run won’t be your last.
Especially if you make mistakes as you go along (as I inevitably do), your output gets pretty littered with output you don’t want to keep. You could clean it up as you go along, but I find that’s inefficient. Instead, I try to keep my code clean, with only the error-free analyses that I ultimately want to use. It lets me try whatever I need to without worry. Then at the end, I delete the entire output and just rerun all code.
One caveat here: You may not want to take this approach if you have VERY computing-intensive analyses, like a generalized linear mixed model with crossed random effects on a large data set. If your code takes more than 20 minutes to run, this won't be more efficient.
8. Use titles and comments liberally
I’m sure you’ve heard before that you should use lots of comments in your syntax code. But use titles too. Both SAS and SPSS have title commands that allow titles to be printed right on the output. This is especially helpful for naming and numbering all those models in #6.
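For example, in SPSS syntax (SAS has analogous TITLE statements and comments):

    * Comments start with an asterisk and end with a period.
    * Use them to note why a model was run, not just what it is.
    TITLE 'Aim 1 regression models'.
    SUBTITLE 'Model 6: drops respondents missing on income'.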
9. Name output, log, and programs the same
Since you’ve split your programs into separate files for data manipulation, descriptives, initial models, etc., you’re going to end up with a lot of files. What I do is name each output the same name as the program file. (And if I’m in SAS, the log too. Yes, save the log.)
Yes, that means making sure you have a separate output for each section. While it may seem like extra work, it can make looking at each output less overwhelming for anyone you’re sharing it with.
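If you run everything from syntax, you can even save the SPSS output under the matching name from within the program itself (file name hypothetical):

    * Last line of 02_descriptives.sps: save the Viewer contents under the same name.
    OUTPUT SAVE OUTFILE='C:\project\02_descriptives.spv'.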
This year I hired a Quickbooks consultant to bring my bookkeeping up from the stone age. (I had been using Excel).
She had asked for some documents with detailed data, and I tried to send her something else as a shortcut. I thought it was detailed enough. It wasn’t, so she just fudged it. The bottom line was all correct, but the data that put it together was all wrong.
I hit the roof. Internally only, though. I realized it was my own fault for not giving her the info she needed. She did a fabulous job.
But I could not leave the data fudged, even though it all added up to the right amount and was already reconciled. I had to go in and spend hours fixing it. Truthfully, I was a bit of a compulsive nut about it.
And then I had to ask myself why I was so uptight—if accountants think the details aren’t important, why do I? Statisticians are all about approximations and accountants are exact, right?
As it turns out, not so much.
But I realized I’ve had 20 years of training about the importance of data integrity. Sure, the results might be inexact, the analysis, the estimates, the conclusions. But not the data. The data must be clean.
Sparkling, if possible.
In research, it’s okay if the bottom line is an approximation. Because we’re never really measuring the whole population. And we can’t always measure precisely what we want to measure. But in the long run, it all averages out.
But only if the measurements we do have are as accurate as they possibly can be.
Two designs commonly used in epidemiology are the cohort and case-control studies. Both study causal relationships between a risk factor and a disease. What is the difference between these two designs? And when should you opt for the one or the other?
Cohort studies
Cohort studies begin with a group of people (a cohort) free of disease. The people in the cohort are grouped by whether or not they are exposed to a potential cause of disease. The whole cohort is followed over time to see if (more…)