Knowing which statistical analysis to use in any data situation, how to run it, and how to interpret the output are all really important skills for a data analyst. Really important.
But they’re not the only ones.
Another is having a system in place to keep track of the analyses. This is especially important if you have any collaborators (or a statistical consultant!) you’ll be sharing your results with. You may already have an effective workflow, but if you don’t, here are some strategies I use. I hope they’re helpful to you.
1. Always use Syntax Code
All the statistical software packages have come up with some sort of easy-to-use, menu-based approach. And as long as you know what you’re doing, there is nothing wrong with using the menus. While I’m familiar enough with SAS code to just write it, I use menus all the time in SPSS.
But even if you use the menus, paste the syntax for everything you do. There are many reasons for using syntax, but the main one is documentation. Whether you need to communicate to someone else or just remember what you did, syntax is the only way to keep track. (And even though, in the midst of analyses, you believe you’ll remember how you did something, a week and 40 models later, I promise you won’t. I’ve been there too many times. And it really hurts when you can’t replicate something).
In SPSS, there are two things you can do to make this seamlessly easy. First, instead of hitting OK, hit Paste. Second, make sure syntax shows up on the output. This is the default in later versions, but you can turn it on in Edit > Options > Viewer. Make sure “Display Commands in Log” and “Log” are both checked. (Note: the menus may differ slightly across versions.)
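Here’s a minimal sketch of what that looks like: the SET command that echoes syntax in the output, followed by a pasted Frequencies command. The variable name is just a placeholder.

SET PRINTBACK=ON.
* Pasted from Analyze > Descriptive Statistics > Frequencies.
FREQUENCIES VARIABLES=Mar4cat
  /ORDER=ANALYSIS.

Hit Paste instead of OK and commands like these travel with your results.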
2. If your data set is large, create smaller data sets that are relevant to each set of analyses.
First, statistical software often has to read the entire data set for many analyses and data manipulation steps. Since that same software is often a memory hog, running anything on a large data set will s-l-o-w down processing. A lot.
Second, it’s just clutter. It’s harder to find the variables you need if you have an extra 400 variables in the data set.
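A minimal sketch of how this might look in SPSS syntax, with made-up file paths and variable names:

GET FILE='C:\project\full_survey.sav'.
SAVE OUTFILE='C:\project\marital_subset.sav'
  /KEEP=id Mar4cat age educ income.

The /KEEP subcommand writes out only the variables you list, so the analysis file stays small and uncluttered.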
3. Instead of just opening a data set manually, use commands in your syntax code to open data sets.
Why? Unless you are committing the cardinal sin of overwriting your original data as you create new variables, you have multiple versions of your data set. Having the data set listed right at the top of the analysis commands makes it crystal clear which version of the data you analyzed.
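In SPSS, that can be a GET FILE command at the top of every analysis file (the path and dataset name here are just illustrative):

GET FILE='C:\project\marital_subset.sav'.
DATASET NAME marital WINDOW=FRONT.

Anyone reading the syntax, including future you, can see immediately which file fed the analysis.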
4. Use Variable and Value labels religiously
I know you remember today that your variable labeled Mar4cat means marital status in 4 categories and that 0 indicates ‘never married.’ It’s so logical, right? Well, it’s not obvious to your collaborators and it won’t be obvious to you in two years, when you try to re-analyze the data after a reviewer doesn’t like your approach.
Even if you have a separate code book, why not put it right in the data? It makes the output so much easier to read, and you don’t have to worry about losing the code book. It may feel like more work upfront, but it will save time in the long run.
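Here’s a sketch of the labeling syntax. Only the 0 = ‘never married’ coding comes from the example above; the other categories are invented for illustration.

VARIABLE LABELS Mar4cat 'Marital status (4 categories)'.
VALUE LABELS Mar4cat
  0 'Never married'
  1 'Married'
  2 'Divorced or separated'
  3 'Widowed'.

Once these are saved in the data file, every table and chart prints the words instead of the codes.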
5. Put data manipulation, descriptive analyses, and models in separate syntax files
When I do data analysis, I follow my Steps approach, which means first I create all the relevant variables, then run univariate and bivariate statistics, then initial models, and finally hone the models.
And I’ve found that if I keep each of these steps in separate program files, it makes it much easier to keep track of everything. If you’re creating new variables in the middle of analyses, it’s going to be harder to find the code so you can remember exactly how you created that variable.
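One way to keep the steps in separate files but still runnable as a whole is a short master file of INSERT commands (the file names are hypothetical):

INSERT FILE='C:\project\01_create_variables.sps'.
INSERT FILE='C:\project\02_descriptives.sps'.
INSERT FILE='C:\project\03_models.sps'.

Each step lives in its own file, and the master file documents the order in which they run.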
6. As you run different versions of models, label them with model numbers
When you’re building models, you’ll often have a progression of different versions. Especially when I have to communicate with a collaborator, I’ve found it invaluable to number these models in my code and print that model number on the output. It makes a huge difference in keeping track of nine different models.
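In SPSS, a TITLE command before each model prints the model number right on the output. The models below are purely illustrative:

TITLE 'Model 1: Income by marital status'.
UNIANOVA income BY Mar4cat
  /PRINT=PARAMETER.

TITLE 'Model 2: Income by marital status, adjusted for age'.
UNIANOVA income BY Mar4cat WITH age
  /PRINT=PARAMETER.

When a collaborator asks about Model 2, you’re both looking at the same block of output.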
7. As you go along with different analyses, keep your syntax clean, even if the output is a mess.
Data analysis is a bit of an iterative process. You try something, discover errors, realize that variable didn’t work, and try something else. Yes, base it on theory and have a clear analysis plan, but even so, the first analyses you run won’t be your last.
Especially if you make mistakes as you go along (as I inevitably do), your output file gets pretty littered with results you don’t want to keep. You could clean it up as you go along, but I find that’s inefficient. Instead, I try to keep my code clean, with only the error-free analyses that I ultimately want to use. That lets me try whatever I need to without worry. Then at the end, I delete the entire output and just rerun all the code.
One caveat here: you may not want to take this approach if you have VERY computing-intensive analyses, like a generalized linear mixed model with crossed random effects on a large data set. If your code takes more than 20 minutes to run, this won’t be more efficient.
8. Use titles and comments liberally
I’m sure you’ve heard before that you should use lots of comments in your syntax code. But use titles too. Both SAS and SPSS have title commands that allow titles to be printed right on the output. This is especially helpful for naming and numbering all those models in #6.
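Here’s a sketch of how titles and comments can work together; the wording and the model are invented for illustration:

TITLE 'Project X: marital status and income'.
SUBTITLE 'Model 3: adds age by marital status interaction'.
* Model 3 tests whether the age effect differs across marital status groups.
UNIANOVA income BY Mar4cat WITH age
  /DESIGN=Mar4cat age Mar4cat*age
  /PRINT=PARAMETER.

The comment records why the model exists; the title and subtitle carry that context onto the printed output.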
9. Name output, log, and programs the same
Since you’ve split your programs into separate files for data manipulation, descriptives, initial models, etc., you’re going to end up with a lot of files. What I do is name each output the same name as the program file. (And if I’m in SAS, the log too. Yes, save the log.)
Yes, that means making sure you have a separate output for each section. While it may seem like extra work, it can make looking at each output less overwhelming for anyone you’re sharing it with.
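In SPSS, you can even save the output from syntax at the end of each program file, which keeps the names matched automatically (the file name is made up):

OUTPUT SAVE OUTFILE='C:\project\02_descriptives.spv'.

In SAS, ODS and PROC PRINTTO can route the output and the log to files named after the program.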
Biftu Geda says
Thank you very much and well received
David Pressley says
Don’t forget about the superpowers of Git for file-level version control. When you write, you are indeed doing “software development”. Doesn’t mean you are creating a website to sell your influencer product line of goopy, caffeinated eye creams or Andrew Tate – Joe Rogan collab “CBD” energy drinks. It just means you recognize the importance of documentation–inline and at the file level. GitHub is your best friend. You already understand the irritating nuance of SAS and the hairball syntax of unique, snowflakey R packages, so knowing a handful of `git` commands, some infrastructure and process set-up shouldn’t be too big a leap, and will pay huge dividends in the long term for your practice.
It’s a different way to think about versioning, but means you can have a single analysis file (project1-analysis.sas) with changes managed by git, instead of multiple files nested in multiple folders with names like: project1-analysis-v1-FINAL.sas or project1-analysis-v1-FINAL-v2-DRAFT.sas or project1-analysis-v1-FINAL-FINAL-v2.2-FINAL-DRAFT-DPedits.sas
For analyses that need to be finalized, GitHub Actions can be configured with your testing code and log files for automated testing workflows and approvals.
For analyses that are final, I usually tag my source code, input datasets, and final datasets with Git LFS (Large File Storage). In this use case you can store final datasets and easily collaborate with Gwen, Andrew, and Joe with minimal friction.
Get in touch if you need help, I would love to bring folks into the fold!
Crispin Matere says
Karen,
An awesome article. Regardless of the approach or software used, the main idea is to be able to track the steps that led to both your intermediate and final analysis results. Thank you so much, Karen, and everyone else who has offered suggestions.
Clare says
These are some great points! I have to admit, though, I adopted most of them many years ago but have STILL often found myself in a tangled mess when going back to previous analyses two years later, or even two weeks! I’ve learned the hard way to journal my thought journey too, and to include with it the exact filenames and variable names that I used each time I’ve run a session. So I can actually go back (and also search the journal text; it can just be an unsophisticated Word file, and once upon a time it was a hardback notebook with sticky tabs) and see exactly what I did, what problems I ran into, and what I concluded I should do about them.
In other words, I’m just as unlikely to remember my own thinking as my crazy variable names; it ALL needs jotting down (or maybe dictating). Tedious, but so much better than kicking myself!
Finally, I once spent three years analysing a dataset in Stata and not quite finishing, before changing employer and no longer having access to it. D’oh! So I’d also recommend saving a backup of important logfiles, codebook files, output etc. in a *generically* readable form such as PDF or text. It’s less of a problem than it used to be as the packages are more interoperable now, but still might be wise.
Silvia S says
This is great. Some of these suggestions I already follow, but some are new to me. I will share this with my students!
Jon K Peck says
SPSS
It’s not the Draft Viewer. It’s just the Viewer. The Draft Viewer went away many years ago.
Display Commands in the Log – yes definitely. But whether the Log is shown or not, it’s always there and can be viewed when needed.
Don’t forget about the journal file. By default, it is always watching and recording the syntax.
Use the CODEBOOK procedure (Analyze > Reports > Codebook) and use the DOCUMENT command to, well, document the data. Those notes are saved with the data file.
Use custom variable and datafile attributes to supplement what you can do with variable and value labels. These can be anything you want and are saved with the data.
Use the TEXT extension command in syntax to add comments that will automatically appear in the output when run. Better than titles.
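A quick sketch of what a few of these commands can look like; the documentation text, attribute names, and values are invented:

DOCUMENT Mar4cat was derived from the original 7-category marital status item.
CODEBOOK Mar4cat age income.
VARIABLE ATTRIBUTE VARIABLES=Mar4cat ATTRIBUTE=DerivedFrom('marstat7').
DATAFILE ATTRIBUTE ATTRIBUTE=LastCleaned('2024-01-15').

The DOCUMENT text and the attributes are saved with the .sav file, and CODEBOOK prints the variable and value labels into the output.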
Karen Grace-Martin says
Thanks, Jon. I’ll update that. These are all great suggestions.
Ian Shannon says
Karen,
This is absolutely fantastic advice. I always struggle to follow most of these strategies, some very consistently, others less so.
A suggestion for keeping track of model iterations is to use git. GIT was set up for software development and although statistical model building etc is a different activity, the parallels with software development are close. GIT will require an added discipline of file location, but the use of plain text files (or files that are readable by a text editor) for most stats work make GIT very suitable in helping to keep track of development and experimentation.
Ian
360DigiTMG says
Nice article. I liked it very much. All the information you have given is really helpful for my research. Keep posting your views.