Regression is one of the most common analyses in statistics. Most of us learn it in grad school, and we learn it in a specific software package. Maybe SPSS, maybe something else. The thing is, depending on your training and when you did it, there is SO MUCH to know about doing a regression analysis in SPSS.
You may have heard that using SPSS syntax is more efficient, gives you more control, and ultimately saves you time and frustration. It’s all true.
…And yet you probably use SPSS because you don’t want to code. You like the menus.
I get it.
I like the menus, too, and I use them all the time.
But I use syntax just as often.
At some point, if you want to do serious data analysis, you have to start using syntax.
In this 10-part tutorial, you will learn how to get started using SPSS for data preparation, analysis, and graphing. It will give you the skills to start using SPSS on your own. You will need a license for SPSS and to have it installed before you begin.
Updated 8/18/2021
I was recently asked whether to report the means from descriptive statistics or the Estimated Marginal Means from SPSS GLM.
The short answer: Report the Estimated Marginal Means (almost always).
To understand why, and the rare cases where it doesn’t matter, let’s dig in a bit with a longer answer.
First, a marginal mean is the mean response for each category of a factor, adjusted for any other variables in the model (more on this later).
Just about any time you include a factor in a linear model, you’ll want to report the mean for each group. The F test of the model in the ANOVA table will give you a p-value for the null hypothesis that those means are equal. And that’s important.
But you need to see the means and their standard errors to interpret the results. The difference in those means is what measures the effect of the factor. While that difference can also appear in the regression coefficients, looking at the means themselves gives you context and makes interpretation more straightforward. This is especially true if you have interactions in the model.
Some basic info about marginal means
- In SPSS menus, they’re in the Options button; in SPSS syntax, the subcommand is EMMEANS (there’s a minimal example just after this list).
- These are called LSMeans in SAS, margins in Stata, and emmeans in R’s emmeans package.
- Although I’m talking about them in the context of linear models, all the software has them in other types of models, including linear mixed models, generalized linear models, and generalized linear mixed models.
- They are also called predicted means, and model-based means. There are probably a few other names for them, because that’s what happens in statistics.
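For example, here is a minimal GLM command that requests the estimated marginal means of a factor X for an outcome Y (placeholder names, just to show where EMMEANS goes):

GLM Y BY X
/EMMEANS=TABLES(X)
/DESIGN=X.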
When marginal means are the same as observed means
Let’s consider a few different models. In all of these, our factor of interest, X, is a categorical predictor for which we’re calculating Estimated Marginal Means. We’ll call it the Independent Variable (IV).
Model 1: No other predictors
If you have just a single factor in the model (a one-way ANOVA), the marginal means and observed means will be the same.
Observed means are what you would get if you simply calculated the mean of Y for each group of X.
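To see both side by side, here’s a sketch using the same placeholder names. The first command gives the observed mean of Y for each group of X; the second fits the one-way ANOVA, and its estimated marginal means will match those observed means.

MEANS TABLES=Y BY X
/CELLS=MEAN COUNT STDDEV.

UNIANOVA Y BY X
/EMMEANS=TABLES(X)
/DESIGN=X.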
Model 2: Other categorical predictors, and all are balanced
Likewise, if you have other factors in the model and all of those factors are balanced, the estimated marginal means will be the same as the observed means you get from descriptive statistics.
Model 3: Other categorical predictors, unbalanced
Now things change. The marginal mean for our IV is different from the observed mean. It’s the mean for each group of the IV, averaged across the groups of the other factor.
When you’re observing the category an individual is in, you will pretty much never get balanced data. Even when you’re doing random assignment, balanced groups can be hard to achieve.
In this situation, the observed means will be different from the marginal means. So report the marginal means. They better reflect the main effect of your IV: the effect of that IV, averaged across the groups of the other factor.
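In syntax, it’s the same EMMEANS request, just within a two-factor model. Here’s a sketch with a second factor Z (another placeholder name); the marginal means of X average over the groups of Z, whether or not the design is balanced.

UNIANOVA Y BY X Z
/EMMEANS=TABLES(X)
/DESIGN=X Z X*Z.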
Model 4: A continuous covariate
When you have a covariate in the model, the estimated marginal means are adjusted for the covariate. Again, they’ll differ from the observed means.
It works a little bit differently than it does with a factor. For a covariate, the estimated marginal mean is the mean of Y for each group of the IV at one specific value of the covariate.
By default in most software, this one specific value is the mean of the covariate. Therefore, you interpret the estimated marginal means of your IV as the mean of each group at the mean of the covariate.
This, of course, is the reason for including the covariate in the model: you want to see if your factor still has an effect, beyond the effect of the covariate. You are interested in the adjusted effects in both the overall F-test and the means.
If you just use observed means and there is any association between the covariate and your IV, some of that mean difference will be driven by the covariate.
For example, say your IV is the type of math curriculum taught to first graders. There are two types. And say your covariate is child’s age, which is related to the outcome: math score.
It turns out that curriculum A has slightly older kids and a higher mean math score than curriculum B. Observed means for each curriculum will not account for the fact that the kids who received that curriculum were a little older. Marginal means will give you the mean math score for each group at the same age. In essence, it sets Age at a constant value before calculating the mean for each curriculum. This gives you a fairer comparison between the two curricula.
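To make “adjusted” concrete, here is the algebra for this simple case (one factor, one covariate, no interaction), writing b for the common within-group slope of math score on age:

\[
\bar{Y}_{\text{adjusted}} = \bar{Y}_{\text{curriculum}} - b\,(\overline{\text{Age}}_{\text{curriculum}} - \overline{\text{Age}}_{\text{overall}})
\]

The curriculum with the slightly older kids gets its mean pulled down a little, and the other gets pulled up, which is exactly the fairer comparison described above.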
But there is another advantage here. Although the default value of the covariate is its mean, you can change this default. This is especially helpful for interpreting interactions, where you can see the means for each group of the IV at both high and low values of the covariate.
In SPSS, you can change this default using syntax, but not through the menus.
For example, in this syntax, the EMMEANS statement reports the marginal means of Y at each level of the categorical variable X at the mean of the Covariate V.
UNIANOVA Y BY X WITH V
/INTERCEPT=INCLUDE
/EMMEANS=TABLES(X) WITH(V=MEAN)
/DESIGN=X V.
If you instead want to evaluate the effect of X at a specific value of V, say 50, you can just change the EMMEANS statement to:
/EMMEANS=TABLES(X) WITH(V=50)
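And if your model includes the factor-by-covariate interaction, you can request the marginal means at more than one value of V in the same run. A sketch, using hypothetical values of 40 and 60 for V:

UNIANOVA Y BY X WITH V
/EMMEANS=TABLES(X) WITH(V=40)
/EMMEANS=TABLES(X) WITH(V=60)
/DESIGN=X V X*V.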
Another good reason to use syntax.
Every so often I point out to a client who exclusively uses menus in SPSS that they can (and should) hit the Paste button instead of OK. Many times, the client never realized it was there.
I am here today to tell you that it is there, and it is wonderful. For a few reasons.
When you use the menus in SPSS, you’re really taking a shortcut. You’re telling SPSS which syntax commands, along with which options, you want to run.
Clicking OK at the end of a dialog box will run the menu options you just picked. You may never see the underlying commands that SPSS just ran.
If instead you hit Paste, those commands won’t automatically run. Instead, the code to run them will be pasted into a syntax window, where you can run it, edit it, and save it.
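For example, if you run Analyze > Descriptive Statistics > Descriptives, pick a couple of variables (hypothetical names here), and hit Paste instead of OK, you’ll get something like this in a syntax window, ready to run, edit, or save:

DESCRIPTIVES VARIABLES=Age Income
/STATISTICS=MEAN STDDEV MIN MAX.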
One of the places where SPSS syntax excels at efficiency is creating new variables. This is especially true when you’re creating a LOT of new variables, but even one or two can be quicker to create with syntax than through the menus.
And just as importantly, you’ll have documentation for exactly how you created them. (You think you’ll remember now, but 75 new variables later, you’ll thank me).
So once you create a new variable, you should of course immediately assign a Variable Label and, if appropriate, Value Labels and Missing Data Codes, using syntax.
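Here’s a small sketch of that workflow, with hypothetical variable names:

* Create the new variables.
COMPUTE AnxietyMean = MEAN(Anx1, Anx2, Anx3).
RECODE Age (LO THRU 39=1) (40 THRU HI=2) INTO AgeGroup.

* Document them right away.
VARIABLE LABELS
AnxietyMean 'Mean of anxiety items Anx1 to Anx3'
/AgeGroup 'Age group (under 40 vs. 40 and over)'.
VALUE LABELS AgeGroup 1 'Under 40' 2 '40 and over'.
MISSING VALUES AgeGroup (9).
EXECUTE.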
Another thing that helps keep your new variable clean and interpretable is to assign its format. The default format is F8.2, which indicates a numeric value with a total width of 8 characters, 2 of them after the decimal point.
You could go into the Variable View screen and manually change the Width and Decimals columns, which set the total number of characters and (for numeric variables) how many of them come after the decimal point.
But why do all that when you can just use a single command to define multiple variables?
The syntax command is FORMATS. Here is the command for some common formats:
FORMATS NumVar1 NumVar2 (F5.0)
/NumVar3 (F6.1)
/StringVar1 (A15).
You can see the FORMATS command is followed by the variable names, then the format in parentheses.
Numeric variables NumVar1 and NumVar2 will both get the same format: 5 digits, with nothing after the decimal.
Numeric variable NumVar3 will have 6 digits total, with one after the decimal.
And the string variable StringVar1 (i.e., its values can contain letters) is 15 characters wide.
This will get you started, but you can get all the specifics in the FORMATS section of the Command Syntax Reference, which is included in the SPSS help.
[Note: Edited explanation of F6.1 to be 6 digits total, not 6 digits before the decimal.]