Updated 8/18/2021
I was recently asked whether to report the means from descriptive statistics or the Estimated Marginal Means from SPSS GLM.
The short answer: Report the Estimated Marginal Means (almost always).
To understand why, and the rare cases where it doesn’t matter, let’s dig in a bit with a longer answer.
First, a marginal mean is the mean response for each category of a factor, adjusted for any other variables in the model (more on this later).
Just about any time you include a factor in a linear model, you’ll want to report the mean for each group. The F test for that factor in the ANOVA table gives you a p-value for the null hypothesis that those group means are equal. And that’s important.
But you need to see the means and their standard errors to interpret the results. The difference in those means is what measures the effect of the factor. While that difference also appears in the regression coefficients, looking at the means themselves gives you context and makes interpretation more straightforward. This is especially true if you have interactions in the model.
Some basic info about marginal means
- In SPSS’s menus, you’ll find them under the Options button; in SPSS syntax, they’re requested with the EMMEANS subcommand.
- They are called LSMeans in SAS, margins in Stata, and emmeans in R’s emmeans package.
- Although I’m talking about them in the context of linear models, all of these packages offer them in other types of models as well, including linear mixed models, generalized linear models, and generalized linear mixed models.
- They are also called predicted means and model-based means. There are probably a few other names for them, because that’s what happens in statistics.
When marginal means are the same as observed means
Let’s consider a few different models. In all of these, our factor of interest, X, is a categorical predictor for which we’re calculating Estimated Marginal Means. We’ll call it the Independent Variable (IV).
Model 1: No other predictors
If you have just a single factor in the model (a one-way ANOVA), the estimated marginal means and the observed means will be the same.
Observed means are what you would get if you simply calculated the mean of Y for each group of X.
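For example, here’s a minimal sketch in SPSS syntax (Y and X are hypothetical variable names). In this one-factor case, the observed means from MEANS and the estimated marginal means from EMMEANS will match:

MEANS TABLES=Y BY X
/CELLS=MEAN COUNT STDDEV.

UNIANOVA Y BY X
/EMMEANS=TABLES(X)
/DESIGN=X.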
Model 2: Other categorical predictors, and all are balanced
Likewise, if you have other factors in the model and all of them are balanced (equal sample sizes in every cell), the estimated marginal means will be the same as the observed means you get from descriptive statistics.
Model 3: Other categorical predictors, unbalanced
Now things change. The marginal mean for our IV is different from the observed mean. It’s the mean of Y for each group of the IV, averaged across the groups of the other factor: an unweighted average of the cell means. The observed mean, in contrast, weights each cell mean by its sample size.
When you’re observing the category an individual is in, you will pretty much never get balanced data. Even when you’re doing random assignment, balanced groups can be hard to achieve.
In this situation, the observed means will be different from the marginal means. So report the marginal means. They better reflect the main effect of your IV: the effect of that IV, averaged across the groups of the other factor.
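To see the difference, take some hypothetical numbers: suppose for one group of the IV the cell means are 50 in one group of the other factor (n = 10) and 70 in the other (n = 30). The observed mean is the weighted average, (10×50 + 30×70)/40 = 65, while the marginal mean is the unweighted average, (50 + 70)/2 = 60. Here’s a sketch of the two-factor model in SPSS syntax (Y, X, and Z are hypothetical variable names; with balanced data the EMMEANS table would match the observed means, but not here):

UNIANOVA Y BY X Z
/EMMEANS=TABLES(X)
/DESIGN=X Z X*Z.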
Model 4: A continuous covariate
When you have a covariate in the model the estimated marginal means will be adjusted for the covariate. Again, they’ll differ from observed means.
It works a little bit differently than it does with a factor. For a covariate, the estimated marginal mean is the mean of Y for each group of the IV at one specific value of the covariate.
By default in most software, this one specific value is the mean of the covariate. Therefore, you interpret the estimated marginal means of your IV as the mean of each group at the mean of the covariate.
This, of course, is the reason for including the covariate in the model–you want to see if your factor still has an effect, beyond the effect of the covariate. You are interested in the adjusted effects in both the overall F-test and in the means.
If you just use observed means and there is any association between the covariate and your IV, some of the mean difference between groups will be driven by the covariate.
For example, say your IV is the type of math curriculum taught to first graders. There are two types. And say your covariate is child’s age, which is related to the outcome: math score.
It turns out that curriculum A has slightly older kids and a higher mean math score than curriculum B. Observed means for each curriculum will not account for the fact that the kids who received that curriculum were a little older. Marginal means will give you the mean math score for each group at the same age. In essence, it sets Age at a constant value before calculating the mean for each curriculum. This gives you a fairer comparison between the two curricula.
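To make that concrete, here’s a sketch of the SPSS syntax for this example (the variable names MathScore, Curriculum, and Age are hypothetical):

UNIANOVA MathScore BY Curriculum WITH Age
/EMMEANS=TABLES(Curriculum) WITH(Age=MEAN)
/DESIGN=Curriculum Age.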
But there is another advantage here. Although the default value of the covariate is its mean, you can change this default. This is especially helpful for interpreting interactions, where you can see the means for each group of the IV at both high and low values of the covariate.
In SPSS, you can change this default using syntax, but not through the menus.
For example, in this syntax, the EMMEANS statement reports the marginal means of Y at each level of the categorical variable X, evaluated at the mean of the covariate V.
UNIANOVA Y BY X WITH V
/INTERCEPT=INCLUDE
/EMMEANS=TABLES(X) WITH(V=MEAN)
/DESIGN=X V.
If instead you wanted to evaluate the effect of X at a specific value of V, say 50, you can just change the EMMEANS statement to:
/EMMEANS=TABLES(X) WITH(V=50)
Another good reason to use syntax.
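For example, if the model includes an X*V interaction, you can request the marginal means of X at both a low and a high value of V by adding a second EMMEANS statement (the values 40 and 60 here are hypothetical; choose values meaningful for your covariate):

UNIANOVA Y BY X WITH V
/EMMEANS=TABLES(X) WITH(V=40)
/EMMEANS=TABLES(X) WITH(V=60)
/DESIGN=X V X*V.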
The practice of choosing predictors for a regression model, called model building, is an area of real craft.
There are many possible strategies and approaches, and they all work well in some situations. Every one of them requires making a lot of decisions along the way. As you make decisions, one danger to look out for is overfitting: creating a model that is too complex for the data.
It’s easy to make things complex without meaning to. Especially in statistical analysis.
Sometimes that complexity is unavoidable. You have ethical and practical constraints on your study design and variable measurement. Or the data just don’t behave as you expected. Or the only research question of interest is one that demands many variables.
But sometimes it isn’t. Seemingly innocuous decisions lead to complicated analyses. These decisions occur early: in the study design, the research questions, or the choice of variables.
Even if you’ve never heard the term Generalized Linear Model, you may have run one. It’s a term for a family of models that includes logistic and Poisson regression, among others.
It’s a small leap to generalized linear models if you already understand linear models. Many, many concepts are the same in both types of models.
But one thing that’s perplexing to many is why generalized linear models have no error term, like linear models do.
Just about everyone who does any data analysis has used a chi-square test. Probably because there are quite a few of them, and they’re all useful.
But it gets confusing because very often you’ll just hear them called “Chi-Square test” without their full, formal name. And without that context, it’s hard to tell exactly what hypothesis that test is testing.
Missing data are a widespread problem, as most researchers can attest. Whether data are from surveys, experiments, or secondary sources, missing data abound.
But what’s the impact on the results of statistical analysis? That depends on two things: the mechanism that led the data to be missing and the way in which the data analyst deals with it.
Here are a few common situations:
Subjects in longitudinal studies often start but drop out before the study is completed. There are many reasons for this: they have moved out of the area (nothing related to the study), have died (hopefully not related to the study), no longer see a personal benefit to participating, or do not like the effects of the treatment.
Surveys suffer missing data in many ways: participants may refuse to answer the entire survey or parts of it, may not know the answer to an item, or may accidentally skip one. Some survey researchers even design the study so that some questions are asked of only a subset of participants.
Experimental studies have missing data when a researcher is simply unable to collect an observation. Bad weather may render observation impossible in field experiments. A researcher becomes sick, or equipment fails. And data may be missing in any type of study due to accident or data-entry error: a researcher drops a tray of test tubes, or a data file becomes corrupt.
Most researchers are very familiar with one (or more) of these situations.
Why Missing Data Matters
Missing data cause problems because most statistical procedures require a value for each variable. When a data set is incomplete, the data analyst has to decide how to deal with it.
The most common decision is to use complete case analysis (also called listwise deletion). This means analyzing only the cases with complete data. Individuals with data missing on any variables are dropped from the analysis.
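For example, SPSS’s REGRESSION command applies listwise deletion by default; you can make it explicit with the MISSING subcommand (Y, X1, and X2 are hypothetical variable names):

REGRESSION
/MISSING LISTWISE
/DEPENDENT Y
/METHOD=ENTER X1 X2.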
It has advantages: it is simple, easy to use, and the default in most statistical packages. But it has limitations.
It can substantially lower the sample size, leading to a severe lack of power. This is especially true if there are many variables involved in the analysis, each with data missing for a few cases.
Possibly worse, it can also lead to biased results, depending on why and in which patterns the data are missing.
Missing Data Mechanisms
The types of missing data fit into three classes, which are based on the relationship between the missing data mechanism and the missing and observed values. These badly-named classes are important to understand because the problems caused by missing data and the solutions to these problems are different for the three classes.
Missing Completely at Random
The first is Missing Completely at Random (MCAR). MCAR means that the missing data mechanism is unrelated to the values of any variables, whether missing or observed.
Data that are missing because a researcher dropped the test tubes or survey participants accidentally skipped questions are likely to be MCAR.
If the observed values are essentially a random sample of the full data set, complete case analysis gives the same results as the full data set would have. Unfortunately, most missing data are not MCAR.
Missing Not at Random
At the opposite end of the spectrum is Missing Not at Random. Although you’ll most often see it called this, I prefer the term Non-Ignorable (NI). It’s a name that is not so easy to confuse with the other types, and it tells you its primary feature: the missing data mechanism is related to the missing values themselves.
And this is something you, the data analyst, can’t ignore without biasing results.
It occurs sometimes when people do not want to reveal something very personal or unpopular about themselves. For example, if individuals with higher incomes are less likely to reveal them on a survey than are individuals with lower incomes, the missing data mechanism for income is non-ignorable. Whether income is missing or observed is related to its value.
But that’s not the only example. When the sickest patients drop out of a longitudinal study testing a drug that’s supposed to make them healthy, that’s non-ignorable.
Or an instrument can’t detect low readings, so it gives you an error instead. Also non-ignorable.
Complete case analysis can give highly biased results for NI missing data. If proportionally more low and moderate income individuals are left in the sample because high income people are missing, an estimate of the mean income will be lower than the actual population mean.
Missing at Random
In between these two extremes is Missing at Random (MAR). MAR requires that the cause of the missing data is unrelated to the missing values but may be related to the observed values of other variables.
MAR means that the missing values are related to observed values of other variables. As an example of MAR missing data, missing income data may be unrelated to the actual income values but related to education. Perhaps people with more education are less likely to reveal their income than those with less education.
A key distinction is whether the mechanism is ignorable (i.e., MCAR or MAR) or non-ignorable. There are excellent techniques for handling ignorable missing data. Non-ignorable missing data are more challenging and require a different approach.
First Published 2/24/2014;
Updated 5/11/21 to give more detail.