Why report estimated marginal means?

August 18th, 2021

Updated 8/18/2021

I was recently asked whether to report the means from descriptive statistics or the Estimated Marginal Means from SPSS GLM.

The short answer: Report the Estimated Marginal Means (almost always).

To understand why, and the rare cases where it doesn’t matter, let’s dig in a bit with a longer answer.

First, a marginal mean is the mean response for each category of a factor, adjusted for any other variables in the model (more on this later).

Just about any time you include a factor in a linear model, you’ll want to report the mean for each group. The F test of the model in the ANOVA table will give you a p-value for the null hypothesis that those means are equal. And that’s important.

But you need to see the means and their standard errors to interpret the results. The difference in those means is what measures the effect of the factor. While that difference also appears in the regression coefficients, looking at the means themselves gives you context and makes interpretation more straightforward. This is especially true if you have interactions in the model.

Some basic info about marginal means

  • In the SPSS menus, they’re under the Options button; in SPSS syntax, they’re the EMMEANS subcommand.
  • These are called LSMeans in SAS, margins in Stata, and emmeans in R’s emmeans package.
  • Although I’m talking about them in the context of linear models, all the software has them in other types of models, including linear mixed models, generalized linear models, and generalized linear mixed models.
  • They are also called predicted means and model-based means. There are probably a few other names for them, because that’s what happens in statistics.

When marginal means are the same as observed means

Let’s consider a few different models. In all of these, our factor of interest, X, is a categorical predictor for which we’re calculating Estimated Marginal Means. We’ll call it the Independent Variable (IV).

Model 1: No other predictors

If you have just a single factor in the model (a one-way ANOVA), marginal means and observed means will be the same.

Observed means are what you would get if you simply calculated the mean of Y for each group of X.
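To see this concretely, here’s a minimal sketch using R’s emmeans package (mentioned above). The data, values, and variable names are all invented for illustration:

library(emmeans)

# Made-up one-way data: a single factor X and outcome Y
set.seed(42)
dat <- data.frame(X = rep(c("A", "B", "C"), each = 10))
dat$Y <- rnorm(30, mean = c(A = 5, B = 7, C = 6)[dat$X])

fit <- lm(Y ~ X, data = dat)

aggregate(Y ~ X, data = dat, FUN = mean)   # observed group means
emmeans(fit, ~ X)                          # estimated marginal means: same values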

Model 2: Other categorical predictors, and all are balanced

Likewise, if you have other factors in the model and all of them are balanced (equal sample sizes in every combination of groups), the estimated marginal means will be the same as the observed means you get from descriptive statistics.

Model 3: Other categorical predictors, unbalanced

Now things change. The marginal mean for our IV is different from the observed mean. It’s the mean for each group of the IV, averaged across the groups of the other factor.

When you’re observing the category an individual is in, you will pretty much never get balanced data. Even when you’re doing random assignment, balanced groups can be hard to achieve.

In this situation, the observed means will be different from the marginal means. So report the marginal means. They better reflect the main effect of your IV: the effect of that IV averaged across the groups of the other factor.
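Here’s a hedged sketch of the unbalanced case in R, again with invented data. A second factor G is unevenly split across the groups of X, so the observed and marginal means diverge:

library(emmeans)

# Made-up unbalanced data: factor G is distributed unevenly across X
set.seed(1)
dat <- data.frame(
  X = rep(c("A", "B"), times = c(30, 20)),
  G = c(rep(c("M", "F"), times = c(25, 5)),    # group A is mostly M
        rep(c("M", "F"), times = c(5, 15)))    # group B is mostly F
)
dat$Y <- rnorm(50, mean = ifelse(dat$G == "M", 10, 14))

fit <- lm(Y ~ X + G, data = dat)

aggregate(Y ~ X, data = dat, FUN = mean)   # observed means, contaminated by G
emmeans(fit, ~ X)                          # marginal means, averaged evenly over G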

Model 4: A continuous covariate

When you have a covariate in the model, the estimated marginal means are adjusted for the covariate. Again, they’ll differ from the observed means.

It works a little bit differently than it does with a factor. For a covariate, the estimated marginal mean is the mean of Y for each group of the IV at one specific value of the covariate.

By default in most software, this one specific value is the mean of the covariate. Therefore, you interpret the estimated marginal means of your IV as the mean of each group at the mean of the covariate.

This, of course, is the reason for including the covariate in the model: you want to see whether your factor still has an effect, beyond the effect of the covariate. You are interested in the adjusted effects, both in the overall F-test and in the means.

If you just use the observed means and there is any association between the covariate and your IV, some of the mean difference between groups will be driven by the covariate.

For example, say your IV is the type of math curriculum taught to first graders. There are two types. And say your covariate is child’s age, which is related to the outcome: math score.

It turns out that curriculum A has slightly older kids and a higher mean math score than curriculum B. Observed means for each curriculum will not account for the fact that the kids who received curriculum A were a little older. Marginal means will give you the mean math score for each group at the same age. In essence, they hold Age at a constant value before calculating the mean for each curriculum. This gives you a fairer comparison between the two curricula.
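Here’s a small illustration of that adjustment in R; the age effect, sample sizes, and score values are all made up:

library(emmeans)

# Made-up data mirroring the example: curriculum A has slightly older kids
set.seed(7)
dat <- data.frame(
  Curriculum = rep(c("A", "B"), each = 25),
  Age = c(rnorm(25, mean = 6.8, sd = 0.3),    # A: slightly older
          rnorm(25, mean = 6.4, sd = 0.3))    # B: slightly younger
)
dat$MathScore <- 50 + 10 * dat$Age + rnorm(50, sd = 3)  # score driven by Age

fit <- lm(MathScore ~ Curriculum + Age, data = dat)

aggregate(MathScore ~ Curriculum, data = dat, FUN = mean)  # raw means favor A
emmeans(fit, ~ Curriculum)   # adjusted means: both groups evaluated at mean Age

Because the simulated score difference is driven entirely by Age, the adjusted means come out nearly equal even though the raw means do not.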

But there is another advantage here. Although the default value of the covariate is its mean, you can change this default.  This is especially helpful for interpreting interactions, where you can see the means for each group of the IV at both high and low values of the covariate.

In SPSS, you can change this default using syntax, but not through the menus.

For example, in this syntax, the EMMEANS subcommand reports the marginal means of Y at each level of the categorical variable X at the mean of the covariate V.

UNIANOVA Y BY X WITH V
/INTERCEPT=INCLUDE
/EMMEANS=TABLES(X) WITH(V=MEAN)
/DESIGN=X V.

If instead you want to evaluate the effect of X at a specific value of V, say 50, you can just change the EMMEANS statement to:

/EMMEANS=TABLES(X) WITH(V=50)

Another good reason to use syntax.
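If you work in R’s emmeans package instead, the same flexibility is available through its at argument. Here’s a sketch with invented data, using the same variable names as the SPSS example:

library(emmeans)

# Made-up data matching the SPSS example: outcome Y, factor X, covariate V
set.seed(3)
dat <- data.frame(X = rep(c("A", "B"), each = 20), V = runif(40, 30, 70))
dat$Y <- ifelse(dat$X == "A", 2, 5) + 0.1 * dat$V + rnorm(40)

fit <- lm(Y ~ X + V, data = dat)

emmeans(fit, ~ X)                      # default: means of X at the mean of V
emmeans(fit, ~ X, at = list(V = 50))   # means of X at V = 50, like WITH(V=50)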


Overfitting in Regression Models

August 9th, 2021

The practice of choosing predictors for a regression model, called model building, is an area of real craft.

There are many possible strategies and approaches, and they all work well in some situations. Every one of them requires making a lot of decisions along the way. As you make decisions, one danger to look out for is overfitting: creating a model that is too complex for the data. (more…)


Member Training: An Introduction into the Grammar of Graphics

June 1st, 2021

As it has been said, a picture is worth a thousand words, and so it is with graphics too. A well-constructed graph can summarize information collected from tens, hundreds, or even thousands of data points. But not every graph has the same power to convey complex information clearly. (more…)


Member Training: Statistical Contrasts

March 31st, 2021


Statistical contrasts are a tool for testing specific hypotheses and model effects, particularly comparing specific group means.

(more…)


Member Training: Goodness of Fit Statistics

March 4th, 2021


What are goodness of fit statistics? Is the definition the same for all types of statistical model? Do we run the same tests for all types of statistical model?

(more…)


Confusing Statistical Term #9: Multiple Regression Model and Multivariate Regression Model

February 20th, 2021

Much like General Linear Model and Generalized Linear Model in #7, there are many examples in statistics of terms with (ridiculously) similar names, but nuanced meanings.

Today I talk about the difference between multivariate and multiple, as they relate to regression.

Multiple Regression

A regression analysis with one dependent variable and eight independent variables is NOT a multivariate regression model.  It’s a multiple regression model.

And believe it or not, it’s considered a univariate model.

This is especially important to remember if you’re an SPSS user. Choose Univariate GLM (General Linear Model) for this model, not Multivariate.

I know this sounds crazy and misleading. Why would a model that contains nine variables (eight Xs and one Y) be considered univariate?

It’s because of the fundamental idea in regression that Xs and Ys aren’t the same. We’re using the Xs to understand the mean and variance of Y. This is why the residuals in a linear regression are differences between predicted and actual values of Y. Not X.
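A quick check in R, using the built-in mtcars data, makes the point: however many Xs you add, the residuals are still computed on the single Y.

# One Y (mpg) and two Xs (wt, hp): still a univariate model
fit <- lm(mpg ~ wt + hp, data = mtcars)

# Residuals are observed Y minus fitted Y, never anything about the Xs
all.equal(resid(fit), mtcars$mpg - fitted(fit))   # TRUE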

(And of course, there is an exception, called Type II or Major Axis linear regression, where X and Y are not distinct. But in most regression models, Y has a different role than X).

It’s the number of Ys that tells you whether it’s a univariate or multivariate model. That said, other than in SPSS, I haven’t seen anyone use the term univariate to refer to this model in practice. Instead, the default assumption is that a regression model has one Y, so the focus is on how many Xs the model has. This leads us to…

Simple Regression: A regression model with one Y (dependent variable) and one X (independent variable).

Multiple Regression: A regression model with one Y (dependent variable) and more than one X (independent variables).

References below.

Multivariate Regression

Multivariate analysis ALWAYS describes a situation with multiple dependent variables.

So a multivariate regression model is one with multiple Y variables. It may have one X variable or more than one. It is equivalent to a MANOVA: Multivariate Analysis of Variance.
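In R, for instance, you can fit a multivariate regression by handing lm() a matrix of responses. A minimal sketch, with invented variables:

# Two responses modeled jointly from the same set of predictors
set.seed(9)
dat <- data.frame(x1 = rnorm(40), x2 = rnorm(40))
dat$y1 <- 1 + 2 * dat$x1 + rnorm(40)
dat$y2 <- 3 - 1 * dat$x2 + rnorm(40)

fit <- lm(cbind(y1, y2) ~ x1 + x2, data = dat)   # multivariate regression
anova(fit)   # multivariate tests (Pillai's trace by default), as in MANOVA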

Other examples of Multivariate Analysis include:

  • Principal Component Analysis
  • Factor Analysis
  • Canonical Correlation Analysis
  • Linear Discriminant Analysis
  • Cluster Analysis

But wait. Multivariate analyses like cluster analysis and factor analysis have no dependent variable, per se. So why is the definition about dependent variables?

Well, it’s not really about dependency. It’s about which variables’ means and variances are being analyzed. In a multivariate regression, we have multiple dependent variables, whose joint means are predicted by the one or more Xs. It’s the variance and covariance in the set of Ys that we’re modeling (and estimating in the variance-covariance matrix).

Note: this is actually a situation where the subtle differences in what we call that Y variable can help.  Calling it the outcome or response variable, rather than dependent, is more applicable to something like factor analysis.

So when to choose multivariate GLM?  When you’re jointly modeling the variation in multiple response variables.

References

In response to many requests in the comments, I suggest the following references.  I give the caveat, though, that neither reference compares the two terms directly. They simply define each one. So rather than just list references, I’m going to explain them a little.

  1. Neter, Kutner, Nachtsheim, Wasserman’s Applied Linear Regression Models, 3rd ed. There are, incidentally, newer editions with slight changes in authorship. But I’m citing the one on my shelf.

Chapter 1, Linear Regression with One Independent Variable, includes:

“Regression model 1.1 … is “simple” in that there is only one predictor variable.”

Chapter 6 is titled Multiple Regression – I, and section 6.1 is “Multiple Regression Models: Need for Several Predictor Variables.” Interestingly enough, there is no direct quotable definition of the term “multiple regression.” Even so, it’s pretty clear. Go read the chapter to see.

There is no mention of the term “Multivariate Regression” in this book.

  2. Johnson & Wichern’s Applied Multivariate Statistical Analysis, 3rd ed.

Chapter 7, Multivariate Linear Regression Models, section 7.1 Introduction. Here it says:

“In this chapter we first discuss the multiple regression model for the prediction of a single response. This model is then generalized to handle the prediction of several dependent variables.” (Emphasis theirs).

They finally get to Multivariate Multiple Regression in Section 7.7. Here they “consider the problem of modeling the relationship between m responses, Y1, Y2, …,Ym, and a single set of predictor variables.”

Misuses of the Terms

I’d be shocked, however, if there aren’t some books or articles out there where the terms are used or defined differently from the way I’ve described them here, according to these references. It’s very easy to confuse these terms, even for those of us who should know better.

And honestly, it’s not that hard to just describe the model instead of naming it. “Regression model with four predictors and one outcome” doesn’t take a lot more words and is much less confusing.

If you’re ever confused about the type of model someone is describing to you, just ask.

Read More Explanations of Confusing Statistical Terms.

First Published 4/29/09;
Updated 2/23/21 to give more detail.