Why does ANOVA give main effects in the presence of interactions, but Regression gives marginal effects?
What are the advantages and disadvantages of dummy coding and effect coding? When does it make sense to use one or the other?
How does each one work, really?
In this webinar, we’re going to go step-by-step through a few examples of how dummy and effect coding each tell you different information about the effects of categorical variables, and therefore which one you want in each situation.
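As a quick preview of the difference (this sketch is mine, not taken from the webinar itself), here is how the two coding schemes look in R for a hypothetical three-level factor: dummy (treatment) coding compares each group to a reference group, while effect (sum-to-zero) coding compares each group to the grand mean.

```r
# Hypothetical data: a three-level factor and a continuous outcome
set.seed(1)
group <- factor(rep(c("A", "B", "C"), each = 20))
y     <- rnorm(60, mean = rep(c(10, 12, 15), each = 20))

# Dummy (treatment) coding: the intercept is the mean of the reference group (A),
# and each coefficient is a group's difference from that reference group
summary(lm(y ~ group, contrasts = list(group = "contr.treatment")))

# Effect (sum-to-zero) coding: the intercept is the (unweighted) grand mean,
# and each coefficient is a group's difference from that grand mean
summary(lm(y ~ group, contrasts = list(group = "contr.sum")))
```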
Note: This training is an exclusive benefit to members of the Statistically Speaking Membership Program and part of the Stat’s Amore Trainings Series. Each Stat’s Amore Training is approximately 90 minutes long.
About the Instructor

Karen Grace-Martin helps statistics practitioners gain an intuitive understanding of how statistics is applied to real data in research studies.
She has guided and trained researchers through their statistical analysis for over 15 years as a statistical consultant at Cornell University and through The Analysis Factor. She has master’s degrees in both applied statistics and social psychology and is an expert in SPSS and SAS.
Not a Member Yet?
It’s never too early to set yourself up for successful analysis with support and training from expert statisticians.
Just head over and sign up for Statistically Speaking.
You'll get access to this training webinar, 130+ other stats trainings, and a pathway to work through the trainings you need, plus the expert guidance to build statistical skill through live Q&A sessions and an ask-a-mentor forum.
Not too long ago, a client asked for help with using Spotlight Analysis to interpret an interaction in a regression model.
Spotlight Analysis? I had never heard of it.
As it turns out, it’s a (snazzy) new name for an old way of interpreting an interaction between a continuous and a categorical grouping variable in a regression model. (more…)
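For readers who, like me, hadn't run into the name: here is a rough sketch of the idea in R, using made-up variable names (y, x, group) rather than the client's actual model. You fit the regression with the interaction, then "spotlight" the predicted values for each group at a few focal values of the continuous variable.

```r
# Made-up data: continuous predictor x, a two-group factor, and an interaction
set.seed(2)
d <- data.frame(x = rnorm(100),
                group = factor(sample(c("control", "treatment"), 100, replace = TRUE)))
d$y <- 1 + 0.5 * d$x + 2 * (d$group == "treatment") +
       1.5 * d$x * (d$group == "treatment") + rnorm(100)

fit <- lm(y ~ x * group, data = d)

# Spotlight the interaction: predicted y for each group at low, average, and high x
focal <- expand.grid(x = c(-1, 0, 1), group = levels(d$group))
cbind(focal, yhat = predict(fit, newdata = focal))
```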
In our last article, we learned about model fit in Generalized Linear Models on binary data using the glm() command. We continue with the same glm on the mtcars data set (regressing the vs variable on weight and engine displacement).
Now we want to plot our model, along with the observed data.
Although we ran a model with multiple predictors, it can help interpretation to plot the predicted probability that vs=1 against each predictor separately. So first we fit a glm for only (more…)
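The excerpt cuts off above, but a sketch along those lines might look like this. I'm assuming the single predictor plotted first is weight (wt); that's my guess, not something stated in the excerpt.

```r
# Refit with a single predictor and plot predicted P(vs = 1) against it
fit_wt <- glm(vs ~ wt, data = mtcars, family = binomial)

# Predicted probabilities over a grid of weights, on the response (probability) scale
grid <- data.frame(wt = seq(min(mtcars$wt), max(mtcars$wt), length.out = 100))
grid$prob <- predict(fit_wt, newdata = grid, type = "response")

# Observed 0/1 outcomes with the fitted probability curve overlaid
plot(vs ~ wt, data = mtcars, xlab = "Weight (1000 lbs)", ylab = "P(vs = 1)")
lines(grid$wt, grid$prob)
```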
In the last article, we saw how to create a simple Generalized Linear Model on binary data using the glm() command. We continue with the same glm on the mtcars data set (more…)
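The full article continues past this excerpt. As a reminder of the kind of checks involved, here is a minimal sketch of two common fit summaries for that model, assuming (as the later post describes) that vs is regressed on wt and disp.

```r
# The model carried over from the earlier post
fit <- glm(vs ~ wt + disp, data = mtcars, family = binomial)

# Null vs. residual deviance: how much the predictors improve on an intercept-only model
fit$null.deviance
fit$deviance

# Sequential likelihood-ratio (chi-square) tests for each term
anova(fit, test = "Chisq")
```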
Ordinary Least Squares regression provides linear models of continuous variables. However, much of the data of interest to statisticians and researchers is not continuous, so other methods must be used to create useful predictive models.
The glm() command is designed to perform generalized linear models (regressions) on binary outcome data, count data, probability data, proportion data and many other data types.
In this blog post, we explore the use of R’s glm() command on one such data type. Let’s take a look at a simple example where we model binary data.
(more…)
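Since the excerpt ends before the example itself, here is a minimal sketch of the kind of model the series works with. The predictors wt and disp are taken from the later posts in the series, not from this excerpt.

```r
# Binary outcome: vs (engine shape, 0/1) in the built-in mtcars data
fit <- glm(vs ~ wt + disp, data = mtcars, family = binomial(link = "logit"))

# Coefficients are on the log-odds scale
summary(fit)

# Exponentiate to get odds ratios
exp(coef(fit))
```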
Need to run a logistic regression in SPSS? Turns out, SPSS has a number of procedures for running different types of logistic regression.
Some types of logistic regression can be run in more than one procedure. For some unknown reason, some procedures produce output that others don't. So it's helpful to be able to use more than one.
Logistic Regression
Logistic Regression can be used only for binary dependent (more…)