One of the things I love about MIXED in SPSS is that the syntax is very similar to GLM's. So anyone who is used to writing GLM syntax has only a short jump to writing MIXED.
Which is a good thing, because many of the concepts are a big jump.
And because the MIXED dialogue menus are seriously unintuitive, I’ve concluded you’re much better off using syntax.
I was very happy a few years ago when, with version 19, SPSS introduced generalized linear mixed models, so SPSS users could finally run logistic regression or count models on clustered data.
But then I tried it, and the menus are even less intuitive than in MIXED.
And the syntax isn’t much better. In this case, the syntax structure is quite different from MIXED’s. (more…)
The ICC, or Intraclass Correlation Coefficient, can be very useful in many statistical situations, but especially so in Linear Mixed Models.
Linear Mixed Models are used when there is some sort of clustering in the data.
Two common examples of clustered data include:
- individuals were sampled within sites (hospitals, companies, community centers, schools, etc.). The site is the cluster.
- repeated measures or longitudinal data where multiple observations are collected from the same individual. The individual is the cluster in which multiple observations are (more…)
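For clustered data like the examples above, the ICC is the share of total variance that is due to differences between clusters. A minimal sketch (the `icc` helper and the variance numbers are made up for illustration; in practice the two variances come from your fitted mixed model's random-intercept and residual estimates):

```python
def icc(var_between, var_within):
    """Intraclass correlation: proportion of total variance
    attributable to differences between clusters."""
    return var_between / (var_between + var_within)

# Hypothetical variance components from a random-intercept model:
# between-cluster (random intercept) variance = 4.0,
# within-cluster (residual) variance = 12.0
print(icc(4.0, 12.0))  # 4 / (4 + 12) = 0.25
```

An ICC of 0.25 would mean a quarter of the variation in the outcome sits between clusters, which is well worth modeling.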
I received the following email from a reader after sending out the last article: Opposite Results in Ordinal Logistic Regression—Solving a Statistical Mystery.
And I agreed I’d answer it here in case anyone else was confused.
Karen’s explanations always make the bulb light up in my brain, but not this time.
With either output,
The odds of 1 vs >1 is exp[-2.635] = 0.07, i.e. unlikely to be 1, much more likely (14.3x) to be >1
The odds of ≤2 vs >2 is exp[-0.812] = 0.44, i.e. somewhat unlikely to be ≤2, more likely (2.3x) to be >2
SAS – using the usual regression equation
If NAES increases by 1 these odds become (more…)
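The reader's arithmetic can be checked directly. A quick sketch, using the threshold estimates quoted in the email (note the 14.3x comes from inverting the already-rounded 0.07; the unrounded reciprocal is about 13.9):

```python
import math

# Threshold estimates quoted in the reader's email
# (from an ordinal logistic regression)
thresholds = [("1 vs >1", -2.635), ("<=2 vs >2", -0.812)]

for label, est in thresholds:
    odds = math.exp(est)          # cumulative odds at this threshold
    print(f"odds of {label}: {odds:.2f}  (reciprocal: {1 / odds:.1f}x)")
```

Exponentiating a threshold gives the odds of being at or below that category versus above it, which is why values well under 1 mean "much more likely to be above."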
A number of years ago when I was still working in the consulting office at Cornell, someone came in asking for help interpreting their ordinal logistic regression results.
The client was surprised because all the coefficients were backwards from what they expected, and they wanted to make sure they were interpreting them correctly.
It looked like the researcher had done everything correctly, but the results were definitely bizarre. They were using SPSS and the manual wasn’t clarifying anything for me, so I did the logical thing: I ran it in another software program. I wanted to make sure the problem was with interpretation, and not in some strange default or (more…)
One great thing about logistic regression, at least for those of us who are trying to learn how to use it, is that the predictor variables work exactly the same way as they do in linear regression.
Dummy coding, interactions, quadratic terms–they all work the same way.
Dummy Coding
In pretty much every regression procedure in every statistical software package, the default way to code categorical variables is dummy coding.
All dummy coding means is recoding the original categorical variable into a set of binary variables that have values of one and zero. You may find it helpful to (more…)
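Here is what that recoding looks like in plain Python (the `dummy_code` helper and the group labels are made up for illustration; stat packages do this for you behind the scenes):

```python
def dummy_code(values, reference):
    """Recode a categorical variable into 0/1 indicator columns,
    omitting the reference category (which all-zeros represents)."""
    levels = sorted(set(values))
    levels.remove(reference)
    return {f"is_{lvl}": [1 if v == lvl else 0 for v in values]
            for lvl in levels}

groups = ["control", "drug_a", "drug_b", "control"]
print(dummy_code(groups, reference="control"))
# {'is_drug_a': [0, 1, 0, 0], 'is_drug_b': [0, 0, 1, 0]}
```

With "control" as the reference, each dummy's coefficient compares that group to control, in logistic regression exactly as in linear regression.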
In all linear regression models, the intercept has the same definition: the mean of the response, Y, when every predictor equals zero (all X = 0).
But “when all X=0” has different implications, depending on the scale on which each X is measured and on which terms are included in the model.
So let’s specifically discuss the meaning of the intercept in some common models: (more…)
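One way to see how "when all X = 0" shifts meaning: centering a predictor moves X = 0 to the sample mean, so the intercept becomes the predicted Y for an average X. A toy sketch (the `ols` helper and the data are made up for illustration):

```python
def ols(xs, ys):
    """Simple least-squares fit of y = b0 + b1*x; returns (intercept, slope)."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
          / sum((x - xbar) ** 2 for x in xs))
    return ybar - b1 * xbar, b1

xs = [1.0, 2.0, 3.0, 4.0]   # made-up predictor values
ys = [2.1, 3.9, 6.2, 7.8]   # made-up responses

b0, _ = ols(xs, ys)                        # intercept: predicted Y at X = 0
xbar = sum(xs) / len(xs)
b0c, _ = ols([x - xbar for x in xs], ys)   # centered: predicted Y at mean X
print(round(b0, 4), round(b0c, 4))
```

The slope is identical in both fits; only the intercept's interpretation changes, which is why centering is often the easiest fix when X = 0 is meaningless (e.g., age = 0).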