Every so often I point out to a client who exclusively uses menus in SPSS that they can (and should) hit the Paste button instead of OK. Many times, the client never realized it was there.
I am here today to tell you that it is there, and it is wonderful. For a few reasons.
When you use the menus in SPSS, you’re really taking a shortcut. You’re telling SPSS which syntax commands, along with which options, you want to run.
Clicking OK at the end of a dialog box will run the menu options you just picked. You may never see the underlying commands that SPSS just ran.
If instead you hit Paste, those commands won’t automatically be run. Instead, the code to run those commands will be (more…)
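For example, if you fill in the Frequencies dialog and hit Paste instead of OK, something like this lands in a syntax window (the variable name here is just a placeholder):

* Pasted from Analyze > Descriptive Statistics > Frequencies.
FREQUENCIES VARIABLES=gender
  /ORDER=ANALYSIS.

That syntax can be saved, edited, and rerun any time, without clicking back through the menus.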
One of the things I love about MIXED in SPSS is that the syntax is very similar to GLM. So anyone who is used to GLM syntax has just a short jump to learning to write MIXED.
Which is a good thing, because many of the concepts are a big jump.
And because the MIXED dialog menus are seriously unintuitive, I’ve concluded you’re much better off using syntax.
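To give you a sense of how short that jump is, here is a sketch with made-up variable names: a GLM, and the analogous MIXED model that adds a random intercept for each school.

* A fixed-effects-only model in GLM (hypothetical variables).
GLM score BY treatment
  /DESIGN=treatment.

* The MIXED version: same basic structure, plus a random intercept per school.
MIXED score BY treatment
  /FIXED=treatment
  /RANDOM=INTERCEPT | SUBJECT(school)
  /PRINT=SOLUTION TESTCOV.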
I was very happy a few years ago when, with version 19, SPSS finally introduced generalized linear mixed models, so that SPSS users could run logistic regression or count models on clustered data.
But then I tried it, and the menus are even less intuitive than in MIXED.
And the syntax isn’t much better. In this case, the syntax structure is quite different from MIXED’s. (more…)
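To see what I mean, here is a rough sketch of GENLINMIXED syntax for a logistic model on clustered data (variable names are hypothetical). Compare it to the MIXED structure above; the subcommands barely overlap.

* A random-intercept logistic model (hypothetical variables).
GENLINMIXED
  /DATA_STRUCTURE SUBJECTS=school
  /FIELDS TARGET=passed
  /TARGET_OPTIONS DISTRIBUTION=BINOMIAL LINK=LOGIT
  /FIXED EFFECTS=treatment USE_INTERCEPT=TRUE
  /RANDOM USE_INTERCEPT=TRUE SUBJECTS=school COVARIANCE_TYPE=VARIANCE_COMPONENTS.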
The ICC, or Intraclass Correlation Coefficient, can be very useful in many statistical situations, but especially so in Linear Mixed Models.
Linear Mixed Models are used when there is some sort of clustering in the data.
Two common examples of clustered data include:
- Individuals sampled within sites (hospitals, companies, community centers, schools, etc.). The site is the cluster.
- Repeated measures or longitudinal data, where multiple observations are collected from the same individual. The individual is the cluster in which multiple observations are (more…)
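In both cases, the ICC tells you what proportion of the total variance in the outcome is due to differences between clusters. You can get the pieces from a null (intercept-only) mixed model; here is a sketch in SPSS syntax with hypothetical names:

* Null model: no predictors, just a random intercept for site.
MIXED score
  /RANDOM=INTERCEPT | SUBJECT(site)
  /PRINT=SOLUTION TESTCOV.
* From the Estimates of Covariance Parameters table:
* ICC = Intercept variance / (Intercept variance + Residual variance).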
I received the following email from a reader after sending out the last article: Opposite Results in Ordinal Logistic Regression—Solving a Statistical Mystery.
And I agreed I’d answer it here in case anyone else was confused.
Karen’s explanations always make the bulb light up in my brain, but not this time.
With either output,
The odds of ≤ 1 vs > 1 is exp[-2.635] = 0.07, i.e. unlikely to be ≤ 1, much more likely (14.3x) to be > 1
The odds of ≤ 2 vs > 2 is exp[-0.812] = 0.44, i.e. somewhat unlikely to be ≤ 2, more likely (2.3x) to be > 2
SAS – using the usual regression equation
If NAES increases by 1 these odds become (more…)
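If you’d like to poke at the SPSS side of that output yourself, it comes from the PLUM procedure. A minimal sketch with hypothetical names; note that PLUM subtracts the linear predictor from the thresholds, which is exactly the kind of parameterization difference that produces opposite-looking results across packages.

* Ordinal logistic regression (hypothetical names).
* PLUM models logit( P(Y <= j) ) = threshold(j) - b*naes.
PLUM outcome WITH naes
  /LINK=LOGIT
  /PRINT=PARAMETER SUMMARY.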
A number of years ago when I was still working in the consulting office at Cornell, someone came in asking for help interpreting their ordinal logistic regression results.
The client was surprised because all the coefficients were backwards from what they expected, and they wanted to make sure they were interpreting them correctly.
It looked like the researcher had done everything correctly, but the results were definitely bizarre. They were using SPSS and the manual wasn’t clarifying anything for me, so I did the logical thing: I ran it in another software program. I wanted to make sure the problem was with interpretation, and not in some strange default or (more…)
One great thing about logistic regression, at least for those of us who are trying to learn how to use it, is that the predictor variables work exactly the same way as they do in linear regression.
Dummy coding, interactions, quadratic terms–they all work the same way.
Dummy Coding
In pretty much every regression procedure in every statistical software package, the default way to code categorical variables is with dummy coding.
All dummy coding means is recoding the original categorical variable into a set of binary variables that have values of one and zero. You may find it helpful to (more…)
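As a sketch: suppose a hypothetical variable region has three categories (1 = north, 2 = south, 3 = west). Dummy coding turns it into two binary variables, with north as the reference category:

* In SPSS, a logical expression evaluates to 1 (true) or 0 (false),
* so each COMPUTE builds one dummy variable.
COMPUTE region_south = (region = 2).
COMPUTE region_west = (region = 3).
EXECUTE.

With k categories you need k - 1 dummies; the left-out category is the reference that all the others are compared to.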