Effect size statistics are expected by many journal editors these days.
If you’re running an ANOVA, t-test, or linear regression model, it’s pretty straightforward to figure out which ones to report.
Things get trickier, though, once you venture into other types of models.
(more…)
A number of years ago when I was still working in the consulting office at Cornell, someone came in asking for help interpreting their ordinal logistic regression results.
The client was surprised because all the coefficients were backwards from what they expected, and they wanted to make sure they were interpreting them correctly.
It looked like the researcher had done everything correctly, but the results were definitely bizarre. They were using SPSS and the manual wasn’t clarifying anything for me, so I did the logical thing: I ran it in another software program. I wanted to make sure the problem was with interpretation, and not in some strange default or (more…)
One great thing about logistic regression, at least for those of us who are trying to learn how to use it, is that the predictor variables work exactly the same way as they do in linear regression.
Dummy coding, interactions, quadratic terms: they all work the same way.
Dummy Coding
In pretty much every regression procedure in every statistical software package, the default way to code categorical variables is dummy coding.
All dummy coding means is recoding the original categorical variable into a set of binary variables that have values of one and zero. You may find it helpful to (more…)
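To make that concrete, here’s a rough sketch of dummy coding done by hand in Python with pandas, then used in a logistic regression with statsmodels. This is my own illustration with made-up variable names and data, not something from the original post.

```python
# A minimal sketch: dummy coding a three-level categorical predictor by hand,
# then using the resulting 0/1 columns as predictors in a logistic regression.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "treatment": ["A", "A", "B", "B", "C", "C", "A", "B", "C", "C"],
    "outcome":   [0,   1,   0,   1,   1,   1,   0,   0,   0,   1],
})

# Dummy coding: one 0/1 column per category, dropping one as the reference.
dummies = pd.get_dummies(df["treatment"], prefix="treat",
                         drop_first=True, dtype=float)

X = sm.add_constant(dummies)             # the intercept represents the reference group (A)
model = sm.Logit(df["outcome"], X).fit(disp=0)
print(model.params)                      # each coefficient is a log-odds difference from group A
```

The dropped category becomes the reference group, so each coefficient compares one group’s log odds to that reference, exactly as it would in linear regression.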
I received an e-mail from a researcher in Canada who asked about communicating logistic regression results to non-researchers. It was an important question, and there are a number of parts to it.
With the asker’s permission, I am going to address it here.
To give you the full context, she explained in a follow-up email that she is communicating to a clinical audience who will be using the results to make clinical decisions. They need to understand the size of an effect that an intervention will provide. She refers to an output I presented in my webinar on Probability, Odds, and Odds Ratios, which you can view free here.
Question:
I just went through the two lectures re: logistic regression and prob/odds/odds ratios. I completely understand (more…)
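For readers in a similar spot, one way to translate an odds ratio into something a clinical audience can act on is to convert it back into probabilities. The numbers below (a 20% baseline risk and an odds ratio of 2.5) are made up for illustration and are not from the webinar or the asker’s data.

```python
# A hedged sketch: turning an odds ratio into before-and-after probabilities.
baseline_p = 0.20                               # assumed risk without the intervention
odds_ratio = 2.5                                # assumed effect of the intervention

baseline_odds = baseline_p / (1 - baseline_p)   # 0.25
treated_odds = baseline_odds * odds_ratio       # 0.625
treated_p = treated_odds / (1 + treated_odds)   # about 0.385

print(f"Risk without intervention: {baseline_p:.0%}")
print(f"Risk with intervention:    {treated_p:.0%}")
```

Stated that way (“roughly 20% versus 38%”), the size of the effect is much easier for a non-researcher to grasp than the raw odds ratio.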
Odds ratios are one of those concepts in statistics that are just really hard to wrap your head around. Although probability and odds both measure how likely it is that something will occur, probability is just so much easier to understand for most of us.
I’m not sure if it’s just a more intuitive concept, or if it’s something we’re taught so much earlier that it’s more ingrained. In either case, without a lot of practice, most people won’t have an immediate understanding of how likely something is if it’s communicated through odds.
So why not always use probability? (more…)
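As a small illustration of how the two scales relate (my own example, not from the post), here is the probability-to-odds conversion for a few values. The two agree on which events are more likely, but odds grow without bound as the probability approaches 1.

```python
# A small sketch: converting probabilities to odds with odds = p / (1 - p).
def to_odds(p):
    return p / (1 - p)

for p in (0.10, 0.50, 0.75, 0.90, 0.99):
    print(f"probability {p:.2f}  ->  odds {to_odds(p):.2f}")
# probability 0.50 -> odds 1.00, but probability 0.99 -> odds 99.00
```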
Logistic regression models can seem pretty overwhelming to the uninitiated. Why not use a regular regression model? Just turn Y into an indicator variable: Y=1 for success and Y=0 for failure.
For some good reasons.
1. It doesn’t make sense to model Y as a linear function of the parameters because Y has only two values. You just can’t make a line out of that (at least not one that fits the data well).
2. The predicted values can be any positive or negative number, not just 0 or 1, as the quick sketch after this list shows.
3. The values of 0 and 1 are arbitrary. The important part is not to predict the numerical value of Y, but the probability that success or failure occurs, and the extent to which that probability depends on the predictor variables.
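Here’s that quick sketch, with made-up data of my own (not from the original post): an ordinary least-squares line fit to a 0/1 outcome happily predicts values below 0 and above 1.

```python
# A quick sketch with made-up data: fit ordinary least squares to a 0/1
# outcome and show that the fitted line predicts values outside the 0-1 range.
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
y = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1], dtype=float)

# Least-squares slope and intercept (np.polyfit returns highest degree first)
b1, b0 = np.polyfit(x, y, deg=1)

print(f"prediction at x = 0:  {b0 + b1 * 0:.2f}")   # about -0.27, below 0
print(f"prediction at x = 12: {b0 + b1 * 12:.2f}")  # about 1.41, above 1
```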
So okay, you say. Why not use a simple transformation of Y, like the probability of success (the probability that Y=1)?
Well, that doesn’t work so well either.
Why not?
1. The right-hand side of the equation can be any number, but the left-hand side can only range from 0 to 1.
2. It turns out the relationship is not linear, but rather follows an S-shaped (or sigmoidal) curve.
To obtain a linear relationship, we need to transform this new response, Pr(success), as well.
As luck would have it, there are a few functions that:
1. are not restricted to values between 0 and 1
2. will form a linear relationship with our parameters
These functions include:
• Arcsine
• Probit
• Logit
All three of these work about equally well, but (believe it or not) the Logit function is the easiest to interpret.
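To see what the logit actually does (a sketch of my own, not from the post): it’s just the natural log of the odds, and it stretches probabilities from the 0-to-1 interval onto the whole real line, which is what lets us model it as a linear function of the predictors.

```python
# A minimal sketch of the logit transform and its inverse (the logistic, or sigmoid).
import math

def logit(p):
    # logit(p) = log(p / (1 - p)) maps (0, 1) onto the whole real line
    return math.log(p / (1 - p))

def inverse_logit(x):
    # the inverse maps any real number back to a probability between 0 and 1
    return 1 / (1 + math.exp(-x))

for p in (0.05, 0.25, 0.50, 0.75, 0.95):
    z = logit(p)
    print(f"p = {p:.2f}   logit(p) = {z:+.2f}   back again: {inverse_logit(z):.2f}")
# The model is then: logit(Pr(success)) = b0 + b1*x1 + ... + bk*xk
```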
But as it turns out, you can’t just run the transformation and then do a regular linear regression on the transformed data. That would be way too easy, but it would also give inaccurate results. Logistic regression instead estimates the parameters by a different method, maximum likelihood, which gives better results: better meaning unbiased, with lower variances.
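Here is a rough sketch of what that looks like in practice, with made-up data and assuming the Python statsmodels package (not something from the original post): the model is fit by maximum likelihood on the 0/1 outcomes directly, rather than by transforming Y and running ordinary least squares, which isn’t even possible here since logit(0) and logit(1) are undefined.

```python
# A hedged sketch: fitting a logistic regression by maximum likelihood.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=200)
p = 1 / (1 + np.exp(-(-0.5 + 1.2 * x)))   # true success probabilities
y = rng.binomial(1, p)                    # observed 0/1 outcomes

X = sm.add_constant(x)
fit = sm.Logit(y, X).fit(disp=0)          # maximum likelihood estimation
print(fit.params)                         # estimates of b0 and b1, on the log-odds scale
```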