Logistic regression models can seem pretty overwhelming to the uninitiated. Why not use a regular regression model? Just turn Y into an indicator variable: Y=1 for success and Y=0 for failure.
For some good reasons.
1. It doesn't make sense to model Y as a linear function of the parameters because Y has only two values. You just can't make a line out of that (at least not one that fits the data well).
2. The predicted values can be any positive or negative number, not just 0 or 1 (see the sketch after this list).
3. The values of 0 and 1 are arbitrary. The important part is not to predict the numerical value of Y, but the probability that success or failure occurs, and the extent to which that probability depends on the predictor variables.
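Point 2 is easy to see in practice. Here's a minimal sketch in Python (the data are simulated, so the exact numbers are illustrative only) that fits an ordinary least squares line to a 0/1 outcome and checks the fitted values:

```python
import numpy as np

# Simulated data: one predictor x and a binary outcome y (0 or 1).
rng = np.random.default_rng(0)
x = rng.uniform(-4, 4, 200)
true_p = 1 / (1 + np.exp(-2 * x))   # true success probability (sigmoid)
y = rng.binomial(1, true_p)         # observed 0/1 outcomes

# Ordinary least squares fit of y on x
slope, intercept = np.polyfit(x, y, 1)
y_hat = intercept + slope * x

# The fitted "probabilities" spill outside [0, 1]
print(round(y_hat.min(), 2), round(y_hat.max(), 2))
```

Run this and the fitted values dip below 0 and climb above 1, which makes no sense if you want to read them as probabilities.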
So okay, you say. Why not model a simple transformation of Y instead, like the probability of success, Pr(Y=1)?
Well, that doesn’t work so well either.
Why not?
1. The right-hand side of the equation can be any number, but the left-hand side can only range from 0 to 1.
2. It turns out the relationship is not linear, but rather follows an S-shaped (or sigmoidal) curve (see the illustration below).
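To see both problems at once, here's a quick numeric illustration. The coefficients here are made up purely for demonstration: the linear predictor can be any real number, but passing it through the sigmoid always yields a valid probability, and the curve flattens out at the extremes.

```python
import numpy as np

# Made-up coefficients, purely for illustration: the linear predictor
# b0 + b1*x is unbounded, but the sigmoid of it always lands strictly
# between 0 and 1, flattening at both tails (the S shape).
b0, b1 = -1.0, 2.0
x = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])

eta = b0 + b1 * x              # linear predictor: unbounded
p = 1 / (1 + np.exp(-eta))     # sigmoid: always a valid probability

for xi, ei, pi in zip(x, eta, p):
    print(f"x = {xi:6.1f}   linear predictor = {ei:6.1f}   Pr(success) = {pi:.4f}")
```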
To obtain a linear relationship, we need to transform this new response, Pr(success), as well.
As luck would have it, there are a few functions that:
1. take on values that aren't restricted to the range from 0 to 1
2. form a linear relationship with our parameters
These functions include:
• Arcsine
• Probit
• Logit
All three of these work about equally well, but (believe it or not) the logit function is the easiest to interpret, as the quick comparison below shows.
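Here's a small sketch (using NumPy and SciPy; the probabilities are arbitrary examples) applying all three transformations to the same set of probabilities:

```python
import numpy as np
from scipy.stats import norm

# The three transformations applied to a few example probabilities.
# Logit is the log odds; probit is the inverse normal CDF; arcsine is
# arcsin(sqrt(p)), a classic variance-stabilizing transformation.
p = np.array([0.01, 0.25, 0.50, 0.75, 0.99])

logit = np.log(p / (1 - p))
probit = norm.ppf(p)
arcsine = np.arcsin(np.sqrt(p))

print(np.round(logit, 3))    # symmetric about 0, unbounded
print(np.round(probit, 3))   # also symmetric about 0, unbounded
print(np.round(arcsine, 3))  # ranges over (0, pi/2)
```

Notice that the logit values are just log odds: at p = 0.75 the odds of success are 3 to 1, and log(3) ≈ 1.099. That direct connection to odds is what makes logit coefficients so interpretable.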
But as it turns out, you can't just run the transformation and then do a regular linear regression on the transformed data. That would be way too easy, but it would also give inaccurate results. (For one thing, individual outcomes of exactly 0 or 1 can't even be logit-transformed: their log odds are infinite.) Logistic regression instead estimates the parameters by maximum likelihood, which gives better results: better meaning unbiased, with lower variances.
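As a rough sketch of what's going on under the hood, here's a bare-bones Newton-Raphson maximization of the log-likelihood on simulated data (not production code): the parameters come from iterative reweighting, not from a one-shot least squares fit on transformed Y.

```python
import numpy as np

# Simulated data with known true parameters (0.5, 1.5)
rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, 500)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.5 * x))))

X = np.column_stack([np.ones_like(x), x])   # design matrix with intercept
beta = np.zeros(2)

# Newton-Raphson iterations on the log-likelihood
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))         # current fitted probabilities
    W = p * (1 - p)                         # weights from the variance of y
    grad = X.T @ (y - p)                    # gradient of the log-likelihood
    hess = X.T @ (X * W[:, None])           # (negative) Hessian
    beta = beta + np.linalg.solve(hess, grad)

print(np.round(beta, 3))   # should land close to the true (0.5, 1.5)
```

In practice you'd call a library routine instead of rolling your own (statsmodels' Logit or scikit-learn's LogisticRegression, for instance), but the estimates come from this kind of iterative maximization either way.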