Odds ratios are one of those concepts in statistics that are just really hard to wrap your head around. Although probability and odds both measure how likely it is that something will occur, probability is just so much easier to understand for most of us.
I’m not sure if it’s just a more intuitive concept, or if it’s something we’re taught so much earlier that it’s more ingrained. In either case, without a lot of practice, most people won’t have an immediate understanding of how likely something is if it’s communicated through odds.
So why not always use probability?
The problem is that probability and odds have different properties that give odds some advantages in statistics. For example, in logistic regression the odds ratio represents the constant effect of a predictor, X, on the likelihood that one outcome will occur.
The key phrase here is constant effect. In regression models, we often want a measure of the unique effect of each X on Y. If we try to express the effect of X on the likelihood of a categorical Y having a specific value through probability, the effect is not constant.
What that means is there is no way to express in one number how X affects Y in terms of probability. The effect of X on the probability of Y has different values depending on the value of X.
So while we would love to use probabilities because they’re intuitive, we’re just not going to be able to describe that effect in a single number. If you need to communicate that effect to a research audience, you’re going to have to wrap your head around odds ratios.
What about Probabilities?
What you can do, and many people do, is to use the logistic regression model to calculate predicted probabilities at specific values of a key predictor, usually when holding all other predictors constant.
This is a great approach to use together with odds ratios. The odds ratio is a single summary score of the effect, and the probabilities are more intuitive.
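To make that concrete, here’s a minimal sketch in Python using statsmodels. The data are simulated and the variable names (x, z, y) are hypothetical; the point is just that the model gives you one summary number (the odds ratio for x) plus predicted probabilities at whatever values of x you choose, holding z constant.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated data: a binary outcome y, a key predictor x, and a covariate z
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
z = rng.normal(size=n)
true_logit = -0.5 + 0.8 * x + 0.3 * z
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

X = sm.add_constant(pd.DataFrame({"x": x, "z": z}))
fit = sm.Logit(y, X).fit(disp=0)

# One summary number: the odds ratio for x
print("odds ratio for x:", np.exp(fit.params["x"]))

# Predicted probabilities at low, medium, and high x, holding z at its mean
at = pd.DataFrame({"const": 1.0, "x": [-1.0, 0.0, 1.0], "z": z.mean()})
print(fit.predict(at))
```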
Presenting probabilities without the corresponding odds ratios can be problematic, though.
First, when X, the predictor, is categorical, the effect of X can be effectively communicated through a difference or ratio of probabilities. Comparing the probability that a person has a relapse in the intervention condition to the probability in the control condition makes a lot of sense.
But the p-value the model reports for that effect is a test of the odds ratio, not of the difference in probabilities.
If you present a table of probabilities at different values of X, most research audiences will, at least in their minds, make those difference comparisons between the probabilities. They do this because they’ve been trained to do this in linear models.
These differences in probabilities don’t line up with the p-values in logistic regression models, though. And this can get quite confusing.
Second, when X, the predictor, is continuous, the odds ratio is constant across values of X. But probabilities aren’t.
It works exactly the same way as interest rates. I can tell you that an annual interest rate is 8%. So at the end of the year, you’ll earn $8 if you invest $100, or $40 if you invest $500. The rate stays constant, but the actual amount earned differs based on the amount invested.
Odds ratios work the same. An odds ratio of 1.08 will give you an 8% increase in the odds at any value of X.
Likewise, the difference in the probability (or the odds) depends on the value of X.
So if you do decide to report the increase in probability at different values of X, you’ll have to do it at low, medium, and high values of X. You can’t use a single number on the probability scale to convey the relationship between the predictor and the probability of a response.
It takes more than a single number, and it’s not “the effect of X on Y,” but sometimes it’s a better way to communicate what is really going on, especially to non-research audiences.
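If you want to see that pattern in numbers, here’s a small sketch with made-up values: an odds ratio of 1.08 per unit of X and baseline odds of 0.25 at X = 0, both chosen purely for illustration.

```python
odds_ratio = 1.08   # assumed constant effect per one-unit increase in X
odds_at_0 = 0.25    # assumed odds of the outcome at X = 0

for x in [0, 1, 5, 10, 20]:
    odds_here = odds_at_0 * odds_ratio ** x   # odds at X = x
    odds_next = odds_here * odds_ratio        # odds at X = x + 1
    p_here = odds_here / (1 + odds_here)      # convert odds to probability
    p_next = odds_next / (1 + odds_next)
    print(f"X={x:2d}: odds ratio = {odds_next / odds_here:.2f}, "
          f"change in probability = {p_next - p_here:.4f}")
```

The odds go up by the same 8% at every value of X, but the change in probability per unit of X is different at every X.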
Bristy says
Is the exp(B) value what is called the odds ratio?
Kofi says
Hi, I have two questions
1) Is it right to say that the coefficients of predictor variables (from a glm) represent effect sizes of those predictors on the response variable?
2) are the odds of different predictor variables comparable?
Suppose you ran a binomial glm of Success as a function of Age, weight and height, and the calculated odds ratios of each are Age=4.1, weight=2.4, height=1.9.
Is it right to say that age has the highest effect on Success because it has the highest odds ratio?
thank you
Augustine S. Korsor says
Please, someone help me out: what will be my interpretation when my coefficient is -0.279929 and my odds ratio is .9723952? This is the effect of age on participation.
Ayush says
Hey Augustine!
The interpretation of the coefficient and the odds ratio is as follows.
The value –0.279929 means that a change of one unit in the value of your predictor X would result in a 0.279929 change in the response value in the opposite direction. Considering it in the age vs. participation case, a 1-year-older participant is 0.279929 times less likely to participate. For the odds ratio, the value is calculated by dividing the probability of success by the probability of failure. Hence taking a variable X as the probability of success and equating the ratio with 0.9723952 will give you a success probability of about 0.49, or odds of 97.2 to 100 for the success of the event. I hope this provides an adequate understanding.
Karen Grace-Martin says
Hi Augustine and Ayush,
Ayush is on the right track here, but I want to clarify a few things to make sure no one is confused. The -.2799 is on the log odds scale. So when X goes up one unit the log of the odds of the response goes down by .2799. Logs are not very intuitive, so that’s why we use the Odds Ratio instead.
So that Odds Ratio of .97 is still the effect of X going up one unit. For each one-unit increase in the predictor X, the odds of a success occurring are only .97 times as big. In other words, they’re getting slightly smaller. So as X goes up, the odds of success go down.
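To see that pattern with some made-up numbers (the starting odds of 1.0 below are hypothetical, not taken from Augustine’s model), here’s a tiny sketch:

```python
odds = 1.0                  # assumed starting odds (a 50/50 chance)
for step in range(5):
    p = odds / (1 + odds)   # probability implied by the current odds
    print(f"X + {step}: odds = {odds:.3f}, probability = {p:.3f}")
    odds *= 0.97            # each extra unit of X multiplies the odds by 0.97
```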
Arun Kumar Bairwa says
Hi there!
I ran a logit model with a cross-sectional dataset of Indian individuals. I am using descriptive statistics of the same dataset to justify and interpret the estimates from the logit model. However, I must report the descriptive statistics with sampling weights in the analysis. Using sampling weights creates big changes in the dataset and nullifies the findings of the logit model. The rule is that I must use weights when reporting descriptive statistics.
On the other hand, I cannot use a weighted logit model, as it gives spurious results by inflating the total observations from 1 million to 1.6 billion. I will be grateful if you suggest some solution.
Susan says
Hi Guys
Trying to understand: what are the similarities and differences between odds ratios and regression coefficients?
Julie says
Just reading a paper and trying to understand the methodology and analysis of data. I haven’t had stats in 25 years. Thanks! I totally get it. Couldn’t do it, but I totally get it!
David Salkever says
1. The logistic regression coefficient indicates how the LOG of the odds ratio changes with a 1-unit change in the explanatory variable; this is not the same as the change in the (unlogged) odds ratio though the 2 are close when the coefficient is small.
2. Your use of the term “likelihood” is quite confusing. “Likelihood” in the precise context of a “likelihood” function is IN FACT a probability. Outside of that precise meaning, I am not aware of any other clear definition of “likelihood” in statistics.
Karen Grace-Martin says
Hi David,
1. Yes, agreed. I wasn’t talking about coefficients here at all. Just the odds ratios that are derived from those coefficients. Most people find the coefficients on the log-odds scale too abstract to interpret, beyond being positive or negative.
2. It’s only confusing to people who know statistical theory, which is not the audience here. I’m not using “likelihood” to describe the likelihood function, which we use to estimate the model parameters. I’m just talking about measuring how likely, in layman’s terms, each outcome is. Probability and odds are both ways of measuring how likely a success is (i.e., its likelihood, in the everyday sense), but on different scales. It’s analogous to degrees Fahrenheit and degrees Celsius both measuring temperature, but on different scales. I haven’t come up with a better generic term than likelihood that doesn’t involve the measurement scale, but I’m open to suggestions. Possibility? Propensity? The latter has a different technical statistical usage.
Rose says
Hi,
I’m a bit confused by this part: “An odds ratio of 1.08 will give you an 8% increase in the odds at any value of X.” My question is, what if the odds ratio is more than 2?
For example, one of my logit coefficients is 3.0901, therefore the odds ratio should be 21.98. I was interpreting it as the increased likelihood of some event by a factor of 21.98, because this obviously can’t be transformed into a percentage.
Do I interpret it correctly? Or am I missing something?
Thank you for your help!
Karen Grace-Martin says
Hi Rose, you have it right. You could change it to a percentage – 2098% increase(!) – but I agree that the way you have it is more interpretable.
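For anyone who wants the arithmetic behind those two numbers spelled out, it’s just the usual conversion from a coefficient to an odds ratio to a percent change in odds:

```python
import math

b = 3.0901
odds_ratio = math.exp(b)             # exp(coefficient) gives the odds ratio
pct_change = (odds_ratio - 1) * 100  # odds ratio expressed as a percent change in odds
print(round(odds_ratio, 2), round(pct_change))   # roughly 21.98 and 2098
```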
student says
If we try to express the effect of X on the likelihood of a categorical Y having a specific value through probability, the effect is not constant.
The effect is not constant? Why?
Karen Grace-Martin says
It means you’ll get a different probability ratio at each value of X. It’s a curve, not a straight line.
ming says
How do you compute predicted probabilities in logistic regression with Stata?
Zach Hopkins says
You can evaluate these relationships graphically and numerically using the “margins” commands in Stata. This website gives some nice examples and explanations: https://www.stata.com/meeting/germany13/abstracts/materials/de13_jann.pdf
JOelo says
You write
The key phrase here is constant effect. In regression models, we often want a measure of the unique effect of each X on Y. If we try to express the effect of X on the likelihood of a categorical Y having a specific value through probability, the effect is not constant.
But sometimes don’t you want the effect of X on the categorical variable to not be constant? Like if you are building a predictive model for an individual X that shows different behavior at high versus low values of X.
Dr. Wilson says
Thank you. This helped me explain to reviewer 1 why the request for predicted probabilities rather than odds ratios was respectfully declined. Succinct, clear, and intuitive. Much appreciated.
tilahu eshetu says
How do you interpret an odds ratio in an ordered multinomial logit?
Dr. Girija says
This was an extremely clear explanation explained in a simple manner.
Kiza Musioka says
Can someone tell me how to transform odds ratios into logistic beta coefficients? Suppose the odds of becoming diabetic when someone is obese are 4; what would be the corresponding value of the beta coefficient in a logistic regression?
Thank you so much for any clue.
Kiza.
Cynthia says
Hi Kiza, I suggest you run a linear regression. This will help you get the beta coefficient for that predictor variable.
Santiago says
You don’t really need to talk about coefficients; when you do, you’re only assessing the effect on the linearized scale (with a logit, the log of the odds).
When you interpret an odds ratio, you see what the effect of a given determinant is on the likelihood of whatever you’re researching.
So in your case, you’re interested in the odds of becoming diabetic when obesity is present. In your example it goes like this: you have 4:1 odds of becoming diabetic if you are obese, or, put another way, obesity has a constant effect on diabetes, represented by the increase in the odds of being diabetic (the odds are 4 times as high if you’re obese)…
I hope I helped you.
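For the mechanical part of Kiza’s question, assuming the 4 is an odds ratio comparing obese to non-obese people: the logistic regression coefficient is the natural log of the odds ratio, because the odds ratio is exp(beta). A quick sketch:

```python
import math

odds_ratio = 4.0             # the 4 from the obesity example, read as an odds ratio
beta = math.log(odds_ratio)  # coefficient = natural log of the odds ratio
print(round(beta, 3))        # about 1.386, since exp(1.386) is about 4
```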
Alex P says
This was an extremely intuitive explanation. I couldn’t find an answer like this elsewhere. Thank you!
tesfaye abera says
How do you interpret a multinomial logistic regression?
Karen says
Hi Tesfaye,
That’s a really good question, and how I’d answer depends on, say, whether you already understand binary logistic regression.
I would suggest starting with these two webinar recordings:
Binary, Ordinal, and Multinomial Logistic Regression for Categorical Outcomes
Understanding Probability, Odds, and Odds Ratios in Logistic Regression.
They’re both free.
The former describes multinomial logistic regression and how interpretation differs from binary. The latter goes into more detail about how to interpret an odds ratio.
Karen