Updated 12/20/2021
Despite its popularity, interpreting regression coefficients of any but the simplest models is sometimes, well… difficult.
So let’s interpret the coefficients in a model with two predictors: a continuous and a categorical variable. The example here is a linear regression model. But this works the same way for interpreting coefficients from any regression model without interactions.
A linear regression model with two predictor variables results in the following equation:
Yi = B0 + B1*X1i + B2*X2i + ei.
The variables in the model are:
- Y, the response variable;
- X1, the first predictor variable;
- X2, the second predictor variable; and
- e, the residual error, which is an unmeasured variable.
The parameters in the model are:
- B0, the Y-intercept;
- B1, the first regression coefficient; and
- B2, the second regression coefficient.
One example would be a model of the height of a shrub (Y) based on the amount of bacteria in the soil (X1) and whether the plant is located in partial or full sun (X2).
Height is measured in cm. Bacteria is measured in thousand per ml of soil. And type of sun = 0 if the plant is in partial sun and type of sun = 1 if the plant is in full sun.
Let’s say it turned out that the regression equation was estimated as follows:
Y = 42 + 2.3*X1 + 11*X2
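As a quick sketch (the function name here is made up for illustration), the estimated equation can be written in Python:

```python
# Estimated model from the article: Y = 42 + 2.3*X1 + 11*X2
# X1 = soil bacteria in thousands per ml; X2 = 1 for full sun, 0 for partial.
def predicted_height(bacteria, full_sun):
    return 42 + 2.3 * bacteria + 11 * full_sun

predicted_height(0, 0)   # 42.0 -> the intercept
predicted_height(5, 1)   # 64.5
```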
Interpreting the Intercept
B0, the Y-intercept, can be interpreted as the value you would predict for Y if both X1 = 0 and X2 = 0.
We would expect an average height of 42 cm for shrubs in partial sun with no bacteria in the soil. However, this is only a meaningful interpretation if it is reasonable that both X1 and X2 can be 0, and if the data set actually included values for X1 and X2 that were near 0.
If either of these conditions is not met, then B0 really has no meaningful interpretation. It just anchors the regression line in the right place. In our case, it is easy to see that X2 sometimes is 0, but if X1, our bacteria level, never comes close to 0, then our intercept has no real interpretation.
Interpreting Coefficients of Continuous Predictor Variables
Since X1 is a continuous variable, B1 represents the difference in the predicted value of Y for each one-unit difference in X1, if X2 remains constant.
This means that if X1 differs by one unit (and X2 does not), Y differs by B1 units, on average.
In our example, shrubs with a 5000/ml bacteria count would, on average, be 2.3 cm taller than those with a 4000/ml bacteria count, and those with 4000/ml would likewise be about 2.3 cm taller than those with 3000/ml, as long as they were in the same type of sun.
(Don’t forget that since the measurement unit for bacteria count is 1000 per ml of soil, 1000 bacteria represent one unit of X1).
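To see B1 concretely, here is a minimal Python sketch of the fitted equation above:

```python
# Fitted equation from the article: Y = 42 + 2.3*X1 + 11*X2
def predicted_height(bacteria, full_sun):
    return 42 + 2.3 * bacteria + 11 * full_sun

# Holding sun type fixed, a one-unit (i.e. 1000 bacteria/ml) difference
# in X1 shifts the prediction by exactly B1 = 2.3 cm:
diff = predicted_height(5, 0) - predicted_height(4, 0)  # ~2.3
```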
Interpreting Coefficients of Categorical Predictor Variables
Similarly, B2 is interpreted as the difference in the predicted value of Y for each one-unit difference in X2 if X1 remains constant. However, since X2 is a categorical variable coded as 0 or 1, a one-unit difference represents switching from one category to the other.
B2 is then the average difference in Y between the category for which X2 = 0 (the reference group) and the category for which X2 = 1 (the comparison group).
So compared to shrubs that were in partial sun, we would expect shrubs in full sun to be 11 cm taller, on average, at the same level of soil bacteria.
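Again as a sketch in Python: at any fixed bacteria level, switching from partial to full sun changes the prediction by exactly B2.

```python
# Fitted equation from the article: Y = 42 + 2.3*X1 + 11*X2
def predicted_height(bacteria, full_sun):
    return 42 + 2.3 * bacteria + 11 * full_sun

# Same bacteria level, switching X2 from 0 (partial) to 1 (full sun):
sun_effect = predicted_height(3, 1) - predicted_height(3, 0)  # ~11
```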
Interpreting Coefficients when Predictor Variables are Correlated
Don’t forget that each coefficient is influenced by the other variables in a regression model. Because predictor variables are nearly always associated, two or more variables may explain some of the same variation in Y.
Therefore, each coefficient does not measure the total effect on Y of its corresponding variable. It would if it were the only predictor variable in the model, or if the predictors were independent of each other.
Rather, each coefficient represents the additional effect of adding that variable to the model, if the effects of all other variables in the model are already accounted for.
This means that adding or removing variables from the model will change the coefficients. This is not a problem, as long as you understand why and interpret accordingly.
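A minimal illustration with made-up toy data (plain Python, no libraries): when a predictor correlated with x1 is omitted from the model, the simple-regression slope on x1 absorbs part of that predictor's effect, so the coefficient changes.

```python
# Toy data: y depends on both predictors, and x2 roughly tracks x1.
x1 = [1, 2, 3, 4, 5]
x2 = [1, 1, 2, 2, 3]
y = [2 * a + 3 * b for a, b in zip(x1, x2)]  # "true" model: y = 2*x1 + 3*x2

def slope(x, y):
    """Simple-regression slope of y on x: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

b1_alone = slope(x1, y)  # 3.5 -- larger than the true 2, because x2 is omitted
```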
Interpreting Other Specific Coefficients
I’ve given you the basics here. But interpretation gets a bit trickier for more complicated models, for example, when the model contains quadratic or interaction terms. There are also ways to rescale predictor variables to make interpretation easier.
So here is some more reading about interpreting specific types of coefficients for different types of models:
Ilana says
I have two binary independent variables. How can I determine, other than looking at the coefficients, whether one is stronger than the other? Is there some test I need to do?
Karen Grace-Martin says
Ilana, it really depends on whether those two predictor variables are measured on the same scale. If they’re not, you would need to standardize them to compare.
Sibo Zhang says
Hi Karen, what if someone is trying to compare coefficients between a continuous variable and a categorical variable? One can standardize the continuous variable so that a one-unit change means 1 s.d., but does this make it comparable to the categorical variable, where a one-unit change is, I guess, a switch between categories?
Karen Grace-Martin says
You can’t really make that comparison. It’s not really meaningful.
Jaume says
Absolutely clarifying, both this post and the one on interaction.
Thank you very much.
Anna says
Hey Karen! Thanks for your explanation.
What if I have a regression results table where race is coded as 1=black, 2= white and the coefficient for “race” is, for example, .13? How do I know how to interpret this? Is it possible to interpret this in magnitude? Thanks for your reply.
Karen Grace-Martin says
Anna, you’d have to make sure that you’ve told your software that race is categorical. If you did, your software will dummy code it for you. If you can’t do that (depending on which software and which procedure you’re using) you’ll have to recode that variable into 1s and 0s.
See this: https://www.theanalysisfactor.com/making-dummy-codes-easy-to-keep-track-of/
Paul says
Thanks for the excellent explanation. For clarity, I have a continuous dependent variable (annual change in quality of life score) and a binary independent variable (Control = 0, Treatment = 1), amongst other covariates. My coefficient is 1.3 (CI 0.41 to 2.19). Does this mean for each 1 point increase in Treatment group QoL score there is on average a 1.3 increase in control group? I am puzzled that the lower CI is 0.41. Would this mean that if the lower CI was true then there would be a 0.4 increase in control for each 1 point increase in treatment? Or is it that on average the QoL score is 0.4 higher for the control group? Many thanks
Manu Yaw says
How do I enter a categorical independent variable of 4 levels in stats.
For example , marital status (single, married, divorced, separated)
Thank you
Karen Grace-Martin says
Hi Manu,
The short answer is you need three Yes/No variables, each coded 1=yes and 0=no, for three of your four categories. It would take a while to walk you through this. We have a training on it in our membership program: https://www.theanalysisfactor.com/member-dummy-effect-coding/
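As a rough sketch of that coding (column names are made up, and "single" is arbitrarily chosen as the reference category here):

```python
# Three 1/0 dummies for a 4-level marital status; "single" is the
# reference category, so it gets no column of its own.
levels = ["married", "divorced", "separated"]

def dummy_code(status):
    return {f"is_{lvl}": int(status == lvl) for lvl in levels}

dummy_code("divorced")  # {'is_married': 0, 'is_divorced': 1, 'is_separated': 0}
dummy_code("single")    # all zeros -> the reference category
```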
Niru says
Interesting read. I have a general question. Suppose we are comparing the coefficients of different models. Let’s say model 1 contains variables x1, x2, x3 and model 2 contains x1, x2, x3, x5. I do know that if there is a drastic difference in coefficients then there’s a potential multicollinearity problem. What if, regardless of what’s in the model and what’s added, the coefficients do not change? Does this simply imply there’s no multicollinearity?
Karen Grace-Martin says
Yes.
T says
Thanks for this, terminology and notation are the most impenetrable parts of understanding statistics.
Liz says
Hi,
I have a dichotomous dependent variable and am running a logistic regression. The predictor of interest is a random effect of medical group. The dependent variable is quitter (Y/N) of smoking.
1. How do I interpret the beta coefficient for medical group? For example, for medical group AX it is -.62.
2. I want to adjust my percentage of quitters for medical group AX by -.62. Do I add this to the total number of quitters in AX or the percentage of quitters in AX or something else?
Karen Grace-Martin says
Hi Liz,
The beta coefficient in a logistic regression is difficult to interpret because it’s on a log-odds scale. I would suggest you start with this free webinar which explains in detail how to interpret odds ratios instead: Understanding Probability, Odds, and Odds Ratios in Logistic Regression
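For what it’s worth, the standard conversion from a log-odds coefficient to an odds ratio is exponentiation. Using the −0.62 from the question as a purely illustrative number:

```python
import math

# Odds ratio = exp(beta). A coefficient of -0.62 on the log-odds scale
# means the odds of quitting in group AX are about 0.54 times the
# reference group's odds (i.e., roughly 46% lower).
odds_ratio = math.exp(-0.62)  # ~0.538
```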
IB says
How do I interpret my intercept when my independent variable is gender and my dependent variable is continuous? It’s a big number and I don’t get it.
Karen Grace-Martin says
Hi IB,
See this: https://www.theanalysisfactor.com/interpret-the-intercept/
anila says
How do I write the results of a multiple regression analysis in a PhD thesis according to APA style? Can I have an example?
Karen says
Hi Anila, hmm. It’s been a while since I’ve had to use APA style.
Mark says
How should I interpret the effects of an independent variable “age” (a continuous variable coded to range from (0) for the youngest to (1) for the oldest respondents) on my dependent variable “income” given a beta coefficient of 2.688823 ?
Lanre says
Older respondents predicted an increase in income by an average of 2.69
Karen Grace-Martin says
It sounds like oldest and youngest are categories. I wouldn’t use the term increase. I would say the oldest respondents have a mean income that is 2.69 units higher than the youngest respondents.
April says
If you have a directional hypothesis for an IV, is it acceptable to halve the two-tailed p-value for the t-value to obtain the one-tailed significance?
Juliet says
How do you interpret coefficients on discrete variables? For example, if sunlight was coded as 0 – no sunlight, 1 – partial sunlight and 2 – full sunlight, how would you interpret the coefficient on this independent variable?
Jon says
To handle categorical variables like in your example you would encode them into n−1 binary variables, where n is the number of categories; see here for example: http://appliedpredictivemodeling.com/blog/2013/10/23/the-basics-of-encoding-categorical-data-for-predictive-models
Ahmed says
hello
I used linear regression to control for IQ. How can I know if differences between two groups remain the same?
ENDALE Y says
Please make it easy and understandable.
Akosua says
Please, how do you interpret a regression result that shows zero as the coefficient? Thank you
Deniz says
If the B coefficient is 0, then there is no relationship between the dependent and independent variables.
Kanu says
Does this mean that a B coefficient just over 0, let’s say 0.58, isn’t as good as one which is 1.11?
What do the signs of the B coefficients mean? Is it an inverse association (−ve) or a direct association (+ve) with the dependent variable?
Rupon Basumatary says
Makes it easy to understand.
John says
In interpreting the coefficients of categorical predictor variables, what if X2 had several levels (several categories) instead of 0 and 1. Say, the soil was green, red, yellow or blue. How would you interpret quantitatively the differences in the coefficients? How much higher is the plant grown in green soil vs red soil?
Gio says
John, you can always transform a multi-level categorical variable into (levels − 1) two-level categorical variables.
In your example the soil variable would become:
– Soil_green (1,0)
– Soil_red (1,0)
– Soil_yellow (1,0)
You do not need a Soil_blue variable because when all the above are 0, then you know it is about blue soil.
him says
FYI – The above is commonly referred to as “dummy coding”
Martins Ahmed says
Really appreciate this exposition. It has to a great extent cleared up some difficulties I have been experiencing when it comes to interpreting the coefficients of a linear regression.