You may have noticed conflicting advice about whether to leave insignificant effects in a model or take them out in order to simplify the model.
One effect of leaving in insignificant predictors is on p-values: each one uses up a residual degree of freedom, which is precious in small samples. But if your sample isn't small, the effect is negligible.
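To see the degrees-of-freedom point concretely, here is a minimal sketch (my own illustration; the post names no software) using Python's scipy: each predictor in a regression costs one residual degree of freedom, and the critical t-value a coefficient must clear grows as residual df shrinks.

```python
# Each predictor in OLS costs one residual degree of freedom, and the
# critical t-value (the bar a coefficient must clear at alpha = .05)
# grows as residual df shrinks. Hypothetical sample sizes below.
from scipy import stats

for n, k in [(15, 2), (15, 8), (500, 2), (500, 8)]:
    df_resid = n - k - 1  # residual df after k predictors plus intercept
    t_crit = stats.t.ppf(0.975, df_resid)  # two-sided 5% critical value
    print(f"n={n:4d}, predictors={k}: residual df={df_resid:3d}, "
          f"critical t={t_crit:.3f}")
```

At n = 15, going from 2 to 8 predictors raises the bar noticeably; at n = 500 the difference is negligible, which is exactly the point above.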
The bigger effect is on interpretation, and the cases above are really about whether leaving them in aids interpretation. Models can get so cluttered that it's hard to figure out what's going on, and it makes sense to eliminate effects that aren't serving a purpose. But even insignificant effects can serve a purpose.
So here are three situations where there is a purpose in showing that specific predictors were not significant, and in estimating their coefficients anyway:
1. Expected control variables. You need to show that you’ve controlled for them.
In many fields, there are control variables that everyone expects to see.
- Age in medical studies
- Race, income, education in sociological studies
- Socioeconomic status in education studies
The examples go on and on.
If you take these expected controls out, you will just get criticism for not including them. And it may be interesting to show that in this sample and with these variables, these controls weren’t significant.
2. Predictors you have specific hypotheses about.
Another situation is when the point of the model is to test a specific predictor: you have a hypothesis about it, and it's meaningful to show that it's not significant. In that case, I would leave it in, even if it's not significant.
3. Terms involved in higher-order terms.
When you take out a term that is involved in a higher-order one, like a two-way interaction that is part of a three-way interaction, you actually change the meaning of the higher-order term. The sum of squares for each higher-order term is based on comparisons to specific means and represents variation around those means.
If you take out the lower-order term, that variation has to be accounted for somewhere, and it's usually not where you expect. For example, a two-way interaction represents the variation of the cell means around the main effect means. But if the variation between the main effect means isn't captured by a main effect term, it ends up in the interaction, and that interaction no longer reflects the variation it did when the main effect was in the model.
So it’s not that it’s wrong, but it changes the meaning of the interaction. For that reason, most people recommend leaving those lower-order effects in.
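A quick simulated demonstration can make this concrete. The sketch below is my own illustration (not from the post), using Python's statsmodels with made-up factor names A and B: fitting a balanced two-factor design with and without the main effect of A shows the interaction term's sum of squares absorbing A's main-effect variation once A is dropped.

```python
# Demonstration: dropping a main effect changes what the "interaction"
# sum of squares measures. Simulated balanced two-factor data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "A": np.repeat(["a1", "a2"], 40),
    "B": np.tile(np.repeat(["b1", "b2"], 20), 2),
})
# Outcome with real A and B main effects plus a small interaction.
df["y"] = (
    (df["A"] == "a2") * 2.0      # main effect of A
    + (df["B"] == "b2") * 1.0    # main effect of B
    + ((df["A"] == "a2") & (df["B"] == "b2")) * 0.5  # interaction
    + rng.normal(0, 1, len(df))
)

full = smf.ols("y ~ C(A) * C(B)", data=df).fit()
no_main_A = smf.ols("y ~ C(B) + C(A):C(B)", data=df).fit()

print(anova_lm(full))       # interaction SS = variation of cell means
                            # around the main-effect means
print(anova_lm(no_main_A))  # "interaction" now absorbs A's main effect
```

In the second ANOVA table, the C(A):C(B) row's sum of squares equals the sum of the C(A) and C(A):C(B) rows from the first table (exactly so in this balanced design): the main-effect variation didn't disappear, it just moved into the interaction and changed its meaning.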
The main point here is that there are often good reasons to leave insignificant effects in a model. The p-values are just one piece of information. You may be losing important information by automatically removing everything that isn't significant.
John Rogers says
Hi, Karen,
Thanks for what I found to be sensible and thoughtful advice on whether to leave in or take out variables that are not significant in the inter-correlational matrix in a multiple regression. I’m going to leave them in (and get my students to do the same thing). After all, we start out with a model that we believe in, based on previous research (or we should do), and it is this model that we should look to test right through the analysis. Also, null findings need to be discussed as well as significant findings (in my subject, psychology, it might be argued that they are often not discussed enough).
Thanks,
John
Paul Murphy says
Hi Karen
Thanks, this is useful. I am using a regression model to forecast loan delinquency. Are there more reasons for retaining insignificant variables in a forecasting model? I'm sure I've seen something on this site about this but I cannot find it now. If you could point me to it I would be very grateful.
Many Thanks
Best wishes
Paul Murphy
Karen Grace-Martin says
Hi Paul,
In a predictive model, significance doesn’t really matter one way or the other. Whatever gives you the best predictions is the best model.
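For instance, here is a minimal sketch of that idea (my own illustration with simulated data, not anything from this thread): compare candidate specifications by cross-validated predictive performance using scikit-learn, and keep whichever predicts better, regardless of significance.

```python
# Compare model specifications by out-of-sample performance rather than
# by coefficient p-values. Hypothetical delinquency-style data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))  # 5 candidate predictors (simulated)
# Only the first two actually drive the outcome in this simulation.
y = (X[:, 0] + 0.2 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression(max_iter=1000)
auc_full = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
auc_reduced = cross_val_score(model, X[:, :2], y, cv=5,
                              scoring="roc_auc").mean()
print(f"all 5 predictors, CV AUC:   {auc_full:.3f}")
print(f"first 2 predictors, CV AUC: {auc_reduced:.3f}")
# Keep whichever specification predicts better out of sample;
# coefficient significance doesn't enter into the choice.
```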
Nicho says
Hi Karen,
Can you share an article that elaborates on why the insignificance of each explanatory variable does not matter and it's okay to proceed with the prediction?
L says
What is the importance of dropping non-significant interactions? Above you allude to their affecting p-values and taking up df. How does it affect p exactly? Also, do you have any peer-reviewed articles or book pages that specify why it is important to drop non-significant interactions? I need some literature to cite in my dissertation about this.
Thanks in advance!!
Bhargav says
Hey Karen,
While building a model, when I add a function or an interaction effect, my adjusted R² value increases; however, the term I have added is not significant. In this case, can I still keep the term, or should I omit it?
Joshua says
I'm running an ARDL model for factors affecting growth of exports, but surprisingly world price is giving a wrong sign in the long run, although it has the correct sign in the short run. However, it is insignificant in both cases. Kindly advise on whether I can just ignore it and go ahead with writing my thesis. All the post-estimation diagnostics are OK.
Nitzan says
Hi Karen,
I am performing linear regression analysis in EViews, and half of my variables unfortunately have p-values larger than 0.05. Even though I dropped one variable after detecting multicollinearity, it still doesn't change the p-values of the other variables.
My research deals with the effect of explanatory variables such as R&D expenses, company size, repeated/new partners in alliances, total alliances, and some more, on the number of patents per year in the pharma industry. My question is: what is the implication in that kind of case? What can I do? This is a case study of 3 companies and the sample size is 66 observations.
Thank you in advance
Nitzan.
Kate says
Hi Karen,
A great article – however I’m having trouble applying it to my own data.
In Cox regression survival analysis with two categorical (binary) IVs/factors, when I try to include an interaction term between my two factors, all significance “disappears”. To explain:
CR Output (SPSS): With no interaction term specified:
Factor 1 B=-0.354, Sig=0.288
Factor 2 B=-0.753, Sig=0.025
So it looks like Factor 2 has a significant effect on my outcome variable.
However,
CR Output (SPSS): Including an interaction term:
Factor 1 B=-0.124, Sig=0.786
Factor 2 B=-0.528, Sig=0.246
Factor1*Factor2 B=-0.496, Sig=0.609
No significant effects of anything! 🙁
The research question asks about the effects of each factor individually, and also we are interested to see if there is an interaction between factors. I am unsure how to report this result, seeing as there does seem to be a significant effect for Factor 2, but I think I do need to leave the interaction term in the model, which makes the Factor 2 effect insignificant.
Can I make any statements about how Factor 1 and (more importantly) Factor 2 affect survival while explaining there was no significant interaction between the two? Or is it reasonable to say that we tested for an interaction and didn’t find a significant one, and go back to the model without the interaction term?
Cheers
Karen says
Hi Kate,
You don’t mention the coding of the factors, but I’m going to guess they’re dummy coded. When they are, and you have an interaction in the model, the values of the “main effects” are not main effects. So for example, Factor 1 B in the first model is the effect of F1 at ANY value of F2. But in the second model, Factor 1 B is the effect of factor 1 ONLY when Factor 2=0.
Here’s an article I wrote on it:
https://www.theanalysisfactor.com/interpreting-interactions-in-regression/
The other option is to use effect coding (1/-1) instead of dummy coding (1/0). It makes each first-order effect the effect at the mean of the other factor, i.e., a true main effect, but the Bs themselves aren't very interpretable. This is what ANOVA uses.
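To see the coding difference in action, here is a small simulated sketch (my own, using an ordinary linear model for simplicity rather than Cox regression; the factor names are made up):

```python
# With dummy (0/1) coding plus an interaction, each "main effect"
# coefficient is the effect of that factor when the other equals 0.
# With effect (-1/1) coding, it is the effect at the mean of the other.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
df = pd.DataFrame({"F1": rng.integers(0, 2, 200),
                   "F2": rng.integers(0, 2, 200)})
df["y"] = (df["F1"] + 2 * df["F2"] + 1.5 * df["F1"] * df["F2"]
           + rng.normal(0, 1, 200))

dummy = smf.ols("y ~ F1 * F2", data=df).fit()

# Recode 0/1 -> -1/1 (effect coding) and refit.
df["E1"] = 2 * df["F1"] - 1
df["E2"] = 2 * df["F2"] - 1
effect = smf.ols("y ~ E1 * E2", data=df).fit()

print(dummy.params)   # F1 coefficient = effect of F1 only where F2 == 0
print(effect.params)  # E1 coefficient = half the F1 effect averaged
                      # over F2 (half because the codes differ by 2)
```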
JC says
So if I retain those insignificant terms in my final model, do I need to interpret them? For example, in multiple regression I found age was not significant; do I need to interpret this in the usual way?
Thanks.
Karen says
Hi JC,
You would interpret it as a null effect. So if you had a coefficient that was b = 2.5 but insignificant, you would interpret it as being indistinguishable from 0. So you would not worry if, for example, the sign was opposite what you expected.
Karen
Marc Fey says
How do I reach your blog? I keep getting close and then turned away. I’d like to read 528.
Marc
Karen says
Hi Marc,
You’re on the blog. What do you mean by 528? It’s possible something got renamed when we did our website update last fall. I tried to catch everything, but maybe you found something I missed.
Karen