Suppose you are asked to create a model that will predict who will drop out of a program your organization offers. You decide to use a binary logistic regression because your outcome has two values: “0” for not dropping out and “1” for dropping out.
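As a quick sketch, fitting such a model in Python with scikit-learn might look like the following (the data file and column names here are hypothetical, just to fix ideas):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical data: one row per participant, with a 0/1 dropout outcome.
df = pd.read_csv("program_participants.csv")  # hypothetical file name

X = df[["age", "sessions_attended", "distance_to_site"]]  # hypothetical predictors
y = df["dropped_out"]  # 1 = dropped out, 0 = completed the program

model = LogisticRegression().fit(X, y)

# Predicted probability of dropping out for each participant
dropout_risk = model.predict_proba(X)[:, 1]
```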
Most of us were trained in building models for the purpose of understanding and explaining the relationships between an outcome and a set of predictors. But model building works differently for purely predictive models. Where do we go from here?
Explanatory Modeling
In explanatory modeling we are interested in identifying variables that have a scientifically meaningful and statistically significant relationship with an outcome.
The primary goal is to test theoretical hypotheses, so the emphasis is both on theoretically meaningful relationships and on determining whether each relationship is statistically significant (you know that wonderful feeling you get when your predictors have p-values less than 0.05).
Some of the steps in explanatory modeling include fitting potentially theoretically important predictors, checking for statistical significance, evaluating effect sizes, and running diagnostics.
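To make those steps concrete, here is a minimal sketch using statsmodels, again with the hypothetical dropout data from above:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("program_participants.csv")  # hypothetical file name

# Fit theoretically motivated predictors with a binary logistic regression.
fit = smf.logit(
    "dropped_out ~ age + sessions_attended + distance_to_site", data=df
).fit()

# Check statistical significance of each predictor.
print(fit.summary())   # coefficients, standard errors, p-values
print(fit.pvalues)

# Evaluate effect sizes: exponentiated coefficients are odds ratios.
print(np.exp(fit.params))

# A basic diagnostic: p-value of the overall likelihood-ratio test.
print(fit.llr_pvalue)
```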
Predictive Modeling
In predictive modeling our interest is different. Here the goal is to use the associations between predictors and the outcome variable to generate good predictions for future outcomes.
As a result, predictive models are built very differently from explanatory models. The primary goal is predictive accuracy.
Being able to explain why a variable “fits” in the model is left for discussion over beers after work. This gives you the latitude to use predictors that may not have any theoretical value.
Variables are chosen for a predictive model based on association with the outcome, not statistical significance or scientific meaning.
There are times when statistically significant variables will not be included in a predictive model. A significant predictor that adds no predictive benefit is excluded.
If the predictor is significant but only observable immediately before or at the time of the observed outcome, it cannot be used for predictions.
For example, theoretical models have shown that water temperatures are a highly significant factor in determining whether a tropical storm turns into a hurricane. That variable is not useful in a prediction model of the expected number of hurricanes during the upcoming season because it can only be measured immediately before an impending hurricane.
That’s too late.
A key strategy for successful predictive modeling is to explore. Transforming a continuous predictor, for example by squaring it or taking its square root, is one approach, as in the sketch below. The primary constraint on including a predictor is whether it will be available when the model is run on future data.
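For illustration, a short sketch of that kind of exploration (the distance_to_site column is hypothetical):

```python
import numpy as np
import pandas as pd

df = pd.read_csv("program_participants.csv")  # hypothetical file name

# Candidate transformations of a continuous predictor. Which version to
# keep is decided purely by out-of-sample predictive performance, not by
# whether the transformation is theoretically justified.
df["distance_sq"] = df["distance_to_site"] ** 2
df["distance_sqrt"] = np.sqrt(df["distance_to_site"])
```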
The primary risk when creating a predictive model is “overfitting.” Overfitting results from building a model that fits the current sample so closely that it is not a good representation of the population.
How can you reduce this risk?
Use half of your data to create your model. Then test your model on the other half.
The data used to create the model is known as the “training set.” The data used for testing the model is called the “testing” or “validation” set. This train/test process is a simple form of “cross-validation.”
You may find that you will need to modify your model to better fit both sets of data.
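A minimal sketch of that train/test process with scikit-learn, assuming the same hypothetical dropout data as above:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("program_participants.csv")  # hypothetical file name
X = df[["age", "sessions_attended", "distance_to_site"]]
y = df["dropped_out"]

# Split the sample in half: one half to build (train) the model,
# the other half to test (validate) it.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=42
)

model = LogisticRegression().fit(X_train, y_train)

# Predicting much better on the training half than on the testing half
# is a warning sign of overfitting.
print("train AUC:", roc_auc_score(y_train, model.predict_proba(X_train)[:, 1]))
print("test AUC: ", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```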
Jeff Meyer is a statistical consultant with The Analysis Factor, a stats mentor for Statistically Speaking membership, and a workshop instructor.
mahdi azhdari says
Hi, where do structural equation models fit into this topic? Are they a subset of predictive models?
Sujay Dutta says
SEM has typically been used for explanatory modeling. Predictive modeling usually relies on specialized techniques (e.g., ridge regression, random forests, neural networks).
Inekwe Murumba says
Thanks, Jeff, for the article. It is quite revealing. How does one cite this material? It does not include a year, which is a vital part of a citation.
TAF Support says
Hi Inekwe,
Generally, all of our pages can be cited by leaving out the publication date.
For APA, simply omit the publish date.
For MLA, specifically, add “Accessed 14 Sep. 2020.” to the end, changing the date to the day you accessed the material.
Sujay Dutta says
Professor Galit Shmueli has an eye-opening article and some related Youtube videos on the topic. Her article “To Explain or To Predict?” was published in Statistical Science in 2010 and can be found here: https://www.stat.berkeley.edu/~aldous/157/Papers/shmueli.pdf
Teena says
Thank you Jeff, great article. My doubts were cleared up after reading this!
Odwa Nondlozi says
I found this article very informative, as I had little knowledge of this topic. Thank you!
Rebecca says
Thanks Jeff, this was very helpful! Is overfitting only a potential problem in predictive models then, or can it be a problem in explanatory models, too?
Jeff Meyer says
Hi, glad you found it worth reading. Yes, overfitting can be an issue with an explanatory model as well. Keeping predictors with a very small effect size (in a model with a continuous outcome), an odds ratio close to one (logistic model), or an IRR close to one (count model) can be problematic if you try to replicate your results with a different data set.
Rebecca says
That makes sense, thanks Jeff!