The Difference Between R-squared and Adjusted R-squared

August 22nd, 2022 by

When is it important to use adjusted R-squared instead of R-squared?

R², the Coefficient of Determination, is one of the most useful and intuitive statistics we have in linear regression.

It tells you how well the model predicts the outcome and has some nice properties. But it also has one big drawback.

(more…)
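
Here's the usual complaint in a nutshell, as a minimal sketch (assuming Python with statsmodels; the post doesn't require any particular software): R² can never go down when you add a predictor, even a useless one. Adjusted R², computed as 1 − (1 − R²)(n − 1)/(n − p − 1), where n is the sample size and p is the number of predictors, applies a penalty for each extra predictor and can go down.

import numpy as np
import statsmodels.api as sm

# Simulated data: y depends only on x1; noise_x is an unrelated predictor.
rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
noise_x = rng.normal(size=n)
y = 2 + 0.5 * x1 + rng.normal(size=n)

m1 = sm.OLS(y, sm.add_constant(x1)).fit()
m2 = sm.OLS(y, sm.add_constant(np.column_stack([x1, noise_x]))).fit()

# R-squared will not decrease when noise_x is added; adjusted R-squared can.
print(m1.rsquared, m1.rsquared_adj)
print(m2.rsquared, m2.rsquared_adj)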


Exogenous and Endogenous Variables in Structural Equation Modeling

July 22nd, 2022 by

In most regression models, there is one response variable and one or more predictors. From the model’s point of view, it doesn’t matter if those predictors are there to predict, to moderate, to explain, or to control. All that matters is that they’re all Xs, on the right side of the equation.

(more…)
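
For example (my notation, just to illustrate the point), in the single equation Y = b0 + b1*X1 + b2*X2 + b3*X1*X2 + e, the variable X2 looks exactly the same whether it's a focal predictor, a moderator (through the X1*X2 term), or only a control variable: it's one more X on the right-hand side. Structural equation modeling, as the title suggests, classifies variables differently, as exogenous or endogenous.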


When Linear Models Don’t Fit Your Data, Now What?

June 20th, 2022 by

When your dependent variable is not continuous, unbounded, and measured on an interval or ratio scale, linear models don't fit. The data just will not meet the assumptions of linear models. But there's good news: other models exist for many types of dependent variables.

Today I’m going to go into more detail about 6 common types of dependent variables that are discrete, bounded, or measured on a nominal or ordinal scale (some are all of these), and the models that work for them instead.

(more…)
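
To make one common case concrete, here's a minimal sketch (assuming Python with statsmodels; any similar setup would do): a binary 0/1 outcome breaks the linear model's assumptions, and logistic regression is a standard alternative.

import numpy as np
import statsmodels.api as sm

# Simulate a binary outcome whose probability depends on x.
rng = np.random.default_rng(1)
x = rng.normal(size=200)
prob = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))
y = rng.binomial(1, prob)

# A linear model would predict values outside 0-1; logistic regression
# models the log-odds of y = 1 instead.
logit_fit = sm.Logit(y, sm.add_constant(x)).fit()
print(logit_fit.params)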


What Is Specification Error in Statistical Models?

June 8th, 2022 by

When we think about model assumptions, we tend to focus on assumptions like independence, normality, and constant variance. The other big assumption, which is harder to see or test, is that there is no specification error. The assumption of linearity is part of this, but it’s actually a bigger assumption.

What is this assumption of no specification error? (more…)
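
One simple way to picture the linearity piece of it, as a rough sketch (assuming Python with statsmodels, purely for illustration): if the outcome really depends on a predictor in a curved way, a model that fits only a straight line is misspecified, even when the other assumptions look fine.

import numpy as np
import statsmodels.api as sm

# The true relationship is quadratic in x.
rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, size=150)
y = 1 + 0.5 * x + 0.8 * x**2 + rng.normal(size=150)

misspecified = sm.OLS(y, sm.add_constant(x)).fit()                      # omits the x^2 term
well_specified = sm.OLS(y, sm.add_constant(np.column_stack([x, x**2]))).fit()

# The misspecified model's residuals curve when plotted against x, a telltale
# sign of specification error, and its fit is clearly worse.
print(misspecified.rsquared, well_specified.rsquared)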


The Difference Between an Odds Ratio and a Predicted Odds

May 20th, 2022 by

When interpreting the results of a regression model, the first step is to look at the regression coefficients. Each term in the model has one. And each one describes the average difference in the value of Y for a one-unit difference in the value of the predictor variable, X, that makes up that term. It’s the effect size statistic for that term in the model. (more…)
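
In logistic regression, where this distinction comes up, the coefficient is on the log-odds scale. Exponentiating a coefficient gives an odds ratio, the multiplicative change in the odds for a one-unit difference in X, while a predicted odds is the odds of the outcome at one specific value of X. A minimal sketch (assuming Python with statsmodels; the post doesn't include code):

import numpy as np
import statsmodels.api as sm

# Simulated binary outcome for a logistic regression.
rng = np.random.default_rng(3)
x = rng.normal(size=300)
prob = 1 / (1 + np.exp(-(-0.5 + 0.9 * x)))
y = rng.binomial(1, prob)

fit = sm.Logit(y, sm.add_constant(x)).fit()
b0, b1 = fit.params

odds_ratio = np.exp(b1)                  # ratio of odds per 1-unit increase in x
predicted_odds = np.exp(b0 + b1 * 1.0)   # odds that y = 1 when x = 1
print(odds_ratio, predicted_odds)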


Three Habits in Data Analysis That Feel Efficient, Yet are Not

February 21st, 2022 by

It’s easy to develop bad habits in data analysis. When you’re new to it, you just don’t have enough experience to realize that what feels like efficiency will actually come back to make things take longer, introduce problems, and lead to more frustration. (more…)