Data Analysis Practice

The Distribution of Independent Variables in Regression Models

April 9th, 2009

I often hear concern about the non-normal distributions of independent variables in regression models, and I am here to ease your mind.

There are NO assumptions in any linear model about the distribution of the independent variables.  Yes, you only get meaningful parameter estimates from nominal (unordered categories) or numerical (continuous or discrete) independent variables.  But no, the model makes no assumptions about them.  They do not need to be normally distributed or continuous.

It is useful, however, to understand the distribution of predictor variables to find influential outliers or concentrated values.  A highly skewed independent variable may be made more symmetric with a transformation.
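This is easy to demonstrate with a quick sketch in Python (just NumPy, not SPSS, with made-up data): the regression fits perfectly well on a raw, heavily skewed predictor, and a log transformation changes only how you interpret the slope, not the legitimacy of the model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# A strongly right-skewed predictor (lognormal) -- perfectly legal in OLS.
x = rng.lognormal(mean=0.0, sigma=1.0, size=n)
y = 2.0 + 0.5 * np.log(x) + rng.normal(scale=0.3, size=n)

def ols_fit(x, y):
    """Least-squares intercept and slope of y on x."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_raw = ols_fit(x, y)          # the model runs fine on the skewed x
b_log = ols_fit(np.log(x), y)  # transforming x changes interpretation, not validity

print("slope on raw x: %.3f" % b_raw[1])
print("slope on log x: %.3f" % b_log[1])
```

The normality assumption in regression is about the residuals, not the predictors, which is why both of these fits are legitimate.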

 


Respect Your Data

February 13th, 2009

The steps you take to analyze data are just as important as the statistics you use. Mistakes and frustration in statistical analysis come as much, if not more, from poor process than from using the wrong statistical method.

Benjamin Earnhart of the University of Iowa has written a short (and humorous) article entitled “Respect Your Data” (requires LinkedIn account) that describes 23 practical steps that data analysts must take. This article was published in the newsletter of the American Statistical Association and has since been expanded and annotated.

 


Order affects Regression Parameter Estimates in SPSS GLM

February 6th, 2009

I just discovered something in SPSS GLM that I never knew.

When you have an interaction in the model, the order you put terms into the Model statement affects which parameters SPSS gives you.

The default in SPSS is to automatically create interaction terms among all the categorical predictors.  But if you want fewer than all those interactions, or if you want to put in an interaction involving a continuous variable, you need to choose Model–>Custom Model.

In the specific example of an interaction between a categorical and continuous variable, to interpret this interaction you need to output Regression Coefficients. Do this by choosing Options–>Regression Parameter Estimates.

If you put the main effects into the model first, followed by interactions, you will find the usual output–the regression coefficient (column B) for the continuous variable is the slope for the reference group.  The coefficients for the interactions in the other categories tell you the difference between the slope for that category and the slope for the reference group.  The coefficient for the interaction in the reference group is 0.

What I was surprised to find is that if the interactions are put into the model first, you don’t get that.

Instead, the coefficient for the interaction in each category is the actual slope for that group, NOT the difference.

This is actually quite useful–it can save a bit of calculating and now you have a p-value for whether each slope is different from 0.  However, it also means you have to be cautious and make sure you realize what each parameter estimate is actually estimating.
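SPSS aside, the algebra behind the two orderings is easy to reproduce. Here is a hedged sketch in Python (NumPy only, with simulated data, not SPSS output) showing the two parameterizations side by side: one with the main effect of x plus an interaction dummy, and one with a separate x-by-group column for each group.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
g = rng.integers(0, 2, size=n)   # 0 = reference group A, 1 = group B
# True slopes: 1.0 for group A, 3.0 for group B.
y = 1.0 * x * (g == 0) + 3.0 * x * (g == 1) + rng.normal(scale=0.1, size=n)

def fit(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

ones = np.ones(n)
dB = (g == 1).astype(float)

# Parameterization 1: x main effect plus one interaction dummy.
# The x coefficient is the reference-group slope; the interaction
# coefficient is the DIFFERENCE in slopes (B minus A).
X1 = np.column_stack([ones, dB, x, x * dB])
b1 = fit(X1, y)

# Parameterization 2: no x main effect; one x-by-group column per group.
# Each coefficient is that group's ACTUAL slope.
X2 = np.column_stack([ones, dB, x * (1 - dB), x * dB])
b2 = fit(X2, y)

print("param 1: slope(A)=%.2f, slope difference=%.2f" % (b1[2], b1[3]))
print("param 2: slope(A)=%.2f, slope(B)=%.2f" % (b2[2], b2[3]))
```

Both designs span the same column space, so the fitted values are identical; only the meaning of the individual coefficients changes, which is exactly why you have to check what each parameter estimate is estimating.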

 


The Great Likert Data Debate

January 9th, 2009

I first encountered the Great Likert Data Debate in 1992 in my first statistics class in my psychology graduate program.

My stats professor was a brilliant mathematical psychologist and taught the class unlike any psychology grad class I’ve ever seen since.  Rather than learn ANOVA in SPSS, we derived the Method of Moments using Matlab.  While I didn’t understand half of what was going on, this class roused my curiosity and led me to take more theoretical statistics classes.  The rest is history.

A large section of the class was dedicated to the fact that Likert data was not interval and therefore not appropriate for statistics that assume normality, such as ANOVA and regression.  This was news to me.  Meanwhile, most of the rest of the field either ignored or debated this assertion.

Sixteen years later, the debate continues.  A nice discussion of the debate is found on the Research Methodology blog by Hisham bin Md-Basir.  It’s a nice blog with thoughtful entries that summarize methodological articles in the social and design sciences.

To be fair, though, this blog entry summarizes an article on the “Likert scales are not interval” side of the debate.  For a balanced listing of references, see Can Likert Scale Data Ever Be Continuous?

 


A Reason to Not Drop Outliers

September 23rd, 2008

I recently had this question in consulting:

I’ve got 12 out of 645 cases with Mahalanobis’s Distances above the critical value, so I removed them and reran the analysis, only to find that another 10 cases were now outside the value. I removed these, and another 10 appeared, and so on until I have removed over 100 cases from my analysis! Surely this can’t be right!?! Do you know any way around this? It is really slowing down my analysis and I have no idea how to sort this out!!

And this was my response:

I wrote an article about dropping outliers.  As you’ll see, you can’t just drop outliers without a REALLY good reason.  Being influential is not in itself a good enough reason to drop data.
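To see why the cascade happens, here is a small Python simulation (NumPy and SciPy, made-up data, not the questioner’s): each time you drop the flagged cases and re-estimate the mean and covariance, the covariance shrinks, so borderline cases that were fine before can cross the critical value.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
n, p = 645, 3
X = rng.standard_t(df=5, size=(n, p))   # heavy-tailed, so plenty of "outliers"

def mahalanobis_sq(X):
    """Squared Mahalanobis distance of each row from the sample centroid."""
    dev = X - X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    return np.einsum('ij,jk,ik->i', dev, inv_cov, dev)

cutoff = chi2.ppf(0.999, df=p)   # the usual p < .001 critical value

X_current = X
for step in range(1, 6):
    d2 = mahalanobis_sq(X_current)
    flagged = d2 > cutoff
    if not flagged.any():
        break
    print("step %d: %d cases above the critical value" % (step, flagged.sum()))
    X_current = X_current[~flagged]   # drop them and re-estimate: the trap
```

Because the critical value is a tail cutoff on the re-estimated distribution, there will almost always be *something* in the tail of whatever data remain, which is exactly why iterative deletion is not a sensible stopping rule.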

 


Outliers: To Drop or Not to Drop

September 17th, 2008

Should you drop outliers? Outliers are one of those statistical issues that everyone knows about, but most people aren’t sure how to deal with.  Most parametric statistics, like means, standard deviations, and correlations, and every statistic based on these, are highly sensitive to outliers.

And since the assumptions of common statistical procedures, like linear regression and ANOVA, are also based on these statistics, outliers can really mess up your analysis.
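A tiny simulated example (Python/NumPy, hypothetical numbers) shows just how sensitive these statistics are: a single wild point shifts the mean, inflates the standard deviation, and wrecks a correlation.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
x = rng.normal(size=n)
y = x + rng.normal(scale=0.5, size=n)   # genuinely correlated data

r_clean = np.corrcoef(x, y)[0, 1]

# Append one wild point and recompute everything.
x_out = np.append(x, 10.0)
y_out = np.append(y, -10.0)
r_dirty = np.corrcoef(x_out, y_out)[0, 1]

print("mean of x:  %.2f -> %.2f" % (x.mean(), x_out.mean()))
print("sd of x:    %.2f -> %.2f" % (x.std(), x_out.std()))
print("corr(x,y):  %.2f -> %.2f" % (r_clean, r_dirty))
```

One observation out of fifty-one is enough to drag a strong positive correlation down toward zero, and every procedure built on these statistics inherits that sensitivity.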


Despite all this, as much as you’d like to, it is NOT acceptable to

(more…)