The Analysis Factor Statwise Newsletter
Volume 1, Issue 1
In This Issue

A Note from Karen

Featured Article: A Comparison of Effect Size Statistics

Resource of the Month

What's New

About Us

 
Quick Links

Our Website

More About Us

You received this email because you subscribed to The Analysis Factor's list community. To change your subscription, see the link at end of this email. If your email is having trouble with the format, click here for a web version.

Please forward this to anyone you know who might benefit. If you received this from a friend, sign up for this email newsletter now!
A Note from Karen

Dear %$firstname$%,

Happy New Year! I hope you had a relaxing and peaceful new year. I always find myself feeling both gratitude and anticipation at this time of year.

We're still planning out all of our offerings for the year ahead, so you'll be hearing more soon. But one thing we did manage to finish is a self-study version of one of our workshops. We've had many requests for self-study versions of our workshops--it's inevitable that our workshop timing doesn't work out for everyone.

So you can now purchase a self-study version of Assumptions of Linear Models. And to celebrate, I'm running a complimentary Question & Answer webinar on January 20th, noon eastern time. If you want to come to the session, you should purchase it in the next few days, so you have time to work through the material. If you'd rather have one-on-one help, one option we're including is to get a Quick Question Consultation at a discounted rate along with the workshop.

We will create similar self-study versions for some more of our workshops over the next few months. Two more are nearly done. We're also planning some ebook versions for those of you who prefer to read, and don't need all the exercises.

We're also planning a new workshop on Mixed Models for Repeated Measures Data and some big updates to our Craft of Statistical Analysis Webinars. Lots going on this year. In the meantime, I hope you enjoy this article on the various effect size measures in linear models. It's a question that comes up quite often.

Happy analyzing,
Karen

Featured Article: A Comparison of Effect Size Statistics

If you're in a field that uses Analysis of Variance, you have surely heard that p-values alone don't indicate the size of an effect. You also need to give some sort of effect size measure.

Why? Because with a big enough sample size, any difference in means, no matter how small, can be statistically significant. P-values are designed to tell you if your result is a fluke, not if it's big.

The simplest and most straightforward effect size measure is the difference between two means. And you're probably already reporting that. But the limitation of this measure as an effect size is not inaccuracy. It's just hard to evaluate.

If you're familiar with an area of research and the variables used in that area, you should know if a 3-point difference is big or small, although your readers may not.  And if you're evaluating a new type of variable, it can be hard to tell.

Standardized effect sizes are designed for easier evaluation.  They remove the units of measurement, so you don't have to be familiar with the scaling of the variables. 

Cohen's d is a good example of a standardized effect size measurement. It's equivalent in many ways to a standardized regression coefficient (labeled beta in some software). Both are standardized measures: they divide the size of the effect by the relevant standard deviations. So instead of being in terms of the original units of X and Y, both Cohen's d and standardized regression coefficients are in terms of standard deviations.
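As a quick sketch of the arithmetic, here is Cohen's d computed from the pooled standard deviation. The group names and data values are invented purely for illustration:

```python
from math import sqrt
from statistics import mean, variance  # variance() is the sample variance (n - 1)

def cohens_d(group1, group2):
    """Cohen's d: the mean difference divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    # Pool the two sample variances, weighting each by its degrees of freedom.
    pooled_var = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    return (mean(group1) - mean(group2)) / sqrt(pooled_var)

# Hypothetical scores for two groups (made-up numbers):
treatment = [5.1, 4.8, 6.2, 5.5, 5.9]
control = [4.2, 4.0, 5.1, 4.6, 4.4]
print(round(cohens_d(treatment, control), 2))  # -> 2.07
```

Note that the result here is greater than one, which is perfectly legitimate for Cohen's d: the group means are simply more than two standard deviations apart.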

There are some nice properties of standardized effect size measures.  The foremost is you can compare them across variables.  And in many situations, seeing differences in terms of number of standard deviations is very helpful. 

But they're most useful if you can also recognize their limitations.  Unlike correlation coefficients, both Cohen's d and beta can be greater than one.  So while you can compare them to each other, you can't just look at one and tell right away what is big or small.  You're just looking at the effect of the independent variable in terms of standard deviations.

This is especially important to note for Cohen's d, because in his original book, he specified certain d values as indicating small, medium, and large effects in behavioral research.  While the statistic itself is a good one, you should take these size recommendations with a grain of salt (or maybe a very large bowl of salt).  What is a large or small effect is highly dependent on your specific field of study, and even a small effect can be theoretically meaningful.

Another set of effect size measures for categorical independent variables has a more intuitive interpretation and is easier to evaluate. These measures include Eta Squared, Partial Eta Squared, and Omega Squared. Like the R Squared statistic, they all have the intuitive interpretation of the proportion of variance accounted for.

Eta Squared is calculated the same way as R Squared, and has the most equivalent interpretation: out of the total variation in Y, the proportion that can be attributed to a specific X.

Eta Squared, however, is used specifically in ANOVA models.  Each categorical effect in the model has its own Eta Squared, so you get a specific, intuitive measure of the effect of that variable.

Eta Squared has two drawbacks, however.  One is that as you add more variables to the model, the proportion explained by any one variable will automatically decrease.  This makes it hard to compare the effect of a single variable in different studies.

Partial Eta Squared solves this problem, but has a less intuitive interpretation. There, the denominator is not the total variation in Y, but the unexplained variation in Y plus the variation explained just by that X. So any variation explained by other Xs is removed from the denominator. This allows a researcher to compare the effect of the same variable in two different studies, which contain different covariates or other factors.
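A small sketch may make the difference in denominators concrete. The sums of squares below are made-up numbers standing in for a hypothetical two-factor ANOVA table:

```python
def eta_squared(ss_effect, ss_total):
    """Eta Squared: effect variation as a proportion of total variation in Y."""
    return ss_effect / ss_total

def partial_eta_squared(ss_effect, ss_error):
    """Partial Eta Squared: the denominator drops variation explained by other Xs."""
    return ss_effect / (ss_effect + ss_error)

# Hypothetical sums of squares for factors A and B plus error:
ss_a, ss_b, ss_error = 40.0, 110.0, 50.0
ss_total = ss_a + ss_b + ss_error  # 200.0

print(round(eta_squared(ss_a, ss_total), 3))          # -> 0.2
print(round(partial_eta_squared(ss_a, ss_error), 3))  # -> 0.444
```

Because Partial Eta Squared removes factor B's variation from the denominator, it comes out larger than Eta Squared for the same factor A; in a one-way ANOVA (no factor B) the two denominators coincide and the measures are equal.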

In a one-way ANOVA, Eta Squared and Partial Eta Squared will be equal, but this isn't true in models with more than one independent variable.

The second drawback is that Eta Squared is a biased measure of the variance explained in the population (although it is accurate for the sample). It always overestimates it.

This bias gets very small as sample size increases, but for small samples an unbiased effect size measure is Omega Squared. Omega Squared has the same basic interpretation, but uses unbiased measures of the variance components. Because it is an unbiased estimate of population variances, Omega Squared is always smaller than Eta Squared.
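To see the adjustment at work, here is a minimal sketch of one common formula for Omega Squared, computed from hypothetical ANOVA-table quantities (all numbers invented for illustration):

```python
def omega_squared(ss_effect, df_effect, ss_total, ms_error):
    """Omega Squared: corrects the effect sum of squares using the error mean square."""
    return (ss_effect - df_effect * ms_error) / (ss_total + ms_error)

# Hypothetical table: SS_effect = 40 on 2 df, SS_total = 200, MS_error = 2.
print(round(omega_squared(40.0, 2, 200.0, 2.0), 3))  # -> 0.178
# The corresponding Eta Squared would be 40 / 200 = 0.20, so Omega Squared
# is smaller, as expected for the bias-corrected estimate.
```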

For the equations of all these effect size measures, a list of great references for further reading, or to ask questions or comment about this article, see our blog.

Resource of the Month

Quick Question Consulting

Many times all you need is a single consultation to get unstuck.

Maybe you're struggling with a statistical issue: you've tried Google and the library, and you've gotten conflicting advice from colleagues. Now you just want a real answer from someone who sees the big picture and can explain what it means and what to do next, so you can move on with your project.

That's why we offer Quick Question Consultations: maximal solutions for a minimal investment of time and money.

For all the details and to sign up, click here.

What's New

The next Craft of Statistical Analysis Webinar:

Random Intercept and Random Slope Models

This webinar will outline and demonstrate one of the core concepts of mixed modeling—the random intercept and the random slope. You’ll learn what they mean, what they do, and how to decide if one or both are needed. It’s the first step in understanding mixed modeling.

Get more information and register here.

Self-Study Workshop Now Available:

Assumptions of Linear Models

If you missed this workshop in the fall, you have another chance. Assumptions of Linear Models Workshop is now available in a self-study version.

And to celebrate, we're holding a Question & Answer webinar session for anyone who purchases by this Friday, January 14th.

Yes, it's close, but the webinar is Thursday, January 20th at noon, and we want to make sure you have time to go through the material and exercises before the session.

Get more information and register here.

About Us

What is The Analysis Factor? The Analysis Factor is the difference between knowing about statistics and knowing how to use statistics in data analysis. It acknowledges that statistical analysis is an applied skill. It requires learning how to use statistical tools within the context of a researcher’s own data, and supports that learning.

The Analysis Factor, the organization, offers statistical consulting, resources, and learning programs that empower researchers to become confident, able, and skilled statistical practitioners. Our aim is to make your journey acquiring the applied skills of statistical analysis easier and more pleasant.

Karen Grace-Martin, the founder, spent seven years as a statistical consultant at Cornell University. While there, she learned that being a great statistical advisor is not only about having excellent statistical skills, but also about understanding the pressures and issues researchers face, about fabulous customer service, and about communicating technical ideas at a level each client understands.

You can learn more about Karen Grace-Martin and The Analysis Factor at analysisfactor.com.

Please forward this newsletter to colleagues who you think would find it useful. Your recommendation is how we grow.

If you received this email from a friend or colleague, click here to subscribe to this newsletter.

Need to change your email address? See below for details.

No longer wish to receive this newsletter? See below to cancel.