ANOVA

Member Training: Power Analysis and Sample Size Determination Using Simulation

July 30th, 2018
This webinar will show you strategies and steps for using simulations to estimate sample size and power. You will get:
  • A review of basic concepts of statistical power and effect size
  • A simulation-based approach to power analysis (a minimal sketch of the idea follows the list below)
  • An overview of how to implement simulations in various popular software programs.
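
To make the simulation-based approach concrete, here is a minimal sketch (not the webinar's code) in Python. It assumes a two-group t-test, a true effect size of d = 0.5, and 64 participants per group, then estimates power as the proportion of simulated datasets in which the test rejects.

```python
# Minimal sketch of simulation-based power analysis for a two-group t-test.
# The effect size (d = 0.5), group size (64), and alpha are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2018)
n_per_group, effect_size, n_sims, alpha = 64, 0.5, 5000, 0.05

rejections = 0
for _ in range(n_sims):
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    treatment = rng.normal(loc=effect_size, scale=1.0, size=n_per_group)
    _, p_value = stats.ttest_ind(treatment, control)
    rejections += p_value < alpha

print(f"Estimated power: {rejections / n_sims:.2f}")  # about 0.80 for these settings
```

To determine sample size, you would repeat this over a grid of candidate group sizes and pick the smallest one whose estimated power clears your target.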

The Problem with Using Tests for Statistical Assumptions

July 16th, 2018

Every statistical model and hypothesis test has assumptions.

And yes, if you’re going to use a statistical test, you need to check, to whatever extent you can, whether those assumptions are reasonable.

Some assumptions are easier to check than others. Some are so obviously reasonable that you don’t need to do much to check them most of the time. And some have no good way of being checked directly, so you have to use situational clues.

(more…)


Why ANOVA is Really a Linear Regression, Despite the Difference in Notation

April 23rd, 2018

When I was in graduate school, stat professors would say “ANOVA is just a special case of linear regression.”  But they never explained why.

And I couldn’t figure it out.

The model notation is different.

The output looks different.

The vocabulary is different.

The focus of what we’re testing is completely different. How can they be the same model?
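
One way to see it, assuming a single categorical predictor, is to fit the same data both ways and compare. The sketch below is illustrative (made-up groups and means, not the article's example): the dummy-coded regression reproduces the ANOVA's F test, and its coefficients are the group mean differences.

```python
# Illustrative sketch: a one-way ANOVA and a dummy-coded regression are the
# same underlying linear model. Group names, means, and sample sizes are made up.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "group": np.repeat(["A", "B", "C"], 30),
    "y": np.concatenate([rng.normal(m, 1, 30) for m in (10, 12, 15)]),
})

model = ols("y ~ C(group)", data=df).fit()

print(sm.stats.anova_lm(model, typ=2))  # the familiar ANOVA table
print(model.params)                     # intercept = mean of group A; slopes = differences from it
```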

(more…)


Six Differences Between Repeated Measures ANOVA and Linear Mixed Models

January 22nd, 2018

As mixed models are becoming more widespread, there is a lot of confusion about when to use these more flexible but complicated models and when to use the much simpler and easier-to-understand repeated measures ANOVA.

One thing that makes the decision harder is sometimes the results are exactly the same from the two models and sometimes the results are (more…)
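
As a rough illustration of the "sometimes exactly the same" case (simulated data, not from the article): on complete, balanced pre/post data, a repeated measures ANOVA and a random-intercept mixed model give essentially the same test of the time effect.

```python
# Illustrative sketch: on complete, balanced data a repeated measures ANOVA
# and a simple random-intercept mixed model test the same time effect.
# Subject IDs, sample size, and means are made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(7)
n_subjects, times = 40, ["pre", "post"]
subject_effect = rng.normal(0, 1, n_subjects)

rows = []
for i in range(n_subjects):
    for t, mean in zip(times, (10.0, 12.0)):
        rows.append({"subject": i, "time": t,
                     "y": mean + subject_effect[i] + rng.normal(0, 1)})
long = pd.DataFrame(rows)

print(AnovaRM(long, depvar="y", subject="subject", within=["time"]).fit())
print(smf.mixedlm("y ~ time", long, groups=long["subject"]).fit().summary())
```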


Member Training: The Multi-Faceted World of Residuals

July 1st, 2017

Most analysts’ primary focus is checking the distributional assumptions on the residuals: they must be independent and identically distributed (i.i.d.) with a mean of zero and constant variance.

Residuals can also give us insight into the quality of our models.

In this webinar, we’ll review and compare what residuals are in linear regression, ANOVA, and generalized linear models. Jeff will cover:

  • Which residuals — standardized, studentized, Pearson, deviance, etc. — we use and why
  • How to determine if distributional assumptions have been met
  • How to use graphs to discover issues like non-linearity, omitted variables, and heteroskedasticity

Knowing how to piece this information together will improve your statistical modeling skills.
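
As a small taste of the kind of check covered in the training (this sketch is mine, not the webinar's), here is a plot of externally studentized residuals against fitted values for a simple regression; curvature suggests non-linearity, and a funnel shape suggests heteroskedasticity.

```python
# Illustrative residual check for a linear regression: externally studentized
# residuals vs. fitted values. Variable names and data are made up.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
df = pd.DataFrame({"x": rng.uniform(0, 10, 200)})
df["y"] = 2 + 0.5 * df["x"] + rng.normal(0, 1, 200)

fit = ols("y ~ x", data=df).fit()
studentized = fit.get_influence().resid_studentized_external

plt.scatter(fit.fittedvalues, studentized, alpha=0.5)
plt.axhline(0, color="gray")
plt.xlabel("Fitted values")
plt.ylabel("Studentized residuals")
plt.title("A flat, even band suggests linearity and constant variance")
plt.show()
```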


Note: This training is an exclusive benefit to members of the Statistically Speaking Membership Program and part of the Stat’s Amore Trainings Series. Each Stat’s Amore Training is approximately 90 minutes long.

(more…)


Linear Mixed Models for Missing Data in Pre-Post Studies

August 30th, 2016

In the past few months, I’ve gotten the same question from a few clients about using linear mixed models for repeated measures data. They want to take advantage of the model’s ability to give unbiased results in the presence of missing data. In each case, the study has two groups that complete a pre-test and a post-test measure. Both measures have a lot of missing data.

The research question is whether the groups have different improvements in the dependent variable from pre to post test.

As a typical example, say you have a study with 160 participants.

90 of them completed both the pre and the post test.

Another 48 completed only the pretest and 22 completed only the post-test.

Repeated Measures ANOVA will deal with the missing data through listwise deletion. That means keeping only the 90 people with complete data.  This causes problems with both power and bias, but bias is the bigger issue.
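
In wide format, listwise deletion is just dropping every row with a missing cell. Here is a quick sketch with the counts from this example (the scores themselves are simulated):

```python
# Sketch: listwise deletion on wide-format pre/post data with the counts from
# the example above (90 complete, 48 pre-only, 22 post-only). Scores are simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
wide = pd.DataFrame({
    "pre":  np.concatenate([rng.normal(50, 10, 138), np.full(22, np.nan)]),
    "post": np.concatenate([rng.normal(55, 10, 90), np.full(48, np.nan),
                            rng.normal(55, 10, 22)]),
})

complete_cases = wide.dropna()           # what repeated measures ANOVA actually analyzes
print(len(wide), len(complete_cases))    # 160 90
```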

Another alternative is to use a Linear Mixed Model, which will use the full data set.  This is an advantage, but it’s not as big of an advantage in this design as in other studies.

The mixed model will retain the 70 people who have data for only one time point.  It will use the 48 people with pretest-only data along with the 90 people with full data to estimate the pretest mean.

Likewise, it will use the 22 people with posttest-only data along with the 90 people with full data to estimate the post-test mean.

If the data are missing at random, this will give you unbiased estimates of each of these means.
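
Here is a rough sketch of that setup (simulated data with the same pattern of missingness, not a client’s data): reshape to long format and fit a random-intercept mixed model, which uses all 250 available observations from the 160 people, rather than only the 90 complete pairs, to estimate the pre-test and post-test means.

```python
# Sketch: a random-intercept mixed model on long-format pre/post data with
# missingness, using all 250 available observations instead of only the 90
# complete cases. Simulated, illustrative data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 160
pre = rng.normal(50, 10, n)
post = pre + rng.normal(5, 5, n)     # true pre-to-post improvement of about 5 points

pre[138:] = np.nan                   # 22 people missing the pre-test
post[90:138] = np.nan                # 48 people missing the post-test

long = pd.DataFrame({
    "subject": np.tile(np.arange(n), 2),
    "time": np.repeat(["pre", "post"], n),
    "y": np.concatenate([pre, post]),
}).dropna()

print(len(long))                     # 250 observations from 160 people
fit = smf.mixedlm("y ~ time", long, groups=long["subject"]).fit()
print(fit.summary())                 # the time coefficient is the difference between the two estimated means
```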

But most of the time in Pre-Post studies, the interest is in the change from pre to post across groups.

The difference in means from pre to post will be calculated based on the estimates at each time point.  But the degrees of freedom for the difference will be based only on the number of subjects who have data at both time points.

So with only two time points, if the people with one time point are no different from those with full data (creating no bias), you’re not gaining anything by keeping those 70 people in the analysis.

Compare this to a study with 5 time points that I also saw in consulting. Nearly all the participants had 4 out of the 5 observations. The missing data was pretty random: some participants missed time 1, others time 4, and so on. Only 6 people out of 150 had full data, so listwise deletion would have created a nightmare, leaving just those 6 in the data set.

Each person contributed data to 4 means, so each mean had a pretty reasonable sample size.  Since the missingness was random, each mean was unbiased.  Each subject fully contributed data and df to many of the mean comparisons.

With more than 2 time points and data that are missing at random, each subject can contribute to some change measurements.  Keep that in mind the next time you design a study.