There are many rules of thumb in statistical analysis that make decision making and understanding results much easier.
Have you ever stopped to wonder where these rules came from, let alone whether there is any scientific basis for them? Is there logic behind these rules, or is it just the propagation of urban legends?
In this webinar, we’ll explore and question the origins and justifications of some of the most common rules of thumb in statistical analysis, like:
(more…)
Despite modern concerns about how to handle big data, there persists an age-old question: What can we do with small samples?
Sometimes small sample sizes are planned and expected. Sometimes not. For example, the cost, ethical, and logistical realities of animal experiments often lead to samples of fewer than 10 animals.
Other times, a solid sample size is intended based on a priori power calculations. Yet recruitment difficulties or logistical problems lead to a much smaller sample. In this webinar, we will discuss methods for analyzing small samples. Special focus will be on the case of unplanned small sample sizes and the issues and strategies to consider.
Note: This training is an exclusive benefit to members of the Statistically Speaking Membership Program and part of the Stat’s Amore Trainings Series. Each Stat’s Amore Training is approximately 90 minutes long.
(more…)
Whenever we run an analysis of variance or a regression, one of the first things we do is look at the p-values of our predictor variables to determine whether they are statistically significant. When a variable is statistically significant, did you ever stop and ask yourself how significant it is? (more…)
If you’ve ever worked with multilevel models, you know that they are an extension of linear models. For a researcher learning them, this is both good and bad news.
The good news is that many of the concepts, calculations, and results are familiar. The downside of the extension is that everything is more complicated in multilevel models.
This includes power and sample size calculations. (more…)
If you learned much about calculating power or sample sizes in your statistics classes, chances are, it was on something very, very simple, like a z-test.
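To see just how simple that case is, here is a minimal sketch of the power calculation for a two-sided one-sample z-test, written in Python with SciPy. The function name and the effect size and sample size values are hypothetical, chosen only for illustration.

```python
# Sketch: approximate power of a two-sided one-sample z-test.
# Assumes a known standard deviation, as the z-test does.
from scipy.stats import norm

def z_test_power(d, n, alpha=0.05):
    """Power of a two-sided one-sample z-test.

    d     : standardized effect size (mean difference / sigma)
    n     : sample size
    alpha : two-sided significance level
    """
    z_crit = norm.ppf(1 - alpha / 2)   # critical value for rejecting H0
    shift = d * n ** 0.5               # shift of the test statistic under H1
    # Probability of landing in either rejection region under H1
    return norm.cdf(shift - z_crit) + norm.cdf(-shift - z_crit)

power = z_test_power(d=0.5, n=30)      # a medium effect, 30 subjects
```

With a standardized effect of 0.5 and 30 subjects, the power comes out a little under 0.80, which is why numbers in that neighborhood show up so often as textbook examples.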
But there are many design issues that affect power in a study that go way beyond a z-test. Like:
- repeated measures
- clustering of individuals
- blocking
- including covariates in a model
Regular sample size software can accommodate some of these issues, but not all. And there is just something wonderful about finding a tool that does exactly what you need it to.
Especially when it’s free.
Enter Optimal Design Plus Empirical Evidence software. (more…)
Most of us run sample size calculations when a granting agency or committee requires it. That’s reason 1.
That is a very good reason. But there are others, and it can be helpful to keep these in mind when you’re tempted to skip this step or are grumbling through the calculations you’re required to do.
It’s easy to base your sample size on what is customary in your field (“I’ll use 20 subjects per condition”) or to just use the number of subjects in a similar study (“They used 150, so I will too”).
Sometimes you can get away with doing that.
However, there really are some good reasons beyond funding to do some sample size estimates. And since they’re not especially time-consuming, it’s worth doing them. (more…)