Effect size statistics are all the rage these days.
Journal editors are demanding them. Committees won’t pass dissertations without them.
But the reason to compute them is not just that someone wants them: they can genuinely help you understand your data analysis.
What Is an Effect Size Statistic?
The familiar measures, like Cohen’s d and eta-squared, definitely qualify. But the concept of an effect size statistic is actually much broader. Here’s the gist of a description from a nice article on effect size statistics: an effect size statistic conveys the magnitude, and often the direction, of an effect or relationship.
If you think about it, many familiar statistics fit this description. Regression coefficients give information about the magnitude and direction of the relationship between two variables. So do correlation coefficients.
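To make that concrete, here’s a minimal Python sketch, with simulated data (all numbers are invented for illustration), that computes two statistics commonly reported as effect sizes: Cohen’s d for a mean difference, and Pearson’s r for a linear relationship.

```python
import numpy as np

# Simulated data for two groups; the values here are invented assumptions.
rng = np.random.default_rng(42)
group_a = rng.normal(loc=52, scale=10, size=80)
group_b = rng.normal(loc=48, scale=10, size=80)

# Cohen's d: a standardized mean difference, using the pooled standard deviation.
n_a, n_b = len(group_a), len(group_b)
pooled_var = ((n_a - 1) * group_a.var(ddof=1) +
              (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2)
cohens_d = (group_a.mean() - group_b.mean()) / np.sqrt(pooled_var)

# A correlation coefficient is itself an effect size: it reports both the
# magnitude and the direction of a linear relationship.
x = rng.normal(size=100)
y = 0.5 * x + rng.normal(size=100)
r = np.corrcoef(x, y)[0, 1]

print(f"Cohen's d: {cohens_d:.2f}")
print(f"Pearson r: {r:.2f}")
```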
One of the most difficult steps in calculating sample size estimates is determining the smallest scientifically meaningful effect size.
Here’s the logic:
The power of every significance test is based on four things: the alpha level, the size of the effect, the amount of variation in the data, and the sample size.
You will measure the effect size in question differently, depending on which statistical test you’re performing. It could be a mean difference, a difference in proportions, a correlation, a regression slope, an odds ratio, and so on.
When you’re planning a study, estimating the sample size needed for adequate power requires you to specify that smallest meaningful effect size up front.
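Here’s a minimal sketch of that logic using the statsmodels power module in Python, assuming a two-sample t-test design. The effect size, alpha, and power values below are placeholder assumptions, not recommendations; a different test would call for a different power class.

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the sample size per group, given the other three quantities.
# Note that the variation in the data is folded into the standardized
# effect size (Cohen's d) here.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,          # smallest meaningful Cohen's d (assumed)
    alpha=0.05,               # significance level
    power=0.80,               # desired power
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.1f}")
```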
Spoiler alert: real data are seldom normally distributed. How does the population distribution influence the estimate of the population mean and its confidence interval?
To figure this out, we randomly draw 100 observations 100 times from three distinct populations and plot the mean and corresponding 95% confidence interval of each sample.
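Here’s a minimal sketch of that simulation in Python. The three population distributions (normal, exponential, uniform) are my own illustrative choices, each set up to have a true mean of 10; instead of plotting, this version simply counts how often each sample’s 95% confidence interval captures that mean.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Three populations, each with true mean 10 (distribution choices are assumptions).
populations = {
    "normal": lambda n: rng.normal(loc=10, scale=2, size=n),
    "exponential": lambda n: rng.exponential(scale=10, size=n),
    "uniform": lambda n: rng.uniform(low=0, high=20, size=n),
}

for name, draw in populations.items():
    covered = 0
    for _ in range(100):                      # 100 repeated samples...
        sample = draw(100)                    # ...of 100 observations each
        lo, hi = stats.t.interval(
            0.95,
            df=len(sample) - 1,
            loc=sample.mean(),
            scale=stats.sem(sample),
        )
        covered += lo <= 10 <= hi
    print(f"{name}: {covered}/100 intervals cover the true mean")
```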
Lest you believe that odds ratios are merely the domain of logistic regression, I’m here to tell you that’s not true.
One of the simplest ways to calculate an odds ratio is from a cross tabulation table.
We usually analyze these tables with a categorical statistical test. There are a few options, depending on the sample size and the design, but common ones are the chi-square test of independence or homogeneity, or Fisher’s exact test.
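For example, here’s a minimal Python sketch with a made-up 2x2 table. It computes the odds ratio directly from the cell counts and then runs the usual categorical tests on the same table.

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 cross tabulation (counts are invented):
# rows = group 1 / group 2, columns = outcome yes / outcome no.
table = np.array([[30, 70],
                  [15, 85]])

# Odds ratio straight from the table: (a * d) / (b * c).
(a, b), (c, d) = table
odds_ratio = (a * d) / (b * c)

# The usual categorical tests on the same table.
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
fisher_or, p_fisher = stats.fisher_exact(table)

print(f"Odds ratio from the table: {odds_ratio:.2f}")
print(f"Chi-square p = {p_chi2:.4f}, Fisher's exact p = {p_fisher:.4f}")
```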
What does it mean for two variables to be correlated?
Is that the same or different than if they’re associated or related?
This is the kind of question that can feel silly, but shouldn’t. It’s just a reflection of the confusing terminology used in statistics. In this case, the technical statistical term looks like, but is not exactly the same as, the way we mean it in everyday English.
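One quick way to see the gap between the everyday and technical meanings: in the small Python sketch below, y is completely determined by x, so the two variables are as strongly associated as they can be, yet the Pearson correlation, which measures only linear association, comes out essentially zero.

```python
import numpy as np

# y depends perfectly on x, but the relationship is curved, not linear.
x = np.linspace(-3, 3, 101)
y = x ** 2
r = np.corrcoef(x, y)[0, 1]
print(f"Pearson r between x and x**2: {r:.3f}")  # approximately 0.000
```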
Any time you report estimates of parameters in a statistical analysis, it’s important to include their confidence intervals.
How confident are you that you can explain what they mean? Even those of us who have a solid understanding of confidence intervals get tripped up by the wording.
The Wording for Describing Confidence Intervals
Let’s look at an example.
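Here’s a minimal Python sketch, with simulated data, that computes a 95% confidence interval for a mean; the final comment shows one defensible way to word the result.

```python
import numpy as np
from scipy import stats

# Simulated sample (the numbers are invented for illustration).
rng = np.random.default_rng(7)
sample = rng.normal(loc=100, scale=15, size=50)

mean = sample.mean()
lo, hi = stats.t.interval(
    0.95,
    df=len(sample) - 1,
    loc=mean,
    scale=stats.sem(sample),
)

# Careful wording: we are 95% confident that the interval (lo, hi) captures
# the population mean. The confidence is in the procedure that generated the
# interval, not a probability statement about this one interval.
print(f"Mean = {mean:.1f}, 95% CI = ({lo:.1f}, {hi:.1f})")
```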