5 Reasons to Run Sample Size Calculations Before Collecting Data

September 9th, 2011 by Karen Grace-Martin

Most of us run sample size calculations when a granting agency or committee requires it.  That’s reason 1.

That is a very good reason.  But there are others, and it can be helpful to keep these in mind when you’re tempted to skip this step or are grumbling through the calculations you’re required to do.

It’s easy to base your sample size on what is customary in your field (“I’ll use 20 subjects per condition”) or to just use the number of subjects in a similar study (“They used 150, so I will too”).

Sometimes you can get away with doing that.

However, there really are some good reasons beyond funding to do some sample size estimates. And since they’re not especially time-consuming, it’s worth doing them. (more…)
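If you want to try one right now, here's a minimal sketch of a power calculation using Python's statsmodels. The tool is my choice, not one the post prescribes, and the effect size, alpha, and power values are hypothetical:

```python
# Hypothetical sample size calculation for an independent-samples t-test.
# The effect size, alpha, and power below are assumptions for illustration.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # assumed medium effect (Cohen's d)
    alpha=0.05,       # significance level
    power=0.80,       # desired power
)
print(f"Subjects needed per group: {n_per_group:.1f}")  # about 64
```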


How to Combine Complicated Models with Tricky Effects

July 22nd, 2011 by Karen Grace-Martin

Need to dummy code in a Cox regression model?

Interpret interactions in a logistic regression?

Add a quadratic term to a multilevel model?

[Image: quadratic interaction plot]

This is where statistical analysis starts to feel really hard. You’re combining two difficult issues into one.

You’re dealing with both a complicated modeling technique at Stage 3 (survival analysis, logistic regression, multilevel modeling) and tricky effects in the model (dummy coding, interactions, and quadratic terms).

The only way to figure it all out in a situation like that is to break it down into parts.  (more…)
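To make the combination concrete, here's a minimal sketch in Python's statsmodels (simulated data, hypothetical variable names) of a single logistic regression that contains all three tricky effects: a dummy-coded predictor, an interaction, and a quadratic term:

```python
# One logistic regression combining a dummy-coded predictor, an
# interaction, and a quadratic term. Data and names are made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "outcome": rng.integers(0, 2, 200),              # binary response
    "group": rng.choice(["control", "treat"], 200),  # categorical
    "x": rng.normal(size=200),                       # continuous
})

# C() dummy codes group; C(group)*x expands to both main effects plus
# the interaction; I(x**2) adds the quadratic term.
model = smf.logit("outcome ~ C(group) * x + I(x**2)", data=df).fit(disp=0)
print(model.params)  # interpret each piece separately, one part at a time
```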


Dummy Code Software Defaults Mess With All of Us

July 15th, 2011 by Karen Grace-Martin

In my last blog post, I wrote about a mistake I once made when I didn’t realize the defaults for dummy coding were different in two SPSS procedures (Binary Logistic and GEE).

Ironically, about the same time I wrote it, I was having a conversation with Ann Maria de Mars on Twitter.  She was trying to figure out why her logistic regression model fit results were identical in SAS Proc Logistic and SPSS Binary Logistic, but the coefficients in SAS were half those of SPSS.

It was ironic because I, of course, didn’t recognize it as the same issue and wasn’t much help.

But Ann Maria investigated and discovered that it came down to differences between SAS and SPSS in their default coding for categorical predictors.  Her detailed and humorous explanation is here.
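Her finding is easy to reproduce. Here's a sketch using Python's statsmodels as a stand-in for both packages (the data are simulated): effect coding, which is SAS Proc Logistic's default for CLASS variables, gives a slope half the size of the dummy-coded one that SPSS's default produces, with possibly a flipped sign, while the model fit is identical:

```python
# Simulated data; the point is the comparison of the two codings.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "y": rng.integers(0, 2, 300),
    "group": rng.choice(["a", "b"], 300),
})

# Treatment = dummy (0/1) coding, SPSS-style;
# Sum = effect (-1/1) coding, SAS-style.
dummy = smf.logit("y ~ C(group, Treatment)", data=df).fit(disp=0)
effect = smf.logit("y ~ C(group, Sum)", data=df).fit(disp=0)

print(dummy.params)   # slope for group
print(effect.params)  # half the size (and the sign may flip)
print(dummy.llf, effect.llf)  # same log-likelihood: identical model fit
```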

Some takeaways for you, the researcher and data analyst:

1. Give yourself a break if you hit a snag.  Even very experienced data analysts, statisticians who understand what they’re doing, get stumped sometimes.  Don’t ever think that performing data analysis is an IQ test.  You’re bringing together many skills and complex tools.

2. Learn thy software.  In my last post, I phrased it “Know thy software”, but this is where you get to know it.  Snags are good opportunities to investigate the details of your software, just like Ann Maria did.  If you can think of it as a challenge to figure out–a puzzle–it can actually be fun.

Make friends with your syntax manuals.

3. Get help when you need it. Statistical software packages *are* complex tools. You don’t have to know everything to use them.

Ask colleagues.  Call customer support. Call a stat consultant.  That’s what they’re there for.

4. A great way to check your work is to run your test two different ways.  It’s another reason to be able to use at least two stat software packages.  I’m not suggesting you have to run every analysis twice.  But when a result looks strange, or you want to double-check a specific important model, this can be a good strategy for testing things out.

It may be that your results aren’t telling you what you think they are.
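As one sketch of that strategy, assuming Python's statsmodels (any two packages or routines would do): fit the same logistic regression through two different routines and confirm the estimates agree:

```python
# Same model, two routines: a maximum-likelihood logit and a GLM with
# a binomial family. Simulated data; names are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
X = sm.add_constant(rng.normal(size=(200, 2)))  # intercept + 2 predictors
y = rng.integers(0, 2, 200)

fit1 = sm.Logit(y, X).fit(disp=0)
fit2 = sm.GLM(y, X, family=sm.families.Binomial()).fit()

# These should agree to numerical precision; if they don't, go hunting
# for a default that differs.
print(fit1.params)
print(fit2.params)
```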



When Dummy Codes are Backwards, Your Stat Software may be Messing With You

July 8th, 2011 by Karen Grace-Martin

One of the tricky parts about dummy coded (0/1) variables is keeping track of what’s a 0 and what’s a 1.

This is made particularly tricky because sometimes your software switches them on you.

Here’s one example, from a question I received recently.  The context was a Linear Mixed Model, but this can happen in other procedures as well.

I dummy code my categorical variables “0” or “1” but for some reason in the (more…)
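Although the question's specifics are cut off here, there is a general defense. A sketch, assuming Python's pandas and statsmodels rather than the asker's software: check which level your software treats as the reference, and set it explicitly instead of trusting the default. (Plain OLS stands in below to keep the sketch short; the question involved a mixed model. All names and numbers are hypothetical.)

```python
# Tiny made-up example: pin down the reference category explicitly.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "score": [3.1, 4.2, 2.8, 5.0, 3.9, 4.4],
    "condition": [0, 1, 0, 1, 0, 1],  # my intended 0/1 dummy codes
})

# State the reference level rather than trusting the default, so the
# slope always means "condition 1 minus condition 0".
m = smf.ols("score ~ C(condition, Treatment(reference=0))", data=df).fit()
print(m.params)
```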


7 Practical Guidelines for Accurate Statistical Model Building

June 24th, 2011 by Karen Grace-Martin

[Image: Stage 2]

Model building–choosing predictors–is one of those skills in statistics that is difficult to teach.  It’s hard to lay out the steps because, at each step, you have to evaluate the situation and make a decision about the next step.

If you’re running purely predictive models, and the relationships among the variables aren’t the focus, it’s much easier.  Go ahead and run a stepwise regression model.  Let the data give you the best prediction.
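As a sketch of that purely predictive case, here is forward selection (one flavor of stepwise) in scikit-learn. The tool and the simulated data are my choices, not the post's:

```python
# Forward selection on simulated data: keep the predictors that help
# prediction, and ignore interpretability.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 10))                    # 10 candidate predictors
y = X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=100)

selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=3, direction="forward"
)
selector.fit(X, y)
print(selector.get_support())  # mask of the predictors that survived
```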

But if the point is to answer a research question that describes relationships, you’re going to have to get your hands dirty.

It’s easy to say “use theory” or “test your research question” but that ignores a lot of practical issues.  Like the fact that you may have 10 different variables that all measure the same theoretical construct, and it’s not clear which one to use. (more…)


Do Top Journals Require Reporting on Missing Data Techniques?

June 3rd, 2011 by Karen Grace-Martin

Q: Do most high impact journals require authors to state which method has been used on missing data?

I don’t usually get far enough in the publishing process to read journal requirements.

But based on my conversations with researchers who both review articles for journals and who deal with reviewers’ comments, I can offer this response.

I would be shocked if journal editors at top journals didn’t want information about the missing data technique.  If you leave it out, they’ll assume you either didn’t have missing data or used a default like listwise deletion. (more…)
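To see what that default silently does, here's a tiny sketch of listwise deletion, assuming Python's pandas (the post doesn't name a tool; the data are made up):

```python
# Listwise deletion drops every row with any missing value.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "y": [1.0, 2.0, np.nan, 4.0],
    "x": [0.5, np.nan, 1.5, 2.0],
})

complete = df.dropna()  # the two incomplete rows vanish entirely
print(len(df), "->", len(complete))  # 4 -> 2 cases actually analyzed
```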