I received an e-mail from a researcher in Canada who asked about communicating logistic regression results to non-researchers. It was an important question, and there are a number of parts to it.
With the asker’s permission, I am going to address it here.
To give you the full context, she explained in a follow-up email that she is communicating to a clinical audience who will be using the results to make clinical decisions. They need to understand the size of the effect an intervention will provide. She refers to an output I presented in my webinar on Probability, Odds, and Odds Ratios, which you can view for free here.
Question:
I just went through the two lectures re: logistic regression and prob/odds/odds ratios. I completely understand (more…)
Odds ratios are one of those concepts in statistics that are just really hard to wrap your head around. Although probability and odds both measure how likely it is that something will occur, probability is just so much easier to understand for most of us.
I’m not sure if it’s just a more intuitive concept, or if it’s something we’re just taught so much earlier that it’s more ingrained. In either case, without a lot of practice, most people won’t have an immediate understanding of how likely something is if it’s communicated through odds.
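To make the relationship concrete, here is a minimal Python sketch of the conversion between the two scales (the 75% probability is just a made-up example):

```python
def odds_from_probability(p):
    """Convert a probability to odds: odds = p / (1 - p)."""
    return p / (1 - p)

def probability_from_odds(odds):
    """Convert odds back to a probability: p = odds / (1 + odds)."""
    return odds / (1 + odds)

# A hypothetical event with a 75% chance of occurring
print(odds_from_probability(0.75))  # 3.0, i.e. odds of 3 to 1
print(probability_from_odds(3.0))   # 0.75
```

Same information, two scales. But most readers will grasp 0.75 much faster than "3 to 1."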
So why not always use probability? (more…)
In my last blog post, I wrote about a mistake I once made when I didn’t realize the defaults for dummy coding were different in two SPSS procedures (Binary Logistic and GEE).
Ironically, about the same time I wrote it, I was having a conversation with Ann Maria de Mars on Twitter. She was trying to figure out why her logistic regression model fit results were identical in SAS Proc Logistic and SPSS Binary Logistic, but the coefficients in SAS were half those of SPSS.
It was ironic because I, of course, didn’t recognize it as the same issue and wasn’t much help.
But Ann Maria investigated and discovered that it came down to differences in the defaults for coding categorical predictors in SAS and SPSS. Her detailed and humorous explanation is here.
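To see what that kind of default can do, here’s a small sketch (in Python with simulated data, standing in for SAS and SPSS) of a logistic regression with one binary predictor, fit once with dummy (0/1) coding and once with effect (-1/+1) coding. The fit is identical, but the effect-coded slope is half the dummy-coded one, which is exactly the pattern Ann Maria saw:

```python
import numpy as np
import statsmodels.api as sm

# Simulated data: binary outcome y, two-level predictor (0 = control, 1 = treatment)
rng = np.random.default_rng(42)
group = rng.integers(0, 2, size=500)
true_logit = -0.5 + 1.2 * group
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# Dummy (reference) coding: the predictor is 0/1
fit_dummy = sm.Logit(y, sm.add_constant(group.astype(float))).fit(disp=0)

# Effect coding: the same predictor recoded as -1/+1
fit_effect = sm.Logit(y, sm.add_constant(np.where(group == 1, 1.0, -1.0))).fit(disp=0)

# Identical model fit...
print(fit_dummy.llf, fit_effect.llf)    # same log-likelihood
# ...but the effect-coded slope is half the dummy-coded slope,
# because -1 and +1 are two units apart instead of one.
print(fit_dummy.params[1], fit_effect.params[1])
```

Neither answer is wrong; they’re answering on different scales. That’s why knowing your software’s default coding matters.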
Some takeaways for you, the researcher and data analyst:
1. Give yourself a break if you hit a snag. Even very experienced data analysts, statisticians who understand what they’re doing, get stumped sometimes. Don’t ever think that performing data analysis is an IQ test. You’re bringing together many skills and complex tools.
2. Learn thy software. In my last post, I phrased it “Know thy software”, but this is where you get to know it. Snags are good opportunities to investigate the details of your software, just like Ann Maria did. If you can think of it as a puzzle to figure out, it can actually be fun.
Make friends with your syntax manuals.
3. Get help when you need it. Statistical software packages *are* complex tools. You don’t have to know everything to use them.
Ask colleagues. Call customer support. Call a stat consultant. That’s what they’re there for.
4. A great way to check your work is to run your test two different ways. It’s another reason to be able to use at least two stat software packages. I’m not suggesting you have to run every analysis twice. But when a result looks strange, or you want to double-check a specific important model, this can be a good strategy for testing things out (there’s a small sketch after this list).
It may be that your results aren’t telling you what you think they are.
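As a toy version of that strategy, here’s a hypothetical sketch that fits the same logistic regression two ways within Python’s statsmodels (once via Logit, once as a binomial GLM); in practice you’d more often reach for two separate packages, as in the SAS/SPSS story above:

```python
import numpy as np
import statsmodels.api as sm

# Simulated data for the cross-check
rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.3 + 0.8 * x))))
X = sm.add_constant(x)

# Way 1: logistic regression by maximum likelihood
fit1 = sm.Logit(y, X).fit(disp=0)

# Way 2: the same model expressed as a binomial GLM with a logit link
fit2 = sm.GLM(y, X, family=sm.families.Binomial()).fit()

# Agreement is reassuring; disagreement means go read the defaults.
print(fit1.params)
print(fit2.params)
```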