Many of you have heard of R (the language and environment for statistical computing and graphics). Perhaps you know that it uses command-line input rather than pull-down menus. Perhaps you feel that this makes R hard to use and somewhat intimidating!
OK, yes, R has a steeper learning curve than most other statistical packages, but don’t let that put you off! Once you master the syntax, you have control of an immensely powerful statistical tool.
Actually, much of the syntax is not all that difficult. Don’t believe me? To prove it, let’s look at some syntax for providing summary statistics on a continuous variable. (more…)
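To give you a taste before the full post, here is roughly what that kind of syntax looks like in R. The data frame mydata and the variable income below are hypothetical names, not from the post:

```r
# A minimal sketch: summary statistics on a continuous variable
# (mydata and income are hypothetical names)
summary(mydata$income)   # min, quartiles, median, mean, max

# Individual statistics are just as short
mean(mydata$income, na.rm = TRUE)                        # mean, ignoring missings
sd(mydata$income, na.rm = TRUE)                          # standard deviation
quantile(mydata$income, c(.25, .5, .75), na.rm = TRUE)   # quartiles
```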
Before you run a Cronbach’s alpha or factor analysis on scale items, it’s generally a good idea to reverse-code the negatively worded items so that a high value indicates the same type of response on every item.
So, for example, let’s say you have 20 items, each on a 1 to 7 scale. For most items, a 7 may indicate a positive attitude toward some issue, but for a few items, a 1 indicates a positive attitude.
I want to show you a very quick and easy way to reverse-code them with a single command. This works in any software. (more…)
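The one-liner itself is in the full post, but the usual idea behind tricks like this is simple arithmetic: subtract each score from the scale minimum plus maximum (here 1 + 7 = 8), which flips a 7 into a 1, a 6 into a 2, and so on. A rough sketch of that idea in R, with hypothetical item names:

```r
# Hypothetical example: items q3, q7, and q12 are negatively worded
# On a 1-7 scale, subtracting from 8 (min + max) reverses the scoring:
# 7 -> 1, 6 -> 2, ..., 1 -> 7
mydata$q3_r  <- 8 - mydata$q3
mydata$q7_r  <- 8 - mydata$q7
mydata$q12_r <- 8 - mydata$q12
```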
It seems every editor and her brother these days wants to see standardized effect size statistics reported in journal articles.
For ANOVAs, two of the most popular are Eta-squared and partial Eta-squared. In one-way ANOVAs they come out the same, but in more complicated models their values, and their meanings, differ.
SPSS only reports partial Eta-squared, and in earlier versions of the software it was (unfortunately) labeled Eta-squared. More recent versions have fixed the label, but still don’t offer Eta-squared as an option.
Luckily, Eta-squared is very simple to calculate yourself from the sums of squares in your ANOVA table. I’ve written another blog post with all the formulas. You can (more…)
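For reference, the standard definitions are Eta-squared = SS_effect / SS_total and partial Eta-squared = SS_effect / (SS_effect + SS_error). Here is a quick sketch of the hand calculation in R, using made-up sums of squares from a hypothetical two-factor ANOVA (no interaction term, to keep the arithmetic simple):

```r
# Made-up sums of squares from a hypothetical two-factor ANOVA table
ss_factor_a <- 120
ss_factor_b <- 300
ss_error    <- 580
ss_total    <- ss_factor_a + ss_factor_b + ss_error   # 1000

# Eta-squared: the effect's share of the total sums of squares
eta_sq_a <- ss_factor_a / ss_total                     # 0.12

# Partial Eta-squared: the effect relative to effect + error only
partial_eta_sq_a <- ss_factor_a / (ss_factor_a + ss_error)   # about 0.17
```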
In my last blog post, I wrote about a mistake I once made when I didn’t realize the defaults for dummy coding were different in two SPSS procedures (Binary Logistic and GEE).
Ironically, about the same time I wrote it, I was having a conversation with Ann Maria de Mars on Twitter. She was trying to figure out why her logistic regression model fit results were identical in SAS Proc Logistic and SPSS Binary Logistic, but the coefficients in SAS were half those of SPSS.
It was ironic because I, of course, didn’t recognize it as the same issue and wasn’t much help.
But Ann Maria investigated and discovered that it came down to differences between the SAS and SPSS defaults for coding categorical predictors. Her detailed and humorous explanation is here.
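The short version, as I understand it: SAS Proc Logistic effect-codes (+1/-1) CLASS variables by default, while SPSS Binary Logistic dummy-codes (0/1) them, and for a two-level predictor that difference exactly halves the coefficient, because the two groups sit two units apart instead of one. If you want to see the phenomenon for yourself, here is a small hypothetical sketch in R (not SAS or SPSS, but the same principle):

```r
# Hypothetical simulated data; the same principle applies in SAS and SPSS
set.seed(123)
group <- factor(rep(c("A", "B"), each = 100))
y <- rbinom(200, 1, prob = ifelse(group == "B", 0.7, 0.4))

# Dummy (treatment) coding, A = 0 / B = 1: R's default, like SPSS
fit_dummy <- glm(y ~ group, family = binomial)

# Effect (deviation) coding, +1 / -1: like SAS Proc Logistic's default
fit_effect <- glm(y ~ group, family = binomial,
                  contrasts = list(group = "contr.sum"))

coef(fit_dummy)    # the group coefficient is the B-vs-A log-odds difference
coef(fit_effect)   # the group coefficient is half that (and may flip sign)
```

Both fits give identical predicted probabilities and fit statistics; only the parameterization changes.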
Some takeaways for you, the researcher and data analyst:
1. Give yourself a break if you hit a snag. Even very experienced data analysts and statisticians who understand what they’re doing get stumped sometimes. Don’t ever think that performing data analysis is an IQ test. You’re bringing together many skills and complex tools.
2. Learn thy software. In my last post, I phrased it “Know thy software”, but this is where you get to know it. Snags are good opportunities to investigate the details of your software, just like Ann Maria did. If you can think of it as a puzzle to figure out, it can actually be fun.
Make friends with your syntax manuals.
3. Get help when you need it. Statistical software packages *are* complex tools. You don’t have to know everything to use them.
Ask colleagues. Call customer support. Call a stat consultant. That’s what they’re there for.
4. A great way to check your work is to run your test two different ways. It’s another reason to be able to use at least two stat software packages. I’m not suggesting you have to run every analysis twice. But when a result looks strange, or you want to double-check a specific important model, this can be a good strategy for testing things out.
It may be that your results aren’t telling you what you think they are.
One of the tricky parts about dummy coded (0/1) variables is keeping track of what’s a 0 and what’s a 1.
This is made particularly tricky because sometimes your software switches them on you.
Here’s one example in a question I received recently. The context was a Linear Mixed Model, but this can happen in other procedures as well.
I dummy code my categorical variables “0” or “1” but for some reason in the (more…)
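The rest of that question is in the full post, but here is the kind of switch it’s getting at, illustrated in R (the original was SPSS’s Linear Mixed Model; the names below are hypothetical). Whichever category the procedure treats as the reference determines the sign and meaning of the coefficient, so a variable you carefully coded 0/1 can come out looking flipped:

```r
# Hypothetical illustration of a reference category switching on you
set.seed(42)
treat <- rbinom(100, 1, 0.5)              # coded 0/1 by the analyst
y <- 2 + 1.5 * treat + rnorm(100)

# Entered as numeric 0/1: the coefficient is the effect of 1 vs. 0
coef(lm(y ~ treat))["treat"]              # roughly +1.5

# Entered as a factor with "1" as the reference category
# (which some procedures do by default): same size, opposite sign
treat_f <- relevel(factor(treat), ref = "1")
coef(lm(y ~ treat_f))["treat_f0"]         # roughly -1.5
```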
Here’s a little SPSS tip.
When you create new variables, whether it’s through the Recode, Compute, or some other command, you need to check that it worked the way you think it did.
(As an aside, I hope this goes without saying, but never, never, never, never use Recode into Same Variable. Always Recode into Different Variable so you don’t overwrite your data and then discover you made a mistake. Or worse, not discover. It happens).
And the easiest way to do that is to simply look at the data. (more…)
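The full post walks through this in SPSS, but for what it’s worth, here is the same kind of check sketched in R, with hypothetical variable names. Cross-tabulating the original variable against the new one makes it obvious whether every value landed where you intended:

```r
# Hypothetical sketch: verify a recode by looking at old vs. new values
# Suppose satisfaction (1-5) was collapsed into three groups
mydata$sat_group <- cut(mydata$satisfaction, breaks = c(0, 2, 3, 5),
                        labels = c("low", "medium", "high"))

# Any mis-mapped or unexpectedly missing values show up immediately
table(old = mydata$satisfaction, new = mydata$sat_group, useNA = "ifany")
```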