
In Part 1 we installed R and used it to create a variable and summarize it using a few simple commands. Today let’s re-create that variable and also create a second variable, and see what we can do with them.
As before, we take height to be a variable that describes the heights (in cm) of ten people. Type the following code at the R command line to create this variable.
height = c(176, 154, 138, 196, 132, 176, 181, 169, 150, 175)
Now let’s take bodymass to be a variable that describes the weights (in kg) of the same ten people. Copy and paste the following code into the R command line to create the bodymass variable.
bodymass = c(82, 49, 53, 112, 47, 69, 77, 71, 62, 78)
Both variables are now stored in the R workspace. To view them, enter:
height
bodymass
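R echoes each vector back; the [1] simply marks the position of the first value on the line:
[1] 176 154 138 196 132 176 181 169 150 175
[1]  82  49  53 112  47  69  77  71  62  78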
We can now create a simple plot of the two variables as follows:
plot(bodymass, height)
However, this is a rather simple plot, and we can embellish it a little. Type the following code at the R command line:
plot(bodymass, height, pch = 16, cex = 1.3, col = "red",
     main = "MY FIRST PLOT USING R",
     xlab = "Body Mass (kg)", ylab = "HEIGHT (cm)")
[Note: R is very picky about the quotation marks you use. If the font displaying this post shows the opening and closing quotation marks as curly marks facing in different directions, the code won’t run in R. Both must be plain straight quotes, so you may have to retype them within R rather than cutting and pasting.]
In the above code, pch = 16 creates solid dots, while cex = 1.3 makes the dots 1.3 times larger than the default size (cex = 1). More about these arguments later.
Now let’s perform a linear regression on the two variables by entering the following at the command line:
lm(height ~ bodymass)
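R responds with the call and the fitted coefficients:
Call:
lm(formula = height ~ bodymass)

Coefficients:
(Intercept)     bodymass
    98.0054       0.9528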
We see that the intercept is 98.0054 and the slope is 0.9528. By the way, lm stands for "linear model".
Finally, we can add a best-fit line to our plot by entering the following at the command line:
abline(98.0054, 0.9528)
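Rather than typing the coefficients in by hand, you can also store the model and hand it straight to abline(), which avoids rounding:
model <- lm(height ~ bodymass)
abline(model)   # abline() reads the intercept and slope directly from the model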
None of this was so difficult!
In Part 3 we will look again at regression and create more sophisticated plots.
About the Author: David Lillis has taught R to many researchers and statisticians. His company, Sigma Statistics and Research Limited, provides online instruction, face-to-face workshops on R, and coding services in R. David holds a doctorate in applied statistics.
See our full R Tutorial Series and other blog posts regarding R programming.
Two methods for dealing with missing data, vast improvements over traditional approaches, have become available in mainstream statistical software in the last few years.
Both of the methods discussed here require that the data be missing at random, meaning the missingness is not related to the values that are missing. If this assumption holds, the resulting estimates (i.e., regression coefficients and standard errors) will be unbiased, with no loss of power.
The first method is Multiple Imputation (MI). Just like the old-fashioned imputation (more…)
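(The excerpt ends here, but for a flavor of MI in practice: in R, one widely used implementation is the mice package. Below is a minimal sketch; mydata, x, and y are placeholder names, not anything from the post.)
# multiple imputation with the mice package (sketch; placeholder names)
library(mice)
imp <- mice(mydata, m = 5)      # create 5 imputed datasets
fit <- with(imp, lm(y ~ x))     # fit the same model in each dataset
summary(pool(fit))              # pool the estimates via Rubin's rules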

Many of you have heard of R (the R statistics language and environment for scientific and statistical computing and graphics). Perhaps you know that it uses command line input rather than pull-down menus. Perhaps you feel that this makes R hard to use and somewhat intimidating!
OK. Indeed, R has a steeper learning curve than many other systems, but don’t let that put you off! Once you master the syntax, you have control of an immensely powerful statistical tool.
Actually, much of the syntax is not all that difficult. Don’t believe me? To prove it, let’s look at some syntax for providing summary statistics on a continuous variable. (more…)
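(The excerpt ends here, but as a taste, summarizing a continuous variable really is a one-liner. Using the height variable from the tutorial above:)
summary(height)   # minimum, quartiles, median, mean, maximum
sd(height)        # standard deviation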
Before you run a Cronbach’s alpha or factor analysis on scale items, it’s generally a good idea to reverse code items that are negatively worded so that a high value indicates the same type of response on every item.
So for example let’s say you have 20 items each on a 1 to 7 scale. For most items, a 7 may indicate a positive attitude toward some issue, but for a few items, a 1 indicates a positive attitude.
I want to show you a very quick and easy way to reverse code them using a single command line. This works in any software. (more…)
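(The post’s one-liner isn’t shown in this excerpt, but the standard trick is to subtract each score from the sum of the scale’s maximum and minimum. On a 1-to-7 scale that means subtracting from 8; in R, with item as a placeholder name:)
item_reversed <- 8 - item   # 1 becomes 7, 2 becomes 6, ..., 7 becomes 1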
It seems every editor and her brother these days wants to see standardized effect size statistics reported in journal articles.
For ANOVAs, two of the most popular are Eta-squared and partial Eta-squared. In one-way ANOVAs they come out the same, but in more complicated models their values and their meanings differ.
SPSS only reports partial Eta-squared, and in earlier versions of the software it was (unfortunately) labeled Eta-squared. More recent versions have fixed the label, but still don’t offer Eta-squared as an option.
Luckily Eta-squared is very simple to calculate yourself based on the sums of squares in your ANOVA table. I’ve written another blog post with all the formulas. You can (more…)
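(Those formulas aren’t reproduced in this excerpt, but the standard definitions are easy to apply to the sums of squares in any ANOVA table. In R, with ss_effect, ss_error, and ss_total as placeholders for values read off your own table:)
eta_sq         <- ss_effect / ss_total                 # Eta-squared
partial_eta_sq <- ss_effect / (ss_effect + ss_error)   # partial Eta-squared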
In my last blog post, I wrote about a mistake I once made when I didn’t realize the defaults for dummy coding were different in two SPSS procedures (Binary Logistic and GEE).
Ironically, about the same time I wrote it, I was having a conversation with Ann Maria de Mars on Twitter. She was trying to figure out why her logistic regression model fit results were identical in SAS Proc Logistic and SPSS Binary Logistic, but the coefficients in SAS were half those of SPSS.
It was ironic because I, of course, didn’t recognize it as the same issue and wasn’t much help.
But Ann Maria investigated and discovered that it came down to differences between SAS and SPSS in the default coding of categorical predictors. Her detailed and humorous explanation is here.
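If you want to see the factor-of-two pattern for yourself: effect coding scores a two-level predictor as -1/+1 where dummy coding uses 0/1, so the effect-coded slope comes out at half the magnitude (and possibly the opposite sign). Here is a small simulated sketch in R, standing in for SAS and SPSS; the data are made up.
# simulated binary outcome and two-level predictor (made-up data)
set.seed(1)
x <- factor(rep(c("a", "b"), each = 50))
y <- rbinom(100, 1, ifelse(x == "b", 0.7, 0.4))

coef(glm(y ~ x, family = binomial))            # dummy (0/1) coding, R's default
coef(glm(y ~ x, family = binomial,
         contrasts = list(x = contr.sum)))     # effect (-1/+1) coding: half the slope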
Some takeaways for you, the researcher and data analyst:
1. Give yourself a break if you hit a snag. Even very experienced data analysts, statisticians who understand what they’re doing, get stumped sometimes. Don’t ever think that performing data analysis is an IQ test. You’re bringing together many skills and complex tools.
2. Learn thy software. In my last post, I phrased it “Know thy software”, but this is where you get to know it. Snags are good opportunities to investigate the details of your software, just like Ann Maria did. If you can treat it as a challenge to figure out, a puzzle, it can actually be fun.
Make friends with your syntax manuals.
3. Get help when you need it. Statistical software packages *are* complex tools. You don’t have to know everything to use them.
Ask colleagues. Call customer support. Call a stat consultant. That’s what they’re there for.
4. A great way to check your work is to run your test two different ways. It’s another reason to be able to use at least two stat software packages. I’m not suggesting you have to run every analysis twice. But when a result looks strange, or you want to double-check a specific important model, this can be a good strategy for testing things out.
It may be that your results aren’t telling you what you think they are.