There are not a lot of statistical methods designed just for ordinal variables.
But that doesn’t mean that you’re stuck with few options. There are more than you’d think.
Some are better than others, but it depends on your specific variables, your research questions, and how you’re using these variables.
We can't cover them all here, but I want to start you off with two simple options that sometimes work. How well each one works depends on the exact variable you're using, the research question, the design, and the assumptions it's reasonable to make. (That last one is a big one.)
Treating ordinal variables as nominal
One option that makes no assumptions is to ignore the ordering of the categories and treat the variable as nominal.
Any analysis that works on nominal variables works on ordinal ones as well: chi-square tests, phi coefficients, multinomial logistic regressions, loglinear models, etc.
This works whether you're using the ordinal variable as an independent or a dependent variable.
While this is never wrong, in the sense that it makes no unreasonable assumptions, you do lose the information in the ordering.
The downside: if the ordering is part of your research question, ignoring it could mean failing to answer that question.
The upside: the effect of the ordering may not be all that big or all that important, and you can be sure that you’re not overstating any effects. If anything, this approach is conservative.
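To make this concrete, here's a minimal sketch (in Python, with made-up counts) of treating an ordinal rating as nominal in a chi-square test of independence. The labels and numbers are purely illustrative:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows are an ordinal satisfaction rating
# (Low, Medium, High), columns are a binary outcome (No, Yes).
table = np.array([
    [30, 10],   # Low
    [25, 20],   # Medium
    [15, 35],   # High
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```

Notice that shuffling the rows wouldn't change the result at all, which is exactly what treating the variable as nominal means: the ordering contributes nothing.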
Treating ordinal variables as numeric
That downside is a big one. Because they're worried about losing the information in the ordering, many data analysts go to the other extreme: they ignore the fact that the ordinal variable really isn't numeric and treat the numerals that designate each category as actual numbers.
They’re essentially presuming there is more information contained in their ordinal variable than is really there.
The upsides:
1. This gives you a lot of flexibility in your choice of analysis and preserves the information in the ordering.
2. More importantly to many analysts, it allows you to analyze the data using techniques that your audience is familiar with and easily understands. The argument is that even if results are approximations, they’re understandable approximations.
The downside: This approach requires the assumption that the numerical distance between each pair of adjacent categories is equal.
If that assumption is very close to reality (the distances really are approximately equal), then analyses based on these numbers will give results that are very close to reality. Sometimes the assumption is very close; sometimes it's far off. It's unwise to assume it's reasonable without some consideration.
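For illustration, here's a minimal sketch of what this looks like in practice: assign integer codes to the categories and run an ordinary Pearson correlation. The data are made up, and the result is only meaningful to the extent that the equal-spacing assumption holds:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical ordinal ratings coded 1-5 (e.g., Poor ... Excellent) and a
# numeric outcome. Coding the categories as 1, 2, 3, 4, 5 asserts that
# adjacent categories are equally far apart.
rating = np.array([1, 2, 2, 3, 3, 3, 4, 4, 5, 5])
outcome = np.array([2.1, 2.5, 3.0, 3.2, 3.8, 3.5, 4.1, 4.6, 4.8, 5.2])

r, p = pearsonr(rating, outcome)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```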
Other Options
The good news is these aren't the only options. There are analyses that take the ordering into account without assuming the categories are truly numeric. These include nonparametric statistics, ordinal logistic and probit models, and rank transformations.
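As one example, a Spearman rank correlation uses only the ordering of the categories, never their numeric spacing. A minimal sketch with the same made-up data as in the previous sketch:

```python
import numpy as np
from scipy.stats import spearmanr

# Spearman works on ranks, so it uses the ordering of the categories
# without assuming equal spacing between them.
rating = np.array([1, 2, 2, 3, 3, 3, 4, 4, 5, 5])
outcome = np.array([2.1, 2.5, 3.0, 3.2, 3.8, 3.5, 4.1, 4.6, 4.8, 5.2])

rho, p = spearmanr(rating, outcome)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```

Ordinal logistic and probit models extend the same idea to regression settings where the ordinal variable is the outcome.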
mitra says
Hi! Thanks for your great insights. I still have one question in mind.
I'm working on the Boston house pricing competition on Kaggle. The dataset has some ordinal features, like the BsmtCond field that gives the quality of the basement from poor to excellent. My question is: should I log (or Box-Cox) transform these features after using OrdinalEncoder on them, if they are skewed?
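For readers who want to see what that encoding step can look like, here is a minimal sketch using scikit-learn's OrdinalEncoder. The quality labels (Po < Fa < TA < Gd < Ex) are assumed from the competition's data dictionary, and the data are made up:

```python
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

# Hypothetical slice of the feature; labels assumed from the data dictionary.
df = pd.DataFrame({"BsmtCond": ["TA", "Gd", "Po", "Ex", "Fa", "TA"]})

# An explicit category order keeps the encoding from defaulting to alphabetical.
encoder = OrdinalEncoder(categories=[["Po", "Fa", "TA", "Gd", "Ex"]])
df["BsmtCond_code"] = encoder.fit_transform(df[["BsmtCond"]]).ravel()
print(df)
```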
Bharat Ram Ammu says
Hi Saad, you probably don't need this answer now, but I'm going to try to answer just to learn and discuss. I think it helps to treat the total score as continuous because it preserves the information on ordering in an analysis. The only consideration is: does it make sense to assume that total scores of 20 and 21 differ by the same amount as total scores of 21 and 22, and so on? In my opinion, it's okay to assume this and it makes sense. Hope it helps.
Bharat,
Ms in Stats
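For readers following this exchange, here is a minimal sketch of the kind of composite score being discussed. The responses are made up, and the factor names beyond the two mentioned in the question are placeholders:

```python
import pandas as pd

# Hypothetical responses: five environmental factors, each rated 1-7.
responses = pd.DataFrame({
    "regulation":  [7, 3, 5],
    "competition": [6, 2, 4],
    "factor_3":    [5, 1, 4],
    "factor_4":    [6, 2, 5],
    "factor_5":    [3, 1, 4],
})

# The composite score ranges from 5 (all 1s) to 35 (all 7s). Treating it as
# numeric assumes a one-point difference means the same thing anywhere on
# that range (e.g., 20 vs. 21 is the same size gap as 21 vs. 22).
responses["env_impact"] = responses.sum(axis=1)
print(responses)
```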
Saad says
Hi Karen, really appreciate these amazing resources. They are so helpful. I have one question on this. I am measuring whether 5 environmental factors (such as regulation and competition) drive a company to formalise its processes. These are measured on a Likert scale of 1 to 7 (1 equals low impact and 7 equals high impact). As I have 5 such environmental factors, I intend to add the scores from each of the five factors together to produce a single Environmental Impact Score. Therefore the total possible score would be 35 (5 factors x a max Likert score of 7). A score of, say, 27 would indicate high environmental impact, and 5 would indicate low or no impact.
Can I treat this scale as numerical or discrete continuous? My assumptions are: a) the max score is not in fact 7 on the Likert scale, it's 35, thus offering greater continuity; b) the ordering of the scale doesn't make much difference (or does it?), since the numerical value can tell me whether there is a strong impact from the environment or not.
Sorry for the lengthy question! Thanks so much
Saad