
Confusing Statistical Term #3: Level

January 21st, 2025

Level is a statistical term that is confusing because it has multiple meanings in different contexts (much like alpha and beta).

There are three different uses of the term Level in statistics that mean completely different things. What makes this especially confusing is that all three of them can be used in the exact same analysis context.

I’ll show you an example of that at the end.

So when you’re talking to someone who is learning statistics or who happens to be thinking of that term in a different context, this gets especially confusing.

Levels of Measurement

The most widespread of these is levels of measurement. Stanley Stevens came up with this taxonomy for assigning numerals to variables in the 1940s. You probably learned about them in your Intro Stats course: the nominal, ordinal, interval, and ratio levels.

Levels of measurement is really a measurement concept, not a statistical one. It refers to how much information a variable contains and what type. Does the variable indicate an unordered category, a quantity with a true zero point, and so on?

So if you hear the following phrases, you’ll know that we’re using the term level to mean measurement level:

  • nominal level
  • ordinal level
  • interval level
  • ratio level

It is important in statistics because it has a big impact on which statistics are appropriate for any given variable. For example, you would not do the same test of association between two variables measured at a nominal level as you would between two variables measured at an interval level.
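As a sketch of that point in Python (with made-up data): a chi-square test of independence suits an association between two nominal variables, while a Pearson correlation suits two interval variables.

```python
# Sketch: which test of association is appropriate depends on the
# level of measurement. All data below are invented for illustration.
import numpy as np
from scipy.stats import chi2_contingency, pearsonr

# Two nominal variables: test association with a chi-square test
# on the contingency table of counts.
table = np.array([[20, 15],   # e.g., counts of (region A/B) x (owns/rents)
                  [10, 25]])
chi2, p_nominal, dof, expected = chi2_contingency(table)

# Two interval variables: a Pearson correlation is appropriate instead.
height = np.array([160.0, 165.0, 170.0, 175.0, 180.0])
weight = np.array([55.0, 60.0, 66.0, 70.0, 78.0])
r, p_interval = pearsonr(height, weight)

print(f"chi-square p = {p_nominal:.3f}, Pearson r = {r:.3f}")
```

The point is not the specific numbers but the choice of procedure: swapping the two tests would make no sense, because each assumes a particular level of measurement.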

That said, levels of measurement aren’t the only information you need about a variable’s measurement. There is, of course, a lot more nuance.

Levels of a Factor

Another common usage of the term level is within experimental design and analysis. And this is for the levels of a factor. Although Factor itself has multiple meanings in statistics, here we are talking about a categorical independent variable.

In experimental design, the predictor variables (also often called Independent Variables) are generally categorical and nominal. They represent different experimental conditions, like treatment and control conditions.

Each of these categorical conditions is called a level.

Here are a few examples:

  • In an agricultural study, a fertilizer treatment variable has three levels: organic fertilizer (composted manure), a high concentration of chemical fertilizer, and a low concentration of chemical fertilizer. So you’ll hear things like: “we compared the high concentration level to the control level.”
  • In a medical study, a drug treatment has three levels: Placebo; standard drug for this disease; new drug for this disease.
  • In a linguistics study, a word frequency variable has two levels: high frequency words; low frequency words.
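As an illustration, here is how a factor and its levels might be represented in pandas. The level names are invented, loosely following the fertilizer example.

```python
# Sketch: a categorical independent variable (a "factor") and its levels,
# represented as a pandas Categorical. Level names are made up.
import pandas as pd

treatment = pd.Categorical(
    ["organic", "chem_high", "chem_low", "organic", "chem_high"],
    categories=["organic", "chem_high", "chem_low"],
)
print(list(treatment.categories))  # the three levels of the factor
# ['organic', 'chem_high', 'chem_low']
```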

Now, you may have noticed that some of these examples actually indicate a high or low level of something. I’m pretty sure that’s where this word usage came from. But you’ll see it used for all sorts of variables, even when they’re not high or low.

Although this use of level is very widespread, I try to avoid it personally. Instead I use the word “value” or “category,” both of which are accurate but don’t carry other meanings. That said, “level” is pretty entrenched in this context.

Level in Multilevel Models or Multilevel Data

A completely different use of the term is in the context of multilevel models. Multilevel models is a term for some mixed models. (The terms multilevel model and mixed model are often used interchangeably, though mixed model is a bit more flexible.)

Multilevel models are used for multilevel (also called hierarchical or nested) data, which is where they get their name. The idea is that the units we’ve sampled from the population aren’t independent of each other. They’re clustered in such a way that their responses will be more similar to each other within a cluster.

The models themselves have two or more sources of random variation. A two-level model has two sources of random variation and can have predictors at each level.

A common example is a model from a design where the response variable of interest is measured on students. It’s hard, though, to sample students directly or to randomly assign them to treatments, since there is a natural clustering of students within schools.

So the resource-efficient way to do this research is to sample students within schools.

Predictors can be measured at the student level (e.g., gender, SES, age) or the school level (e.g., enrollment, % who go on to college). The dependent variable has variation from student to student (level 1) and from school to school (level 2).
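As a sketch, a two-level model like this can be fit with statsmodels’ mixed-model routines. The data below are simulated, and the variable names (ses, enrollment, school) are just illustrative.

```python
# Sketch: a two-level (students within schools) model with a student-level
# predictor (ses) and a school-level predictor (enrollment). Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_schools, n_students = 20, 10

school = np.repeat(np.arange(n_schools), n_students)                 # cluster id
enrollment = np.repeat(rng.normal(500, 100, n_schools), n_students)  # level-2 predictor
ses = rng.normal(0, 1, n_schools * n_students)                       # level-1 predictor
school_effect = np.repeat(rng.normal(0, 2, n_schools), n_students)   # level-2 random variation
score = (50 + 3 * ses + 0.01 * enrollment
         + school_effect + rng.normal(0, 1, school.size))            # level-1 residual

df = pd.DataFrame({"score": score, "ses": ses,
                   "enrollment": enrollment, "school": school})

# Random intercept for school captures the school-to-school variation.
model = smf.mixedlm("score ~ ses + enrollment", df, groups=df["school"])
result = model.fit()
print(result.params)
```

The random intercept for school is what makes this a two-level model: it separates the school-to-school variation from the student-to-student residual.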

We always count these levels from the bottom up. So if we have students clustered within classrooms, classrooms clustered within schools, and schools clustered within districts, we have:

  • Level 1: Students
  • Level 2: Classrooms
  • Level 3: Schools
  • Level 4: Districts

So this use of the term level describes the design of the study, not the measurement of the variables or the categories of the factors.

Putting them together

So this is the truly unfortunate part. There are situations where all three definitions of level are relevant within the same statistical analysis context.

I find this unfortunate because I think using the same word to mean completely different things just confuses people. But here it is:

Picture that study in which students are clustered within school (a two-level design). Each school is assigned to use one of three math curricula (the independent variable, which happens to be categorical).

So, the variable “math curriculum” is a factor with three levels (i.e., three categories).

Because those three categories of “math curriculum” are unordered, “math curriculum” has a nominal level of measurement.

And since “math curriculum” is assigned to each school, it is considered a level 2 variable in the two-level model.

See the rest of the Confusing Statistical Terms series.

 

First published December 12, 2008

Last Updated January 21, 2025


The Steps for Running any Statistical Model

September 10th, 2024

No matter what statistical model you’re running, you need to go through the same steps.  The order and the specifics of how you do each step will differ depending on the data and the type of model you use.

These steps fall into 4 phases.  Most people think of only the third as modeling.  But the phases before it are fundamental to making the modeling go well. Everything will be much, much easier, more accurate, and more efficient if you don’t skip them.

And there is no point in running the model if you skip phase 4.

If you think of them all as part of the analysis, the modeling process will be faster, easier, and make more sense.

Phase 1: Define and Design

In the first 5 steps of running the model, the object is clarity. You want to make everything as clear as possible to yourself. The more clear things are at this point, the smoother everything will be. (more…)


Six Common Types of Statistical Contrasts

September 18th, 2023

When you learned analysis of variance (ANOVA), it’s likely that the emphasis was on the ANOVA table, with its Sums of Squares and F tests, followed by a post-hoc test. But ANOVA is quite flexible in how it can compare means. A large part of that flexibility comes from its ability to perform many types of statistical contrast.

That F test  can tell you if there is evidence your categories are different from each other, which is a start. It is, however, only a start. Once you know at least some categories’ means are different, your next question is “How are they different?” This is what a statistical contrast can tell you.

What is a Statistical Contrast?

A statistical contrast is a comparison of a combination of the means of two or more categories. In practice, they are usually performed as a follow up to the ANOVA F test. Most statistical programs include contrasts as an optional part of ANOVA analysis. (more…)
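As a minimal sketch of the idea (with made-up means): a contrast is a weighted combination of category means whose weights sum to zero. Here the control mean is compared against the average of two treatment means.

```python
# Sketch: a hand-computed contrast comparing the control mean with the
# average of two treatment means. Means are invented; weights sum to zero.
means = {"control": 10.0, "treat_A": 12.0, "treat_B": 14.0}
weights = {"control": 1.0, "treat_A": -0.5, "treat_B": -0.5}

assert abs(sum(weights.values())) < 1e-12  # contrast weights must sum to zero

contrast = sum(weights[g] * means[g] for g in means)
print(contrast)  # 10 - (12 + 14)/2 = -3.0
```

In a real analysis, software would also attach a standard error and test to this estimate; the arithmetic above is just the contrast itself.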


Member Training: The Link Between ANOVA and Regression

January 31st, 2023

If you’ve used much analysis of variance (ANOVA), you’ve probably heard that ANOVA is a special case of linear regression. Unless you’ve seen why, though, that may not make a lot of sense. After all, ANOVA compares means between categories, while regression predicts outcomes with numeric variables. (more…)


Can Likert Scale Data ever be Continuous?

January 19th, 2023

A very common question is whether it is legitimate to use Likert scale data in parametric statistical procedures that require interval data, such as Linear Regression, ANOVA, and Factor Analysis.

A typical Likert scale item has 5 to 11 points that indicate the degree of something. For example, it could measure agreement with a statement, such as 1=Strongly Disagree to 5=Strongly Agree. It can be a 1 to 5 scale, 0 to 10, etc. (more…)
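A small illustration in pandas (with invented responses): the same item can be stored as an ordered categorical, which keeps only the ordering, or treated as numeric, which additionally assumes the five points are equally spaced.

```python
# Sketch: one Likert item, viewed two ways. Responses are made up.
import pandas as pd

responses = pd.Series([1, 4, 5, 2, 4, 3])
labels = ["Strongly Disagree", "Disagree", "Neutral", "Agree", "Strongly Agree"]

# Ordinal view: an ordered categorical preserves order but not spacing.
ordinal = pd.Categorical.from_codes(responses - 1, categories=labels,
                                    ordered=True)

print(ordinal[1])        # Agree
# Numeric view: a mean is only meaningful if the spacing between
# points is assumed equal (the interval-level assumption).
print(responses.mean())
```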


What is a Dunnett’s Test?

January 10th, 2023

I’m a big fan of Analysis of Variance (ANOVA). I use it all the time. I learn a lot from it. But sometimes it doesn’t test the hypothesis I need. In this article, we’ll explore a test that is used when you care about a specific comparison among means: Dunnett’s test. (more…)