When learning about linear models (that is, regression, ANOVA, and similar techniques), we are taught to calculate an R². The R² has the following useful properties:
- The range is limited to [0, 1], so we can easily judge how large a given value is relative to that scale.
- It is standardized, meaning its value does not depend on the scale of the variables involved in the analysis.
- The interpretation is pretty clear: It is the proportion of variability in the outcome that can be explained by the independent variables in the model.
The calculation of the R² is also intuitive, once you understand the concepts of variance and prediction. (more…)
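As a quick reminder of where that proportion comes from, the standard definition compares the residual variability to the total variability of the outcome (this is the usual textbook formula, in generic notation, not anything specific to this post):

```latex
R^2 = 1 - \frac{SS_{\mathrm{res}}}{SS_{\mathrm{tot}}}
    = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}
```

The second form makes the "proportion of variability explained" interpretation explicit: the numerator is what the model leaves unexplained, and the denominator is the total variability of the outcome around its mean.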
What are the best methods for checking a generalized linear mixed model (GLMM) for proper fit?
This question comes up frequently.
Unfortunately, it isn’t as straightforward as it is for a general linear model.
In linear models the requirements are easy to outline: linear in the parameters, normally distributed and independent residuals, and homogeneity of variance (that is, similar variance at all values of all predictors).
(more…)
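For the general linear model case, those assumptions can be checked directly from the residuals. Here is a minimal sketch in Python, assuming statsmodels and scipy are available; the data are simulated purely for illustration:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Simulated data standing in for a real outcome and predictor
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2 + 0.5 * x + rng.normal(scale=1.0, size=200)

# Fit an ordinary least squares model
model = sm.OLS(y, sm.add_constant(x)).fit()
resid = model.resid

# Normality of residuals: Shapiro-Wilk test (a QQ plot is also worth looking at)
print(stats.shapiro(resid))

# Homogeneity of variance: the spread of the residuals should not
# change systematically with the fitted values
print(np.corrcoef(model.fittedvalues, np.abs(resid))[0, 1])
```

For GLMMs, as the post goes on to explain, the picture is less tidy because the residuals no longer have to be normal or homoscedastic on the response scale.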
In fixed-effects models (e.g., regression, ANOVA, generalized linear models), there is only one source of random variability: the random sample of units on which we measure our variables.
That sample may be patients in a health facility, for whom we take various measures of their medical history to estimate their probability of recovery. Or it may be individual students in a school system, whose demographic information we use to predict their grade point averages.
(more…)
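To connect this to the patient example: a plain logistic regression treats the sampled patients as the only source of random variation. A rough sketch in Python, with entirely made-up variable names and simulated data:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical patient-level data: one row per sampled patient
rng = np.random.default_rng(1)
n = 300
age = rng.normal(60, 10, size=n)
prior_admissions = rng.poisson(1.5, size=n)

# Simulate recovery as a binary outcome
logit = 2.0 - 0.03 * age - 0.4 * prior_admissions
recovered = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# A generalized linear model: the only random variability
# is patient-to-patient sampling variation
X = sm.add_constant(np.column_stack([age, prior_admissions]))
fit = sm.GLM(recovered, X, family=sm.families.Binomial()).fit()
print(fit.summary())
```

There is a single error structure here; nothing in the model recognizes any grouping or repetition among the patients.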
If you are new to using generalized linear mixed effects models, or if you have heard of them but never used them, you might be wondering about the purpose of a GLMM.
Mixed effects models are useful when we have data with more than one source of random variability. For example, an outcome may be measured more than once on the same person (repeated measures taken over time).
When we do that, we have to account for both within-person and across-person variability. A single measure of residual variance can’t account for both.
(more…)
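A minimal sketch of what that looks like in code, using a random intercept per person to separate the two variance sources (assuming statsmodels; the data and names are made up for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated repeated measures: 50 people, 4 measurements each
rng = np.random.default_rng(2)
people = np.repeat(np.arange(50), 4)
time = np.tile(np.arange(4), 50)
person_effect = rng.normal(scale=1.0, size=50)[people]   # across-person variability
noise = rng.normal(scale=0.5, size=people.size)          # within-person variability
df = pd.DataFrame({"person": people, "time": time,
                   "score": 10 + 0.3 * time + person_effect + noise})

# Random intercept for person: the model estimates one variance
# across people and a separate residual variance within people
result = smf.mixedlm("score ~ time", df, groups=df["person"]).fit()
print(result.summary())
```

The summary reports a group (across-person) variance alongside the residual (within-person) variance, which is exactly the separation a single-error-term model cannot make.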
Generalized linear mixed models (GLMMs) are incredibly useful tools for working with complex, multi-layered data. But they can be tough to master.
In this follow-up to October’s webinar (“A Gentle Introduction to Generalized Linear Mixed Models – Part 1”), we’ll cover important topics like:
– Distinction between crossed and nested grouping factors (see the sketch after this list)
– Software choices for implementation of GLMMs (more…)
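To make the crossed-versus-nested distinction concrete, here is a small, hypothetical check in Python (pandas only; the data and factor names are invented): classroom is nested in school because each classroom belongs to exactly one school, while rater is crossed with school because the same raters show up in several schools.

```python
import pandas as pd

# Hypothetical grouping factors: classrooms within schools (nested),
# and raters scoring students across schools (crossed)
df = pd.DataFrame({
    "school":    ["A", "A", "B", "B", "B", "C"],
    "classroom": ["a1", "a2", "b1", "b2", "b3", "c1"],
    "rater":     ["r1", "r2", "r1", "r2", "r1", "r2"],
})

# Nested: every classroom appears under exactly one school
nested = (df.groupby("classroom")["school"].nunique() == 1).all()

# Crossed: at least some raters appear in more than one school
crossed = (df.groupby("rater")["school"].nunique() > 1).any()

print(f"classroom nested in school: {nested}")
print(f"rater crossed with school:  {crossed}")
```

Which situation you have determines how the random effects should be specified, whatever software you end up using.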
Generalized linear mixed models (GLMMs) are incredibly useful—but they’re also a hard nut to crack.
As an extension of generalized linear models, GLMMs include both fixed and random effects. They are particularly useful when an outcome variable and a set of predictor variables are measured repeatedly over time and the outcome variable is binary, nominal, ordinal, or count. These models accommodate nesting of subjects in higher-level units such as schools or hospitals, and can also incorporate predictor variables collected at these higher levels.
In this webinar, we’ll provide a gentle introduction to GLMMs, discussing issues like: (more…)
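As a preview of the kind of model the webinar covers, here is one way a random-intercept logistic model can be fit in Python. This is a hedged sketch using statsmodels' Bayesian mixed GLM routine with entirely simulated data; it is not necessarily the software or workflow used in the webinar:

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Made-up data: a binary outcome measured repeatedly on each subject
rng = np.random.default_rng(3)
n_subjects, n_obs = 40, 5
subject = np.repeat(np.arange(n_subjects), n_obs)
x = rng.normal(size=subject.size)
subj_effect = rng.normal(scale=1.0, size=n_subjects)[subject]
p = 1 / (1 + np.exp(-(-0.5 + 0.8 * x + subj_effect)))
df = pd.DataFrame({"y": rng.binomial(1, p), "x": x, "subject": subject})

# Random intercept per subject, specified as a variance component
model = BinomialBayesMixedGLM.from_formula(
    "y ~ x", {"subject": "0 + C(subject)"}, df)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```

The fixed effect of x plays the same role it would in an ordinary logistic regression, while the subject variance component captures the repeated-measures structure.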