Previous Posts
Interpreting the results of logistic regression can be tricky, even for people who are familiar with performing different kinds of statistical analyses. How do we then share these results with non-researchers in a way that makes sense?
Whenever you use a multi-item scale to measure a construct, a key step is to create a score for each subject in the data set. This score is an estimate of the value of the latent construct (factor) the scale is measuring for each subject. In fact, calculating this score is the final step of […]
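For concreteness, here is a minimal sketch (my own illustration, not from the post) of the simplest kind of scale score, the mean of a subject's item responses, assuming Python with pandas and entirely hypothetical data; the post's own approach may use a more refined factor-score estimate.

```python
# A minimal sketch: score each subject on a four-item scale by averaging items.
import pandas as pd

# Hypothetical responses of three subjects to a four-item 1-5 Likert scale.
data = pd.DataFrame(
    {"item1": [4, 2, 5], "item2": [3, 2, 4], "item3": [5, 1, 4], "item4": [4, 3, 5]},
    index=["subj_A", "subj_B", "subj_C"],
)

# One common approach: the mean of the items; factor-score estimates from a
# fitted factor analysis model are a more refined alternative.
data["scale_score"] = data[["item1", "item2", "item3", "item4"]].mean(axis=1)
print(data)
```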
Ever hear this rule of thumb: “The Chi-Square test is invalid if we have fewer than 5 observations in a cell”? I frequently hear this misunderstood and incorrect “rule.” We all want rules of thumb even though we know they can be wrong, misleading, or misinterpreted. Rules of Thumb are like Urban Myths or like […]
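As a hedged illustration (not from the post), here is a minimal sketch, assuming Python with scipy and made-up counts, that runs a chi-square test of independence and prints the expected cell counts, which are the quantities the usual textbook guideline about small cells refers to.

```python
# A minimal sketch: chi-square test on a hypothetical 2x2 table.
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical observed counts: rows = group, columns = outcome yes/no.
observed = np.array([[12, 8],
                     [5, 15]])

chi2, p, dof, expected = chi2_contingency(observed)
print("chi-square =", round(chi2, 2), " p =", round(p, 3), " df =", dof)
print("expected counts:\n", expected.round(2))
```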
An extremely useful area of statistics is a set of models that use latent variables: variables whose values we can’t measure directly, but instead have to infer from others. These latent variables can be unknown groups, unknown numerical values, or unknown patterns in trajectories.
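As one concrete, hedged illustration of the “unknown groups” case (my own example, not from the post), here is a minimal sketch, assuming Python with scikit-learn and simulated data, of a Gaussian mixture model in which the latent variable is each observation’s unobserved group membership.

```python
# A minimal sketch: a Gaussian mixture, where the latent variable is an
# unknown group membership inferred from the observed values.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical data drawn from two unobserved groups with different means.
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(4, 1, 100)]).reshape(-1, 1)

model = GaussianMixture(n_components=2, random_state=0).fit(x)
labels = model.predict(x)  # inferred (latent) group for each observation
print("estimated group means:", model.means_.ravel().round(2))
```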
Repeated measures is one of those terms in statistics that sounds like it could apply to many design situations. In fact, it describes only one. A repeated measures design is one where each subject is measured repeatedly over time, space, or condition on the dependent variable. These repeated measurements on the same subject are not […]
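For illustration only (not from the post), a minimal sketch, assuming Python with pandas and statsmodels, of repeated measures data in long format, where each subject contributes one row per measurement occasion, and a mixed model with a random intercept per subject, one common way to account for the correlation among a subject’s repeated measurements.

```python
# A minimal sketch: long-format repeated measures data and a random-intercept model.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: 6 subjects each measured at 3 time points.
long = pd.DataFrame({
    "subject": ["s1"]*3 + ["s2"]*3 + ["s3"]*3 + ["s4"]*3 + ["s5"]*3 + ["s6"]*3,
    "time":    [0, 1, 2] * 6,
    "score":   [5, 6, 8, 4, 5, 5, 7, 9, 10, 3, 4, 6, 6, 7, 7, 5, 7, 9],
})

# The random intercept per subject handles the non-independence of rows
# that come from the same subject.
fit = smf.mixedlm("score ~ time", data=long, groups=long["subject"]).fit()
print(fit.summary())
```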
Data Cleaning is a critically important part of any data analysis. Without properly prepared data, the analysis will yield inaccurate results. Correcting errors later in the analysis adds to the time, effort, and cost of the project.
No, degrees of freedom is not “having one foot out the door”! Definitions are rarely very good at explaining the meaning of something. At least not in statistics. Degrees of freedom: “the number of independent values or quantities which can be assigned to a statistical distribution”. This is no exception.
The Kappa Statistic or Cohen’s Kappa is a statistical measure of inter-rater reliability for categorical variables. In fact, it’s almost synonymous with inter-rater reliability. Kappa is used when two raters both apply a criterion based on a tool to assess whether or not some condition occurs. Examples include:
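As a hedged illustration (not from the post), here is a minimal sketch in plain Python computing Cohen’s Kappa for two raters from the observed agreement and the agreement expected by chance.

```python
# A minimal sketch: Cohen's Kappa for two raters assigning "yes"/"no"
# to the same hypothetical set of subjects.
rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater_b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]

n = len(rater_a)
categories = set(rater_a) | set(rater_b)

# Observed agreement: proportion of subjects where the raters match.
p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Agreement expected by chance, from each rater's marginal proportions.
p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories)

kappa = (p_o - p_e) / (1 - p_e)
print(f"observed agreement={p_o:.2f}, expected={p_e:.2f}, kappa={kappa:.2f}")
```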
In the world of statistical analyses, there are many tests and methods for categorical data. Many become extremely complex, especially as the number of variables increases. But sometimes we need an analysis for only one or two categorical variables at a time. When that is the case, one of these seven fundamental tests may […]
There are important ‘rules’ of statistical analysis, like: always run descriptive statistics and graphs before running tests; use the simplest test that answers the research question and meets assumptions; always check assumptions. But there are others you may have learned in statistics classes that don’t serve you or your analysis well once you’re working with […]