Have you ever been befuddled when dusting off a data analysis you ran six months ago?
Ever gritted your teeth when a collaborator invalidated all your hard work by telling you that the data set you were working on had “a few minor changes”?
Or panicked when someone running a big meta-analysis asks you to share your data?
If any of these experiences rings true to you, then you need to adopt the philosophy of reproducible research.
(more…)
A few years back, the winning t-shirt design in a contest for the American Association of Public Opinion Research read “Weighting is the Hardest Part.” And I don’t think the t-shirt was referring to anything about patience!
Most statistical methods assume that every individual in the sample has the same chance of selection.
Complex sample surveys are different. They use multistage sampling designs that involve stratification and cluster sampling, so different units end up with different probabilities of selection.
To get statistical estimates that accurately reflect the population, cases in these samples need to be weighted. Otherwise, the estimates and their standard errors will be biased.
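To make the idea concrete, here is a minimal sketch (in Python, with made-up selection probabilities rather than a real survey design) of how inverse-probability weights pull an estimate back toward the population:

```python
import numpy as np

# Hypothetical sample: group A was oversampled (selection probability 0.5),
# group B was undersampled (selection probability 0.1).
income = np.array([40, 42, 38, 41, 39,   # five respondents from group A
                   80, 85, 90])          # three respondents from group B
p_select = np.array([0.5] * 5 + [0.1] * 3)

# Design weight = inverse of the probability of selection.
weights = 1.0 / p_select

print("Unweighted mean:", income.mean())                        # lets group A dominate
print("Weighted mean:  ", np.average(income, weights=weights))  # restores group B's share
```

Real survey weighting (and the variance estimation that goes with it) is handled by specialized survey procedures, but the inverse-probability logic above is the starting point.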
But selection probabilities are only part of weighting. (more…)
In your typical statistical work, chances are you have already used quantiles, such as the median or the 25th and 75th percentiles, as descriptive statistics.
But did you know quantiles are also valuable in regression, where they can answer a broader set of research questions than standard linear regression?
In standard linear regression, the focus is on estimating the mean of a response variable given a set of predictor variables.
In quantile regression, we can go beyond the mean of the response variable. Instead, we can examine how predictor variables relate to (1) the entire distribution of the response variable or (2) one or more relevant features (e.g., center, spread, shape) of that distribution.
For example, quantile regression can help us understand not only how age predicts the mean or median income, but also how age predicts the 75th or 25th percentile of the income distribution.
Or we can see how the interquartile range (the width between the 25th and 75th percentiles) changes with age. Perhaps the range widens as age increases, signaling that an increase in age is associated with an increase in income variability.
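As a rough sketch of what this looks like in practice, here is a small simulated example using the quantreg function in Python’s statsmodels; the age and income variables are placeholders for the scenario above, not real data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate income that grows with age and becomes more variable with age.
rng = np.random.default_rng(42)
age = rng.uniform(20, 65, size=1000)
income = 20 + 0.8 * age + rng.normal(0, 0.15 * age)
df = pd.DataFrame({"age": age, "income": income})

# Fit one quantile regression per quantile of interest.
for q in (0.25, 0.50, 0.75):
    res = smf.quantreg("income ~ age", df).fit(q=q)
    print(f"q={q:.2f}  intercept={res.params['Intercept']:6.2f}  "
          f"age slope={res.params['age']:.3f}")

# If the age slope is steeper at the 75th percentile than at the 25th,
# the interquartile range of income is widening with age.
```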
In this webinar, we will help you become familiar with the power and versatility of quantile regression by discussing topics such as:
- Quantiles – a brief review of their computation, interpretation and uses;
- Distinction between conditional and unconditional quantiles;
- Formulation and estimation of conditional quantile regression models;
- Interpretation of results produced by conditional quantile regression models;
- Graphical displays for visualizing the results of conditional quantile regression models;
- Inference and prediction for conditional quantile regression models;
- Software options for fitting quantile regression models.
Join us for this webinar to learn how quantile regression can expand the scope of research questions you can address with your data.
Note: This training is an exclusive benefit to members of the Statistically Speaking Membership Program and part of the Stat’s Amore Trainings Series. Each Stat’s Amore Training is approximately 90 minutes long.
(more…)
Many who work with statistics are already functionally familiar with the normal distribution, and maybe even the binomial distribution.
These common distributions are helpful in many applications, but what happens when they just don’t work?
This webinar will cover a number of statistical distributions, including the:
- Poisson and negative binomial distributions (especially useful for count data)
- Multinomial distribution (for responses with more than two categories)
- Beta distribution (for continuous percentages)
- Gamma distribution (for right-skewed continuous data)
- Bernoulli and binomial distributions (for probabilities and proportions)
- And more!
We’ll also explore the relationships among statistical distributions, including those you may already use, like the normal, t, chi-squared, and F distributions.
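As a small preview (a sketch using scipy.stats with arbitrary parameters, not material from the webinar), here is one way a couple of these distributions and relationships show up in code:

```python
import numpy as np
from scipy import stats

# Count data are often more spread out than the Poisson allows (variance > mean);
# the negative binomial accommodates that extra spread.
counts = stats.nbinom.rvs(n=2, p=0.3, size=10_000, random_state=123)
print("mean:", counts.mean(), " variance:", counts.var())  # variance well above the mean

# One relationship among the familiar distributions: the square of a standard
# normal variable follows a chi-squared distribution with 1 degree of freedom.
z = np.random.default_rng(123).standard_normal(50_000)
ks = stats.kstest(z**2, "chi2", args=(1,))
print("KS p-value:", ks.pvalue)  # usually not small, i.e., consistent with chi2(1)
```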
Note: This training is an exclusive benefit to members of the Statistically Speaking Membership Program and part of the Stat’s Amore Trainings Series. Each Stat’s Amore Training is approximately 90 minutes long.
(more…)
Often the process you are modeling is not a simple, direct path from a treatment or intervention to the outcome. In essence, the value of X does not always directly determine the value of Y.
Mediators can transmit part of the effect of X on Y. Moderators can change the strength or direction of that relationship. And sometimes the mediators and moderators affect each other.
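In equation form, a simple mediation model and a simple moderation model (standard textbook formulations with generic symbols) look like this:

```latex
% Mediation: part of X's effect on Y flows through the mediator M
M = i_1 + aX + e_1
Y = i_2 + c'X + bM + e_2      % indirect effect = ab, direct effect = c'

% Moderation: the moderator W changes the strength of the X-to-Y relationship
Y = i_3 + b_1 X + b_2 W + b_3 (X \times W) + e_3
```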
There are two main types of factor analysis: exploratory and confirmatory. Exploratory factor analysis (EFA) is data driven, such that the collected data determines the resulting factors. Confirmatory factor analysis (CFA) is used to test factors that have been developed a priori.
Think of CFA as a process for testing what you already think you know.
CFA is an integral part of structural equation modeling (SEM) and path analysis. The hypothesized factors should always be validated with CFA in a measurement model prior to incorporating them into a path or structural model. Because… garbage in, garbage out.
CFA is also a useful tool for checking the reliability of a measurement instrument with a new population of subjects, or for further refining an instrument that is already in use.
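For a sense of what specifying a CFA looks like in code, here is a minimal sketch of a two-factor measurement model using the Python package semopy; the factor names, item names, and data file are purely hypothetical:

```python
import pandas as pd
from semopy import Model

# Hypothetical measurement model: six observed items loading on two
# a-priori factors, written in the lavaan-style syntax semopy uses.
desc = """
anxiety    =~ item1 + item2 + item3
depression =~ item4 + item5 + item6
"""

df = pd.read_csv("survey_items.csv")  # hypothetical data set containing item1..item6

model = Model(desc)
model.fit(df)            # estimates loadings and factor (co)variances
print(model.inspect())   # parameter estimates, standard errors, p-values
```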
Elaine will provide an overview of CFA. She will also (more…)