Statistically speaking, when we see a continuous outcome variable we often worry about outliers and how these extreme observations can impact our model.
But have you ever had an outcome variable with no outliers because there was a boundary value at which accurate measurements couldn’t be or weren’t recorded?
Examples include:
- Income data where every value above $100,000 is recorded as exactly $100,000
- Soil toxicity ratings where the device cannot measure values below 1 ppm
- Number of arrests where there are no zeros because the data set came from police records where all participants had at least one arrest
These are all examples of data that are censored or truncated. In the first two examples the data are censored: out-of-range values appear in the data, but only as the boundary value. In the third the data are truncated: observations beyond the boundary are missing from the data set entirely. Failing to account for the censoring or truncation will produce biased estimates.
This webinar will discuss what truncated and censored data are and how to identify them.
Several different models are used with this type of data. We will go over each model and discuss which type of data it is appropriate for.
We will then compare the results of models that account for truncation or censoring to those that do not, so you can see the impact the wrong model choice has on the results.
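To see why ignoring censoring biases results, here is a minimal simulation sketch (not from the webinar; variable names and the censoring cap are illustrative). It generates a continuous outcome with a known slope, right-censors it at a cap, the way the income example records everything above $100,000 as $100,000, and compares the ordinary least-squares slope on the uncensored versus the censored outcome:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# True model: y = 1 + 2x + noise
x = rng.normal(size=n)
y_true = 1.0 + 2.0 * x + rng.normal(size=n)

# Right-censoring: values above the cap are recorded at the cap,
# as in the income example above
cap = 2.0
y_cens = np.minimum(y_true, cap)

# OLS slope from each version of the outcome
slope_true = np.polyfit(x, y_true, 1)[0]
slope_cens = np.polyfit(x, y_cens, 1)[0]

print(f"slope on uncensored outcome: {slope_true:.2f}")
print(f"slope on censored outcome:   {slope_cens:.2f}")
```

The slope fit to the censored outcome is pulled noticeably toward zero, because large values of x can no longer produce proportionally large values of y. Models built for censored data (such as the tobit-type models this webinar covers) recover the true slope by modeling the censoring mechanism instead of ignoring it.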
Note: This training is an exclusive benefit to members of the Statistically Speaking Membership Program and part of the Stat’s Amore Trainings Series. Each Stat’s Amore Training is approximately 90 minutes long.
About the Instructor
Jeff Meyer is a statistical consultant, instructor and writer for The Analysis Factor.
Jeff has an MBA from the Thunderbird School of Global Management and an MPA with a focus on policy from NYU Wagner School of Public Service.
Just head over and sign up for Statistically Speaking. You'll get access to this training webinar, 130+ other stats trainings, a pathway to work through the trainings that you need — plus the expert guidance you need to build statistical skill with live Q&A sessions and an ask-a-mentor forum.