If you’ve been doing data analysis for very long, you’ve certainly come across the terms, concepts, and processes of matrix algebra. Not just matrices, but:
- Matrix addition and multiplication
- Traces and determinants
- Eigenvalues and eigenvectors
- Inverting and transposing
- Positive and negative definiteness
These mathematical ideas are at the heart of calculating everything from regression models to factor analysis, and from multivariate statistics to multilevel models.
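To make that concrete, here is a minimal sketch (my own illustration, not material from the webinar) of how ordinary least squares regression coefficients come straight out of the operations listed above: a transpose, a matrix multiplication, and an inverse.

```python
# A minimal sketch of OLS computed directly with matrix algebra:
#   beta-hat = (X'X)^(-1) X'y
import numpy as np

rng = np.random.default_rng(42)
n = 100

# Design matrix: an intercept column plus two simulated predictors
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# Transpose, multiply, invert -- exactly the operations listed above
beta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
print(beta_hat)  # estimates close to [1.0, 2.0, -0.5]
```

Statistical software typically solves this with a numerically stable decomposition rather than an explicit inverse, but the underlying matrix algebra is the same.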
Data analysts can get away without ever understanding matrix algebra, certainly. But there are times when having even a basic understanding of how matrix algebra works, and what it has to do with data, can help your analyses make a little more sense.
In this webinar we introduce some matrix algebra terms and methods that directly apply to the statistical analyses you’re already doing.
Note: This training is an exclusive benefit to members of the Statistically Speaking Membership Program and part of the Stat’s Amore Trainings Series. Each Stat’s Amore Training is approximately 90 minutes long.
About the Instructor
Karen Grace-Martin helps statistics practitioners gain an intuitive understanding of how statistics is applied to real data in research studies.
She has guided and trained researchers through their statistical analyses for over 15 years as a statistical consultant at Cornell University and through The Analysis Factor. She has master’s degrees in both applied statistics and social psychology and is an expert in SPSS and SAS.
Just head over and sign up for Statistically Speaking.
You'll get access to this training webinar, 130+ other stats trainings, a pathway to work through the trainings that you need — plus the expert guidance you need to build statistical skill with live Q&A sessions and an ask-a-mentor forum.
Teresa Aloba Le says
I find myself missing out on what I think would be valuable opportunities to attend your courses. The biggest hindrance for me is the time zone difference; I’m based in Melbourne, Australia.
Karen Grace-Martin says
Hi Teresa,
We do try to vary the times of our trainings to accommodate different time zones. We actually have a lot of members in Australia.
erling a bringeland says
I much enjoyed your explanation of the reversed signs on regression coefficients in ordinal regression, depending on the statistical package used. Recently I made an unexpected discovery using SPSS. I did an ordinal regression with three categories in the dependent variable (response = 3, stable disease = 2, progression = 1). However, the test of parallel lines was significant (p = 0.039). For some reason I recoded the dependent variable (progression = 3, stable disease = 2, response = 1). As expected, all that happened was that the ORs were flipped (the betas shifting signs); the significance level of each beta was the same, the Nagelkerke R-squared the same, etc. BUT the test of the parallel lines assumption came out highly non-significant, as ideally wished for (p = 0.9).
Is there any way to logically explain this difference: one way of coding seemingly leading to an invalid model, the other way not?
Karen Grace-Martin says
Hi Erling,
I can’t think of any logical reason for that. I would suggest checking with IBM support as that is very odd (pun intended).
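For readers who want to see the sign-flipping half of Erling’s observation for themselves, here is a minimal sketch using Python’s statsmodels rather than SPSS. The simulated data and the OrderedModel approach are my own illustration, not Erling’s analysis, and this doesn’t reproduce the SPSS test of parallel lines; it only demonstrates that reverse-coding the outcome flips the slope’s sign while leaving its magnitude and significance unchanged.

```python
# Illustrative sketch with simulated (hypothetical) data: reverse-coding
# an ordinal outcome flips the sign of the coefficient in a
# proportional-odds (ordinal logistic) model.
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=(n, 1))

# Simulate a 3-category ordinal outcome via a logistic latent variable,
# coded 0 < 1 < 2
latent = 1.2 * x[:, 0] + rng.logistic(size=n)
y = np.digitize(latent, bins=[-1.0, 1.0])  # values 0, 1, 2

# Fit the same model under both codings of the outcome
res_original = OrderedModel(y, x, distr='logit').fit(method='bfgs', disp=False)
res_reversed = OrderedModel(2 - y, x, distr='logit').fit(method='bfgs', disp=False)

# The slope on x flips sign; its magnitude is unchanged
print(res_original.params[0], res_reversed.params[0])
```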