Standardization for MFA with categorical data
 Sharyn L. Rosenberg posted on Tuesday, March 31, 2009 - 5:42 pm
I performed some multilevel factor analyses on categorical data, looking at how discriminating achievement test items were at both the student and school levels of analysis. I compared the item factor loadings descriptively, conducted a "measurement invariance"-type analysis using scaled chi-square difference tests on models where the factor loadings were constrained to be equal across the two levels vs. estimated freely, and also examined the size and statistical significance of the loadings at each level of analysis.

I identified the model by setting the factor variance to 1 at each level, as is typically done in an IRT framework. Someone has asked whether it would have been more appropriate to set the first factor loading to 1, given that part of my analysis is analogous to measurement invariance testing. I did not do this because I was interested in evaluating all of the items (as is typical in an IRT context), and I had also read that standard errors can be inflated if the item chosen for identification has low discrimination. Also, the between-level results can be very different depending on whether they are standardized (STDYX) or not. What do you make of this?

I am not sure which way of identifying the models is most appropriate given my research questions, or whether standardization is appropriate here. Any thoughts?
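
[Editor's note: for concreteness, a minimal Mplus sketch of the kind of setup described above, assuming five hypothetical binary items u1-u5, a hypothetical school identifier schoolid, and one factor per level. The constrained comparison model for the scaled chi-square difference test would additionally give each loading the same equality label on both levels, as illustrated in the sketch after the reply below.]

  TITLE: Two-level factor model, factor variances fixed at 1;
  VARIABLE:
    NAMES = schoolid u1-u5;
    CATEGORICAL = u1-u5;
    CLUSTER = schoolid;
  ANALYSIS:
    TYPE = TWOLEVEL;
    ESTIMATOR = MLR;           ! loglikelihood-based estimation with a scaling
                               ! correction for chi-square difference testing
  MODEL:
    %WITHIN%
    fw BY u1* u2-u5;           ! all within-level loadings estimated freely
    fw@1;                      ! within-level factor variance fixed at 1
    %BETWEEN%
    fb BY u1* u2-u5;           ! all between-level loadings estimated freely
    fb@1;                      ! between-level factor variance fixed at 1
  OUTPUT:
    STDYX;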
 Bengt O. Muthen posted on Wednesday, April 01, 2009 - 10:30 am
Say that the factor loadings for a factor are in reality the same across Within and Between, but the factor variance is different. When you fix the factor variance to 1 on each level, the factor variance is absorbed into the loadings (increasing the loadings if the variance is > 1 and decreasing them if < 1). The factor variance difference across levels will therefore be pushed into the loadings, and they will no longer be equal despite being equal in reality. Fixing a loading at one to set the factor metric avoids this problem. If you hypothesize that the loadings are equal across levels, you could alternatively hold them equal with, say, the Within factor variance fixed at one and the Between factor variance left free.
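
[Editor's note: a sketch of the two alternatives mentioned in this reply, again using the hypothetical items u1-u5 (VARIABLE and ANALYSIS commands as in the sketch above); the two MODEL specifications would be run as separate inputs.]

  ! Alternative A (separate run): metric set by fixing the first loading at 1,
  ! factor variances free on both levels.
  MODEL:
    %WITHIN%
    fw BY u1 u2-u5;            ! u1 loading fixed at 1 by default; fw variance free
    %BETWEEN%
    fb BY u1 u2-u5;            ! u1 loading fixed at 1 by default; fb variance free

  ! Alternative B (separate run): loadings held equal across levels via the
  ! (L1)-(L5) labels, within factor variance fixed at 1, between variance free.
  MODEL:
    %WITHIN%
    fw BY u1* (L1)
          u2  (L2)
          u3  (L3)
          u4  (L4)
          u5  (L5);
    fw@1;                      ! within factor variance fixed at 1
    %BETWEEN%
    fb BY u1* (L1)
          u2  (L2)
          u3  (L3)
          u4  (L4)
          u5  (L5);
    fb*;                       ! between factor variance estimated freely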