Message/Author 

Anonymous posted on Thursday, March 07, 2013  7:47 am



I've been reading your online notes on multiple group analysis with categorical outcomes. After the configural invariance step you suggest going straight to a model where intercepts and slopes etc are constrained across groups and the factor means are freed. This is different from the continuous case where one might examine the factor loadings first and then add in the intercepts in two separate steps, and I was wondering why this would be. 


This difference between the continuous-item case and the categorical-item case is due to having less information with categorical items. With binary items there is an identification issue that prevents testing loading invariance alone (metric invariance), at least when allowing group-varying residual variances. We therefore recommend going straight from configural to scalar invariance. With polytomous items it is possible to identify and analyze the metric model, but even so, it is not as straightforward as with continuous items. Roger Millsap has written on the polytomous case; see for instance his book Statistical Approaches to Measurement Invariance. 
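For binary items, the recommended jump from configural straight to scalar can be sketched as a two-group Mplus input (a minimal illustration only, assuming WLSMV with the default Delta parameterization; the item names y1-y5 and grouping variable g are hypothetical):

```
! Configural model: loadings and thresholds free in both groups,
! factor means fixed at 0 and scale factors fixed at 1 everywhere.
VARIABLE:  NAMES = y1-y5 g;
           CATEGORICAL = y1-y5;
           GROUPING = g (1 = g1  2 = g2);
ANALYSIS:  ESTIMATOR = WLSMV;
MODEL:     f BY y1-y5;
MODEL g2:  f BY y2-y5;        ! free the loadings in group 2
           [y1$1-y5$1];       ! free the thresholds in group 2
           {y1-y5@1};         ! keep scale factors fixed at 1
           [f@0];             ! keep the factor mean at 0
```

For the scalar model, drop the MODEL g2 overrides: the Mplus defaults then hold loadings and thresholds equal across groups, fix the factor mean at 0 and scale factors at 1 in the first group, and free the factor mean and scale factors in the second group.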

Anonymous posted on Friday, March 08, 2013  12:33 am



Thank you, that is extremely helpful. There's a variety of estimators that I can use in Mplus when fitting these models; my outcomes are binary. I recall being told once that ML is associated with the Differential Item Functioning/Item Response Theory approach, and WLSMV is associated with the CFA approach. Is that correct? 


No, that is not correct. IRT and CFA with categorical indicators are the same model. You can use either ML or WLSMV as estimators when you have categorical variables and factors. With ML, each factor requires one dimension of numerical integration, and each residual correlation also requires one dimension of integration, so ML can become computationally heavy with several factors. In that case, WLSMV is preferred. Both ML and WLSMV are good for IRT, as is Bayes. 
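Concretely, the estimator is chosen in the ANALYSIS command; a sketch with hypothetical item names:

```
VARIABLE:  NAMES = u1-u10 g;
           CATEGORICAL = u1-u10;
ANALYSIS:  ESTIMATOR = WLSMV;    ! limited-information weighted least squares

! Or full-information maximum likelihood instead:
! ANALYSIS:  ESTIMATOR = ML;
!            ALGORITHM = INTEGRATION;  ! numerical integration over the factors
```

With ML, each added factor dimension multiplies the number of integration points, which is what makes WLSMV attractive for multi-factor models.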

Anonymous posted on Monday, March 11, 2013  9:01 am



I'm trying to use the steps you outline for testing measurement invariance across multiple groups with categorical outcomes, within a MACS framework. Having obtained my baseline models for each group, I have fit the configural model for each group as, e.g.,

f1 by a* b c d e;
[f1@0];
f1@1;
[a$1 b$1 c$1 d$1 e$1];
{a@1 b@1 c@1 d@1 e@1};

According to your outline, would the next equivalent step be a model in which the loadings and intercepts are constrained to be the same across groups, with the means still fixed at zero in each group, factor variances at 0 in each group, and scale factors fixed at 1 in the first group and freely estimated in the other groups? I can estimate this model (it isn't a great fit), but I end up having to examine MI for factor means and slopes/thresholds simultaneously, something I would prefer not to do. I also wondered about using DIFFTEST to compare models in the invariance sequence, as when the scale factors are freely estimated I end up with more parameters in the simpler model (i.e., loadings constrained) than in the more complex model (loadings unconstrained but scale factors fixed). Any pointers you can offer will be much appreciated as always! 


Please see page 485 of the user's guide and the Topic 2 course handout on the website, where the inputs are given under multiple group analysis. Factor variances should not be fixed to zero. If you continue to have problems, send the output and your license number to support@statmodel.com. 
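For reference, the default multiple-group setup, which corresponds to the scalar model with factor variances left free, can be sketched as follows (item and grouping-variable names are hypothetical):

```
VARIABLE:  NAMES = a b c d e g;
           CATEGORICAL = a-e;
           GROUPING = g (1 = g1  2 = g2);
ANALYSIS:  ESTIMATOR = WLSMV;
MODEL:     f1 BY a-e;
! Defaults: loadings and thresholds held equal across groups;
! factor mean fixed at 0 and scale factors fixed at 1 in g1;
! factor mean, factor variance, and scale factors free in g2.
SAVEDATA:  DIFFTEST = deriv.dat;  ! derivatives for a DIFFTEST comparison
```

On the DIFFTEST question: with WLSMV, chi-square difference testing is done in two runs, first estimating the less restrictive model with SAVEDATA: DIFFTEST = deriv.dat; and then the more restrictive model with ANALYSIS: DIFFTEST = deriv.dat;. The raw parameter counts of the two models are not compared directly.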


Hi, I was wondering whether it makes sense to test for residual variance invariance (as can be done in CFAs with continuous observed variables) once scalar invariance has been established in a multigroup CFA with categorical data using the Theta parameterization. Thus, after scalar invariance has been found (following the procedure described on page 486 of the Mplus manual), would it make sense to estimate a third model in which the residual variances are again fixed to one in all groups, and to compare this model to model #2 described at the top of page 486? Looking forward to your answer. 


Yes. 
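That third model can be sketched as follows (hypothetical names; this assumes the preceding scalar model left the second group's residual variances free, as in the p. 486 procedure):

```
VARIABLE:  NAMES = u1-u5 g;
           CATEGORICAL = u1-u5;
           GROUPING = g (1 = g1  2 = g2);
ANALYSIS:  ESTIMATOR = WLSMV;
           PARAMETERIZATION = THETA;
MODEL:     f BY u1-u5;
MODEL g2:  u1-u5@1;   ! fix group 2's residual variances back to 1
                      ! (in the scalar model they were free: u1-u5;)
```

The comparison against the scalar model would again use the two-step DIFFTEST procedure rather than a naive chi-square difference.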

Alvin posted on Thursday, May 08, 2014  1:09 am



Hi Dr Muthen, can I just clarify: is the configural model one where factor loadings and intercepts are free to vary and no equality constraints are imposed across groups? I realize Mplus by default holds factor loadings and intercepts equal across groups to test measurement invariance. Does this mean that in testing invariance of factor loadings alone, one has to override the default equality constraint on the intercepts as the first step? 


Yes, on configural. Note that the current Mplus allows the ANALYSIS option MODEL = CONFIGURAL METRIC SCALAR; where your MODEL statement simply says f BY y1-y10; and the rest is done automatically. 
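In full, that convenience option looks like the following sketch (the grouping variable g and its labels are hypothetical; with binary outcomes the metric model may not be available, as discussed earlier in this thread):

```
VARIABLE:  NAMES = y1-y10 g;
           GROUPING = g (1 = g1  2 = g2);
ANALYSIS:  MODEL = CONFIGURAL METRIC SCALAR;
MODEL:     f BY y1-y10;
```

Mplus then estimates all three invariance models in one run and reports chi-square difference tests between them.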

Alvin posted on Thursday, May 08, 2014  11:30 pm



Thanks very much Dr Muthen - that's fantastic! I notice you don't get standardized estimates in the output using this option - is there a way around this? Also, in testing latent mean difference across groups, I constrained the factor mean (to 0) and the variance of the reference group (to 1) for comparison, while letting the factor mean and variance of the other group be free. This was done with equal intercepts across groups (scalar model), and the model showed a good fit - does this mean that the latent mean structure differs across groups? 


As the output says, you don't get standardized output when you ask for several of configural, metric, and scalar, but you get it if you do one at a time. As for your last question, perhaps you are asking if the factor means are different across groups - if so, the z-value for the factor mean in the second group will tell you. You should study up on our Topic 1 discussion of invariance issues; the video and handout are on our website. 
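So, running one invariance model per analysis restores the standardized output; a sketch:

```
ANALYSIS:  MODEL = SCALAR;     ! one invariance model at a time
MODEL:     f BY y1-y10;
OUTPUT:    STANDARDIZED;       ! standardized solutions now available
```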

Alvin posted on Thursday, May 15, 2014  11:49 pm



Thanks very much Dr Muthen. I've read your notes on multigroup CFA. I've tested configural, metric, and scalar invariance and, further, invariance of factor variances and residual variances, on each of the five subscales from a measure I developed. I also looked at latent mean differences across groups. As predicted, two of the subscales did not pass the scalar test; that is, the intercepts varied across groups. What do you do in this case? 


This means you have partial measurement invariance. Please see the Topic 1 course handout and video, where this is discussed under multiple group analysis. 
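A partial-invariance model frees only the offending parameters in a group-specific MODEL command; a sketch with hypothetical item names (with categorical items, intercept non-invariance shows up in the thresholds):

```
MODEL:     f BY u1-u5;
           [u1$1-u5$1];
MODEL g2:  [u3$1 u4$1];  ! free only the non-invariant thresholds
                         ! in group 2; all else stays equal
```

Depending on the parameterization, the scale factor (Delta) or residual variance (Theta) of a freed item may also need to be freed; the Topic 1 materials discuss these choices.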
