Message/Author 

Kaja posted on Tuesday, June 14, 2005  8:32 am



Hi there, I am attempting to produce a final model which takes into account measurement invariance I have found by sex. We are working with a two-factor solution, and when we regress the two factors on sex we find significant paths for both factors. Using the modification indices, we then identified several observed variables (loading on factor 2) which show significant paths to sex. When these paths from observed variables to sex were included in the model, the relationship between factor 2 and sex became nonsignificant. We then took out the path between factor 2 and sex. However, despite the fact that this path was not significant, removing it had a dramatic effect on the chi-square value, suggesting that the path needed to be in the model. We then added the path back in, but set it to zero. This fixed the chi-square problem. We are unsure as to why this would happen. Why are we getting such dramatically different chi-square values if the paths we remove are nonsignificant, and why does including a path that is set to 0 have such a dramatic effect on the chi-square value? Thank you so much for your time, Kaja 


Taking a path out and fixing it to zero is the same thing. If you cannot resolve this, you will need to send the relevant outputs and your license number to support@statmodel.com. 


I am attempting to compare nested measurement models to test for measurement invariance. I attempted to freely estimate the model's parameters for the "female" sample by using the following command: MODEL: support BY facesco@1 facesad; combat WITH support; MODEL female: support BY facesco@1 facesad; combat WITH support; However, I am receiving the following warning: THE RESIDUAL COVARIANCE MATRIX (THETA) IN GROUP MALE IS NOT POSITIVE DEFINITE. THIS COULD INDICATE A NEGATIVE VARIANCE/RESIDUAL VARIANCE FOR AN OBSERVED VARIABLE, A CORRELATION GREATER OR EQUAL TO ONE BETWEEN TWO OBSERVED VARIABLES, OR A LINEAR DEPENDENCY AMONG MORE THAN TWO OBSERVED VARIABLES. CHECK THE RESULTS SECTION FOR MORE INFORMATION. PROBLEM INVOLVING VARIABLE FACESCO. Am I specifying the variant model incorrectly, or does this warning relate to some idiosyncrasy of the male sample in my dataset? Thank you. 


There seems to be a problem with the variable facesco. Does this variable have a negative residual variance in the male group? This is usually the problem. 

Xuan Huang posted on Wednesday, May 16, 2007  10:03 am



Dear professors: Could you give us some suggestions on testing measurement invariance in Mplus? We want to test whether parenting measures are equivalent across mothers and fathers. Because the mother and the father are from the same family unit, the two groups in comparison are not independent. Can we take care of nonindependence across groups in a multilevel, multiple-group CFA in Mplus? Thanks a lot in advance. 


You can do this by taking a multivariate approach where each observation has data for both mothers and fathers. You would then have factors for mothers and factors for fathers and you would place equalities on the measurement parameters to test for measurement invariance. See Example 6.14 which is a growth model and just imagine it without the growth component. 
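A minimal sketch of this wide-format (multivariate) setup, with hypothetical indicator names m1-m3 for mother reports and d1-d3 for father reports; the shared labels impose the loading equalities that test measurement invariance across parents:

```
MODEL:
  fmom BY m1-m3 (L1-L3);   ! mother factor
  fdad BY d1-d3 (L1-L3);   ! father factor; same labels constrain
                           ! the loadings equal across parents
  fmom WITH fdad;          ! parents from the same family covary
```

Intercept invariance could be added the same way, with labeled [m1-m3] and [d1-d3] statements.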


I have a related question. How should one test for measurement invariance of a scale across two groups (defined at Level 1, e.g., male/female) when L1 units are nested in L2 units (e.g., class)? I think that a multiple-group approach would not be ideal because male students are not independent from female students in the same class, and, to my knowledge, grouping=gender, type=complex, & cluster=class would only adjust for dependence within each gender group, but not across gender groups. Is this correct? I also don't think that a multivariate approach would work like it does for the above mother/father scenario. Each observation at the class level would have multiple males and females, as opposed to typically a single mother and father at the family level. Maybe aggregating needs to be done? I used Mplus to test a multiple-group SEM model with complex data (clustering and stratification). I used type=complex to address the complex nature of the data, but am wondering how dependence of units across groups is handled by Mplus and whether my tests of structural invariance constraints, which I think assume the groups are independent, are biased? Thanks, Scott 


It is true that if clusters contain both males and females, the males and females are not independent groups. With TYPE=COMPLEX in Mplus, an adjustment has been made to take this lack of independence into account. 


So you are saying that Mplus (with type=complex) not only accounts for cluster-based dependence within groups (e.g., males and females), but also cluster-based dependence between gender groups in a multiple group analysis? Then with Xuan Huang's situation involving mothers and fathers above, is it appropriate (as an alternative to the multivariate approach) to use a multiple group model (group = parent) with type=complex and clustering on family, which would typically result in a single observation for each cluster within group? If so, should this lead to equivalent results with the multivariate approach you suggested? Thanks Linda! 


Yes. The multiple group approach should yield approximately equivalent results. 


Hi Linda and/or Bengt, I am having a problem testing for invariance of a second-order CFA with 3 first-order factors and one second-order factor using robust estimation. Using ordinary ML, we encounter no problems, though the CFI is lower than what one would like, so we wanted to see what the robust estimate of CFI looked like. When we run the same exact invariance model (across whites vs. minorities) we receive an error message saying that the model is not identified due to a problem involving parameter 77. Parameter 77 is the value in the Alpha vector for the second-order factor in the minority group. Let me know if you need me to send you our input file (and data file). Thanks! Rick Zinbarg 


So you are saying when you change the estimator from ML to MLR you get the error message? 


right 


Please send your input, data, output, and license number to support@statmodel.com. 


Greetings Linda, Todd Little and his colleagues propose the "effects coding" method to identify MACS models in various papers. This method involves constraining the factor loadings to average 1 in each group (for each factor) and the intercepts to sum to 0 in each group (again for each factor). Everything else is freely estimated. Is it possible to implement this in Mplus (I believe so)? If it is, how would you implement it in Mplus? Thanks a lot! 


You should be able to do this using MODEL CONSTRAINT. 


Hi again, That's what I thought. But I'm not overly familiar with this Mplus command (it always got me a bit confused about how to use it...), could you give me a little push? Thanks in advance. 


Why don't you give it a try and if you fail send your input, data, output, and license number to support@statmodel.com. 


Got it! Thanks! Simpler than I thought (if I'm right; if not, correct me). For a single group, that would give:
MODEL:
f1 BY y1* (c1) y2 (c2) y3 (c3);
f2 BY y4* (c4) y5 (c5) y6 (c6);
[y1] (c7); [y2] (c8); [y3] (c9);
[y4] (c10); [y5] (c11); [y6] (c12);
[F1 F2];
MODEL CONSTRAINT:
c1 = 3 - c2 - c3;
c4 = 3 - c5 - c6;
c7 = 0 - c8 - c9;
c10 = 0 - c11 - c12; 


The best way to tell if an input produces what you want is to run it and look at the results. 


Yes, sorry I did not specify it. It does run correctly and provides fit indices equal to those obtained under different constraints (marker variables, latent standardization). I was just wondering if it could be simplified. For instance, I tried f1 BY y1* y2 y3 (c1-c3); and it did not work (it told me I had more constraints than variables). But this way, everything is alright. Thanks again. 


You can use a list of labels only with a list of variables. This is why it did not work. 


Thank you very much Linda! For those who followed this discussion, the previous input can thus be simplified to (and it works):
MODEL:
f1 BY y1* (c1) y2-y3 (c2-c3);
f2 BY y4* (c4) y5-y6 (c5-c6);
[y1-y6] (c7-c12);
[F1 F2];
MODEL CONSTRAINT:
c1 = 3 - c2 - c3;
c4 = 3 - c5 - c6;
c7 = 0 - c8 - c9;
c10 = 0 - c11 - c12; 


Hi, I have recently done some multigroup confirmatory factor analysis models to test for measurement invariance across three groups. I tested for factor loadings, intercepts, and residual variances, but while invariance held for the first two, it did not for residual variances. I know that measurement invariance requires the three to hold, but what does the above mean in terms of the interpretability of the estimates? Because the intercept and the loadings are equal, can the estimates be compared across groups? Is it just that the precision is different? What limits does this pose on comparative analyses? Thanks! 


We are not of the school that residual variances need to be invariant. 


Dear Mr and/or Ms Muthén, I am having a problem checking for measurement invariance of the Demand Control Questionnaire in hospital workers of Brazil and Sweden using multiple group analysis. When I performed Confirmatory Factor Analysis for each country separately, using the WLSMV estimator for categorical variables, I found that the best-fitting model was with 3 factors (D1 by i1-i5, D2 by i6-i8, and D3 by i9-i10) for both countries, but with different cross-loadings (Brazil: D1 by i6 and Sweden: D1 by i8). Is it possible to proceed with multiple group analysis? Do these models have equal factorial structure? When I tried multiple group analysis, not considering the cross-loadings, first of all I fixed the highest loadings of each dimension at 1. After that, I used the default of Mplus (the loading of the first item of each dimension). However, using this procedure I couldn't check equal loadings for the items I had fixed at 1, so I repeated the procedure fixing each factor variance at 1, but the results were totally different. Was it correct? What procedure should I use? And why do the results differ? Thanks in advance, Yara 


If you have one factor loading fixed to one and the factor variance free, or the factor variance fixed to one and all factor loadings free, you should get the same chi-square value. If you do not, please send your full outputs and license number to support@statmodel.com. 


Thanks, Linda. The chi-square values are very similar, but not equal. I will send you the outputs. Is it correct to fix the factor variance at 1 to check equal loadings? And how about the factorial structure? Do you think I should proceed? Thanks again, Yara 


Is it possible to assess measurement invariance over two time points controlling for covariates (e.g. comorbidities)? Could you suggest to me any references for this? 


I don't know of a reference but we show how to do this in our Topic 4 handout in the example for multiple indicator growth. 


Thank you for your reply. But, will it be possible to estimate response shift using Oort or Schmitt's approach when controlling for covariates (e.g. comorbidities)? Also, I have data for two time points. 


I am not familiar with either Oort or Schmitt's approach. You can test for measurement invariance across time for two or more timepoints using the inputs shown in the Topic 4 handout. 


When intercepts are fixed by default, how can I free them in a longitudinal CFA (6 times / 4 indicators)? When I do as recommended in the manual I get the error message: THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. ... and problems with the alpha parameter. Thanks 


Intercepts are not held equal across time as the default. Are you treating each time point as a group in a multiple group analysis? 


Thank you for your response... Do you recommend to treat time points as multiple groups? This is my unrestricted model which has 6 latent variables (times 1 to 6) model: sns1 by lxscl07 lxscl16 lxscl35 lxscl62 ; sns2 by lzscl07 lzscl16 lzscl35 lzscl62 ; sns3 by lqscl07 lqscl16 lqscl35 lqscl62 ; sns4 by lvscl07 lvscl16 lvscl35 lvscl62 ; sns5 by lwscl07 lwscl16 lwscl35 lwscl62 ; sns6 by lbscl07 lbscl16 lbscl35 lbscl62 ; in a next model I added residual covariances, then tested altered items to be fixed. Then I constrained factor loadings to be invariant over time. But for a test of strong factorial invariance I have to fix the intercepts to be invariant over time...How can I do this? 


I think I got it: [lxscl07 lzscl07 lqscl07 lvscl07 lwscl07 lbscl07] (5); [lxscl16 lzscl16 lqscl16 lvscl16 lwscl16 lbscl16] (6); [lxscl35 lzscl35 lqscl35 lvscl35 lwscl35 lbscl35] (7); [lxscl62 lzscl62 lqscl62 lvscl62 lwscl62 lbscl62] (8); 1 to 4 were the factor loading constraints. Thanks anyway! 


Time points should not be treated as groups. They are not independent. Testing for measurement invariance across time is shown in the Topic 4 course handout starting with slide 78. 


Hi, I am running into a puzzling result when trying to compare configural versus metric invariance of factor loadings across groups. I have a model that is essentially configurally invariant (I had to constrain a few loadings to be equal across groups to prevent some Heywood cases) and am comparing it with a metric invariant model in which all the loadings are constrained to be equal across groups. My understanding was that the configural invariant model could not have a larger chi-square than the metric invariant model, but this is precisely the result I am getting. Is my understanding incorrect, or does this seem odd to you too? Thanks! Rick Zinbarg 


The more restrictive model should have the higher chi-square. It would be impossible to say more without seeing both outputs and your license number at support@statmodel.com. 


Hi Linda, I am doing a multigroup invariance test. When I constrain parameters (factor loadings or residual variances or factor correlations or all three together), only the unstandardized parameters are constrained (having equal values for the two groups), whereas the standardized parameters remain different in values. Am I doing the right thing? Really appreciate your advice. 


The standardized coefficients will be different even when the unstandardized coefficients are equal because the standardization is done using the standard deviations for each group not the overall standard deviations. 
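This can be seen from the standardization arithmetic itself. A sketch of the standard formulas for an indicator y with loading \lambda in group g (where \psi_g is that group's factor variance and \theta_g its residual variance):

```
\lambda^{STD}_{g} \;=\; \lambda \,\frac{\sqrt{\psi_g}}{\sigma_{y,g}},
\qquad
\sigma^2_{y,g} \;=\; \lambda^2 \psi_g + \theta_g
```

Even with \lambda held equal across groups, group-specific \psi_g and \theta_g make the standardized loadings differ.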


Hi Dr. Muthen, I am working on cross-validating a second-order CFA with Mplus. I was wondering if you would be able to guide me through this process. Do you have any suggestions on the preferred order for imposing constraints on a 2nd-order CFA when you start with a fully nonconstrained model (intercepts & loadings free across groups, but means & intercepts of latents set to zero for identification purposes)? Thank you so much for your help, Vandita 


I would test the first-order factors first using the strategy shown in the Topic 1 course handout on the website. See multiple group analysis. Once measurement invariance is established for the first-order factors, I would test it for the second-order factor. 


I have tested for measurement invariance among the first-order factors. They cross-validated well, and now I am planning to do the second-order CFA. Are the steps in measurement invariance similar to those in a first-order CFA? I believe the means of the first order should be set to 0, is that right?
Step 1: fully nonconstrained model
Step 2: constrain factor loadings
Step 3: constrain intercepts and loadings
Step 4: constrain intercepts, loadings, and residual variances
Step 5: constrain intercepts, loadings, residual variances, and error variances
Step 6: constrain intercepts, loadings, residual variances, error variances, and covariances
Do these look correct? Thanks, Vandita 


Yes. 


Dear Prof. Muthen, I am running a MGCFA with a four-factor model where two factors have only one indicator. I have already checked for metric equivalence and got an acceptable model fit. In the next step, checking for scalar invariance, the modification indices suggest freeing the intercepts of two indicators (y8, y9), both loading on the same factor. If I do so I get this error message: "THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING PARAMETER 48". Parameter 48 is the alpha value for my latent construct perform. Would you have any suggestions how I can solve this problem? Here is my syntax:
MODEL:
trust BY y1 y2 y3 y4 y5;
y2 with y3; y1 with y2;
diffreg BY y6; y6@0;
gemidnt BY y7; y7@0;
perform BY y8 y9;
Model west: [y8 y9];
Thank you in advance! 


You cannot have the measurement intercepts for all (both) your factor indicators be free and also the factor mean. Fix the factor mean at zero for perform in the west group. 
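In terms of the syntax from the question, that amounts to something like the following group-specific block (a sketch using the factor and variable names from the post):

```
Model west:
  [y8 y9];       ! both intercepts of perform free in this group
  [perform@0];   ! fix the factor mean so the model stays identified
```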


thank you very much for you helpful response! 


Hi, I have a multigroup CFA model with UVI identification. I first estimated loadings of items freely in each group, then constrained them to be the same across the two groups, to test for support for measurement invariance. So, in the STANDARDIZED solution for the constrained model, all items have the same loading except for the first item for each factor, which shows up as 1.00 in the first group, with no significance associated with it, but as a real estimate with a significance associated with it for the second group. My question is: Why is the first item per factor set at 1.000 for the first group in the STANDARDIZED solution for the constrained model, especially if I used UVI and not ULI identification? For example, the standardized solution for the constrained model shows the following.
For boys:
WITHDRW BY
ALONE_D 1.000 0.000 999.000 999.000
NOTLK_D 0.482 0.068 7.087 0.000
SECRE_D 0.678 0.041 16.588 0.000
SHY_D 0.337 0.052 6.485 0.000
LACKE_D 0.509 0.043 11.712 0.000
SADDP_D 0.841 0.040 21.232 0.000
WITHD_D 0.711 0.044 16.161 0.000
For girls:
WITHDRW BY
ALONE_D 0.582 0.072 8.025 0.000
NOTLK_D 0.482 0.068 7.087 0.000
SECRE_D 0.678 0.041 16.588 0.000
SHY_D 0.337 0.052 6.485 0.000
LACKE_D 0.509 0.043 11.712 0.000
SADDP_D 0.841 0.040 21.232 0.000
WITHD_D 0.711 0.044 16.161 0.000 


Actually, I think I figured out the answer to my own question, so you can ignore the previous post! I think the answer is that even when constraining the loadings to be the same across the groups, I still need to have a * (an asterisk) for the first loading for each factor, in order for UVI and not ULI identification to be used! 

anonymous posted on Wednesday, March 21, 2012  9:30 am



I am testing measurement invariance of a measurement model across two different age groups. However, when I restrict the factor loadings to be invariant across groups using stratified, weighted, and clustered data (WLSMV estimator) with categorical ordinal response scales, I receive the following error: THE MODEL ESTIMATION TERMINATED NORMALLY. THE CHI-SQUARE COMPUTATION COULD NOT BE COMPLETED BECAUSE OF A SINGULAR MATRIX. THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING PARAMETER 136. THE CONDITION NUMBER IS 0.376D-15. In addition, I noticed that I obtain somewhat different fit index estimates depending on whether I use Mplus 5.2 or 6.1. Any reason? 


The difference between the fit indices is because in Version 6 a new method for the second-order chi-square adjustment for WLSMV, ULSMV, and MLMV was introduced, resulting in the usual degrees of freedom. Regarding the other problem, please send your output and license number to support@statmodel.com. 


I am trying to test for partial scalar invariance by freeing the intercepts on the items in 3 of my 6 factors, but I am getting the following error: "THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING PARAMETER 106." Here are my model statements:
MODEL:
f_infr by sidewalk@1 parkdcar grass_strip streetlights;
f_conn by intrsctn@1 altroute strght_st;
f_aest by trees@1 intrstng_thg natrlsight attrctv_bldg;
f_traf by traffic@1 trffcspd_slow trffcspd_fast;
f_pers by crime_high@1 walk_unsafed walk_unsafen;
f_acce by walkstores@1 walkplaces walktobus;
MODEL Massachusetts:
[sidewalk parkdcar grass_strip streetlights walkstores walkplaces walktobus traffic trffcspd_slow trffcspd_fast]; !intercepts allowed to vary
Do I need to fix the latent means @0 for these three factors in the Massachusetts group? 


When intercepts are free, factor means must be fixed at zero. See the Topic 1 course handout under Multiple Group. All of the inputs for testing measurement invariance are given. 
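Applied to the syntax in the question, the group-specific block would need lines along these lines (a sketch; only the factors whose indicator intercepts are freed must have their means fixed):

```
MODEL Massachusetts:
  [sidewalk parkdcar grass_strip streetlights
   walkstores walkplaces walktobus
   traffic trffcspd_slow trffcspd_fast];   ! intercepts free
  [f_infr@0 f_traf@0 f_acce@0];            ! fix those factors' means
```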

Hans Leto posted on Monday, April 23, 2012  11:44 am



Dr. Muthen. I am having problems testing measurement invariance with 2nd-order factors. I am following the procedure described in handout number 1. I do not know how to include a 2nd-order factor in the example described on slide 210. Could you provide me with more guidance? I describe an example (F3 is the 2nd-order factor): Model: f1 by y1-y5; f2 by y6-y10; F3 by f1 f2; [f1-f2@0] Model g2: f1 by y2-y5; f2 by y7-y10; F3 by f1 f2; [f1-f2@0] [y1-y10] This syntax gives me an error. Thank you in advance. 


I suspect the problem is that a secondorder factor is not identified unless it has three or more firstorder factors. 

Hans Leto posted on Monday, April 23, 2012  12:51 pm



Thank you, but it was just an example; the real one has more than three factors. Is the syntax correct? Fixing to 0 the first-order factors in both groups? It is just like the one shown in the slide. 

Hans Leto posted on Monday, April 23, 2012  12:58 pm



This will be more like the real one.
Model:
f1 by y1-y5; f2 by y6-y10; f3 by y11-y15; f4 by y16-y20;
F5 by f1 f2 f3; !2nd order just by f1-f3
[f1-f4@0] !all 1st-order fact. fixed to 0
Model g2:
f1 by y2-y5; f2 by y7-y10; f3 by y12-y15; f4 by y17-y20;
F5 by f1 f2 f3;
[f1-f3@0] !just 1st factor of the 2nd order fixed to 0
[y1-y20] 


It isn't clear if you get a syntax error or a modeling error. I'll address both. You may get a syntax error from your statements [f1-f3@0] !just 1st factor of the 2nd order fixed to 0 [y1-y20] because you don't end them with semicolons. On the other hand, what you are posting may not be what you use in your run. You will get a non-identification error because your second group frees up the intercepts for your y's, which means that the factor mean difference in the second-order factor cannot be identified. Leave out the statement [y1-y20]; and the default will give you the correct equality of these intercepts across groups. 

Hans Leto posted on Tuesday, April 24, 2012  2:33 am



Thank you for your answer. But it does not work; it is not a syntax error (sorry). It is an error about "THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED." I am quite new to testing invariance. My problem is specifying the 2nd-order factors, because I already tested it only for the 1st-order factors and it ran perfectly. I left out the [y1-y20], but it did not work. My questions would be: 1. Do I have to fix to 0 the 2nd-order factors in the general model, or only the 1st order (in my example I only fixed to 0 the 1st-order factors)? 2. In the specific group (g2), do I have to fix to 0 the 1st-order factors (f1-f3) of my 2nd order? 3. Do I not have to free up the intercepts for the items in g2 ([y1-y20])? I have tried all these but it still gives me the same error. I do not know what I am missing. Thank you very much. 


We can guide you better if you send your full output to Support. 


Hello: a quick question re: testing for configural invariance (equal form) that may have a simple explanation: when I test for equal form with both groups, my df and chi-square do not equal the sum of the df and chi-square when I test each group separately (using USEOBSERVATIONS). As an example, my df for each group is 13, yet my df in the model for equal form = 31, not 26. Is this due to a misspecification somewhere on my part? As an aside, rather than use a marker indicator I am fixing variances to 1, and I was wondering if this might make a difference, though I doubt it. Group sizes are 131 and 128. 


Which estimator are you using and which version of Mplus? 


I am using Mplus v. 6.12 and estimator = ML. 


Please send the relevant outputs and your license number to support@statmodel.com. 


Hello, I'm hoping to double check that I'm using the correct code for testing measurement invariance of some scales across different racial/ethnic groups. I've been using the video and handouts from Topic 1, but I'm trying to test invariance across 3 groups instead of 2. Would you mind confirming that I've got the correct code for the second test without invariance?
Model:
BE by m_a3 m_a31 m_a28 m_b50;
EE by m_a62; m_a62@0;
CE by m_a72 m_a74 m_a79 m_a80;
[BE@0 EE@0 CE@0];
Model AfAm:
BE by m_a31 m_a28 m_b50;
EE by ;
CE by m_a74 m_a79 m_a80;
[m_a3-m_a80];
Model Latino:
BE by m_a31 m_a28 m_b50;
EE by ;
CE by m_a74 m_a79 m_a80;
[m_a3-m_a80];
Thanks! Sarah 


I would remove EE BY in each group-specific MODEL command. With one indicator, you can't test for measurement invariance. 
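A sketch of one group-specific block with the single-indicator factor's BY statement dropped (variable names as in the post):

```
Model AfAm:
  BE by m_a31 m_a28 m_b50;
  CE by m_a74 m_a79 m_a80;   ! no EE BY line: with one indicator
                             ! there is nothing to free across groups
  [m_a3-m_a80];
```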


Thank you! 


Hello, I have a question concerning measurement invariance, specifically when testing for equal latent variances and latent means (i.e., population heterogeneity). If one chooses to freely estimate all indicators in the baseline models (identifying the model by setting the latent variances to 1), is it the case that one must use a separate baseline model, in which marker indicators are fixed at 1 and variances are freely estimated, if one wants to subsequently test for population heterogeneity (i.e., the baseline model must have the variances freely estimated to subsequently test for equal latent variances)? Thanks in advance! 


I think you are asking if it matters whether you set the metric of the factors by having a loading fixed at one or the factor variance fixed at one when you later compare structural parameters. If you fix the factor variances to one, you need to do this in only one group, so the test of whether the factor variances are different across groups is a test of factor variances fixed at one in one group and free in the others versus factor variances fixed at one in all groups. 


Thanks Dr. Muthen. I will try to articulate my question more clearly: In my case I have a 2-factor model. I identify the model by setting the factor variances to 1 rather than using marker indicators, for both groups. In other words, in the equal form solution my variances are already fixed to 1 in both groups (to identify the model), so no meaningful comparison could be made via the chi-square test of model fit for the subsequent test of invariant factor variances across groups, right? So to test for factor invariance, I would use a baseline model instead where I identified the model by using marker indicators rather than variances? 


In multiple group analysis, you need to fix the factor variances to one in only one group. They can be free in the other groups. A meaningful test of whether the variances differ across groups is a test of factor variances fixed at one in one group and free in the others versus factor variances fixed at one in all groups. 
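A sketch of the two models being compared, for a hypothetical one-factor fragment; the chi-square difference between them tests equality of the factor variances:

```
! Model A (less restrictive): variance fixed at 1 in group 1 only
MODEL:
  f1 BY y1* y2 y3;
  f1@1;
MODEL g2:
  f1;              ! frees the factor variance in group 2

! Model B (more restrictive): drop the MODEL g2 statement so that
! f1@1 holds in both groups, then compare A vs. B
```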


Hi, I am working on a multigroup CFA for testing measurement invariance across 5 samples. The hypothesized model is a second-order factor model. Aiming at testing metric invariance, the following syntax failed to work. What is the problem? Thank you for your advice. Aaron
DATA: FILE IS data.prn;
VARIABLE: NAMES ARE e1-e3, m1-m4, d4-d5, d9, g;
USEVARIABLE ARE e1-e3, m1-m4, d4-d5, d9, g;
GROUPING IS g (1=g1, 2=g2, 3=g3, 4=g4, 5=g5);
MODEL:
D BY d4*, d5, d9; D@1; [D@0];
E BY e1*, e2-e3; E@1; [E@0];
M BY m1*, m2-m4; M@1; [M@0];
G BY D E M;
Model g1: [e1-e3]; [m1-m4]; [d4-d5]; [d9]; !allow the intercepts to differ
Model g2: [e1-e3]; [m1-m4]; [d4-d5]; [d9]; !allow the intercepts to differ
Model g3: [e1-e3]; [m1-m4]; [d4-d5]; [d9]; !allow the intercepts to differ
Model g4: [e1-e3]; [m1-m4]; [d4-d5]; [d9]; !allow the intercepts to differ
Model g5: [e1-e3]; [m1-m4]; [d4-d5]; [d9]; !allow the intercepts to differ
OUTPUT: STANDARDIZED; 


When the intercepts are free, all factors means must be zero. The mean of g is not fixed at zero. 
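In the posted input that means adding one line for the second-order factor (a sketch, parallel to the [D@0], [E@0], [M@0] statements already there):

```
G BY D E M;
[G@0];   ! second-order factor mean fixed at zero in all groups,
         ! since the intercepts are freed group by group
```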

Fred Danner posted on Friday, February 15, 2013  11:33 am



Hi, I am testing second-order measurement invariance, using MLR estimation. The unconstrained model gives reasonable results. The model constraining factor loadings runs fine but cuts the N in each group in half! Why??
UNCONSTRAINED
Model:
f1 by x1-x6;
f2 by x7-x10;
f3 by x11-x13;
f4 by f1 f2 f3;
[f1-f3@0];
Model g2:
f1 by x2-x6;
f2 by x8-x10;
f3 by x12-x13;
[x1-x13 f1-f4@0];
FACTOR LOADINGS CONSTRAINED
Model:
f1 by x1-x6;
f2 by x7-x10;
f3 by x11-x13;
f4 by f1 f2 f3;
[f1-f3@0];
Model g2:
[x1-x13 f1-f4@0]; 


We have to see the 2 outputs to be able to tell. Please send to Support. 

Fred Danner posted on Friday, February 15, 2013  1:57 pm



I have done so, but please do not waste your time poring over it  I am embarrassed to say that I inadvertently listed one extra variable on my input list. Removing this variable fixed the problem. 


Dear Drs. Muthen, I have two questions about using TYPE=CLUSTER in data sets that have repeated observations of the same individuals. 1) In one data set, I have measures at two time points, six years apart. I am entering data into Mplus in the long format and specifying my DV as a latent variable, which is regressed on age. I use TYPE=COMPLEX and cluster on subject ID. Is there a name for this sort of analysis? 2) In another data set with 1 to 7 repeated measures of the same individuals, I wanted to compare age groups' (adolescent vs. adult) means on a given variable, even though they are the same individuals. I ran a simple regression with the age groups entered as dummy variables. Again, I imported the data in long format and used TYPE=CLUSTER to cluster on subject ID. Is there any reason that it would be incorrect to draw inferences about the mean differences between the age groups based on this regression? 


P.S. I did not mean to write "TYPE=CLUSTER" in the first sentence of the above post, but rather TYPE=COMPLEX (in conjunction with a CLUSTER command). 


1. I know of no special name for this model. It is a latent variable model. 2. This sounds okay. 

Tom Booth posted on Saturday, March 09, 2013  10:42 am



Linda/Bengt, I am trying to fit a second-order invariance model with categorical indicators using the delta method for 2 groups. I was interested in following the suggestion of Chen, Sousa and West (2005) and testing invariance in the following order:
1: Configural
2: 1st-order metric (loadings)
3: 2nd-order metric (loadings)
4: 1st-order scalar (thresholds)
5: 2nd-order scalar (intercepts)
Where the following constraints are used across groups in each model:
1: First- and second-order loadings free in both groups (first item/factor loadings fixed to identify). Item thresholds free in both groups. First- and second-order factor means fixed to 0 in both groups. Scale factors fixed at 1 in both groups.
2: As (1) but with first-order loadings constrained equal.
3: As (2) but with second-order loadings constrained equal.
4: As (3) but with item thresholds constrained equal, first-order factor means free in group 2, and scale factors free in group 2.
5: As (4) but with the second-order factor mean free in group 2 and first-order factor means constrained equal.
I am not sure if this sequence is correct and, after noting discussion here and notes on the Mplus site on the Millsap and Tein (2004) paper, I fear I have missed something crucial. Any guidance on the matter would be much appreciated. Tom 


There are different approaches for binary and polytomous items. With binary items, Step 2 adds scale factor differences across groups, which makes the model not identified when the thresholds are different. With polytomous items, the Millsap-Tein approach can be followed. 

Tom Booth posted on Saturday, March 09, 2013  11:35 am



Hi Bengt, Thank you for the very swift response. Just for clarity, my items are polytomous. From your response, I take it that in principle there is no issue following the Chen, Sousa and West sequence, so long as the identification constraints of Millsap-Tein are followed, and that these are different from the basic model specs I note above? Best Tom 


Yes, the steps you list look fine. 

Tom Booth posted on Sunday, March 10, 2013  3:35 am



Thanks Bengt. I had thought from the discussions that with the categorical nature of the data and use of WLSMV, loadings and thresholds needed to be considered together, not split as in the above stages. Tom 

Tom Booth posted on Sunday, March 10, 2013  4:52 am



Bengt, Sorry, I have a further follow-up question. Within the sequence of models above, when thresholds are constrained across groups, scale factors are freed in the second group. I have 3 questions on this: 1. Is this correct? 2. Is this necessary? 3. If one then subsequently releases thresholds (partial invariance), do the associated item scale factors need to be fixed again? Best Tom 


Re your 3:35 post: Loadings and thresholds are considered together in the binary case. Re your 4:52 post: 1. Scale factors are needed whenever you make comparisons of the factors, that is, in the metric and scalar cases. 2. Yes, because scale factors contain 3 things: loadings, factor variances, and residual variances. So even when loadings are invariant, scale factors won't be; in particular, you want to take into account the factor variance variation across groups. 3. You fix scale factors in the configural case because in that case you are not comparing factors across groups. 

Tom Booth posted on Sunday, March 10, 2013  1:49 pm



Thank you very much. 


Hello Linda and Bengt, I am doing a 4-group test of measurement invariance with ordered categorical items (4-point response scale). The measure is invariant on loadings, but not on thresholds. I would like to examine specific contrasts (ethnicity within gender and gender within ethnicity). Do you know of any problems with using the MODEL CONSTRAINT command to simultaneously examine threshold differences across items per my contrasts of interest? I am thinking it may simplify the analyses. I could not find an example in the Mplus examples or the literature... Thanks in advance for your help. Michelle L. 


It makes sense to do such testing. 
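One way to set this up (a hypothetical sketch; the group names g1/g2 and the item u1 are placeholders) is to label the thresholds of interest in the group-specific MODEL commands and form the contrast in MODEL CONSTRAINT:

```
! label the first threshold of item u1 in two of the four groups
MODEL g1: [u1$1] (t1_g1);
MODEL g2: [u1$1] (t1_g2);

! estimate the threshold difference as a new parameter;
! its ratio to its standard error gives a Wald z-test
MODEL CONSTRAINT:
  NEW(d1);
  d1 = t1_g1 - t1_g2;
```

The same pattern can be repeated for each item and contrast of interest.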


Hi, I'm wondering how to specify a multiple-group model where some groups have a subset of the total items in a scale. A simplified example results in errors: MODEL: F1 BY V1 V2 V4 V5 (1); MODEL G2: F1 BY V3 (1); with the errors: 'Variable is uncorrelated with all other variables: V3' and 'Group G1 does not contain all values of categorical variable: V3'. 


See our FAQ: Different number of variables in different groups 


Fantastic, Bengt. Thank you! Leslie 


Hi again. I have a follow-up to the last question. I followed the FAQ and used: DEFINE: IF (GRP EQ 2) THEN VAR4 = _MISSING; MODEL: f1 BY var1 var2 var3 var4; [f1@0]; f1@1; var4 (2); MODEL GRP2: f1 BY var1 var2 var3 var4; [f1]; f1; var4 (2); This still produces the error that var4 has no non-missing values in group 2. Am I missing a step? Thanks, Leslie 


Try VARIANCES = NOCHECK in the DATA command and if that doesn't resolve it, send files to Support. 

marlies posted on Tuesday, October 15, 2013  6:49 am



Dear Linda and Bengt, My question is the following, which has been asked before: I would like to test for measurement invariance using the difference in McDonald's noncentrality index (NCI), as recommended by Meade et al. (2008) in "Power and Sensitivity of Alternative Fit Indices in Tests of Measurement Invariance", J Appl Psych. You (Linda) replied that Mplus does not give an NCI index. However, since my sample is very big, I really would like to report it next to the CFI. Do you have any formula or idea for how I can derive the NCI index (maybe from other given fit indices)? Thank you in advance for your answer! Kind regards, Marlies 


I would Google this to find the formula and then see if the information for computing it is available in the Mplus results. You may want to ask on a general discussion forum like SEMNET. 

marlies posted on Wednesday, October 16, 2013  12:31 am



Thank you! Kind regards, Marlies 

marlies posted on Tuesday, October 29, 2013  9:48 am



For all Mplus users who would like to know McDonald's NCI as well: I figured out how to calculate it (by hand) from the Mplus output. The formula is as follows: exp(-0.5*((chi-square of the target model - DF of the target model)/(N - 1))), where 'exp' is the exponential function (you can calculate this using, for example, this website: http://keisan.casio.com/exec/system/1223447896). You calculate the formula twice: once for your Configural Invariance model and once for your Measurement Invariance model. Then you subtract the value of the CI model from the value of the MI model. This is the final McDonald's NCI difference value you can report. The cutoff value for an invariant model differs per number of factors and items. In the article of Meade and Johnson (2008) you can find on page 586 a table with these cutoff points. (Meade, AW & Johnson, C (2008). Power and sensitivity of alternative fit indices in tests of measurement invariance. Journal of Applied Psychology, 93, 568-592.) 
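In symbols, the hand calculation described above is (with chi-square and degrees of freedom taken from the fitted model and N the sample size):

```latex
\mathrm{McNCI} \;=\; \exp\!\left(-\tfrac{1}{2}\,\frac{\chi^2 - df}{N-1}\right),
\qquad
\Delta\mathrm{McNCI} \;=\; \mathrm{McNCI}_{\text{MI model}} \;-\; \mathrm{McNCI}_{\text{CI model}}
```

Note the minus sign inside the exponential: a well-fitting model (chi-square close to df) gives an NCI near 1, and worse fit pushes it toward 0.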


Thank you so much for sharing that information. 

Ian Koh posted on Friday, December 13, 2013  12:26 am



Dear Bengt and Linda, I ran a test for factorial invariance (six-factor structure, with partial measurement invariance) across two groups following the steps mentioned in Byrne (2011). Out of curiosity, I'd like to ask: are the configural model's parameters estimated using the whole sample, or are they estimated from the group samples? Thanks for your help. Best, Ian 


The model for each group is estimated using the data from that group. 

Ian Koh posted on Monday, December 16, 2013  4:38 pm



Thanks Linda! This question follows from my previous post (dated Friday, 13 December 2013). Before fitting the configural model, I first fitted two baseline models: one for 5-year-olds and one for 6-year-olds. The 5-year-old group didn't require any modifications to the original model specification; however, the 6-year-old group required one extra cross-loading, else there would've been a non-positive definite matrix message. My configural model converged without any issues when including the extra cross-loading for the 6-year-old group (as expected). However, the configural model also converged without any non-positive definite matrix message when the cross-loading was removed. I also tested for factorial invariance over gender using the same model specification, encountering the same issue for the gender baseline models. (Namely, that the female group required one extra cross-loading so that its solution would not have a non-positive definite matrix error, while the male group required no modifications.) What puzzles me is that this non-positive definite matrix issue was replicated in the gender configural model simply by removing the extra cross-loading for the female group, while specifying the cross-loading resulted in an admissible solution. Why do these two configural models behave differently? Thanks again, Ian 


Please send outputs and data if possible. Let's focus on the 5- vs. 6-year-old runs, so send the 6-year-old separate run with and without the cross-loading and the two-group run of 5- and 6-year-olds with and without the cross-loading. 

Ellyn L. posted on Thursday, February 06, 2014  12:25 pm



Drs. Muthen and Muthen, I am conducting a multiple group analysis and need to assess mean invariance. I have written syntax that runs successfully, but I'm not sure that I'm including (all of) the correct code. I have consulted both the Mplus user guide and blog posts, and I am looking for some confirmation/input on the syntax I am using to assess mean invariance. I have included the MODEL input information below. Thanks so much. ANALYSIS: ESTIMATOR = MLR; MODEL: I ON M So; Su ON I P N; Sh ON I Su So P N; D ON Sh; E ON Sh; M WITH So P N; So WITH P N; P WITH N; MODEL B: [ME @0]; 


If you want to test if means are equal across groups, compare the model where they are free across groups to the model where they are held equal across groups. In a conditional model, it is the intercepts that are model parameters not the means. 


Drs. Muthen and Muthen, I am conducting a multiple group confirmatory factor analysis with three comparison groups. The observed variables are categorical. I am using the Theta parameterization. The focus of the analysis is to test for construct invariance between the three groups. I currently have the factor variance, factor loadings and thresholds set to be estimated and equal between all groups (varying within groups, constrained between groups). I would like to do the same for the residual variances. However, as you know, when I use the Theta parameterization, the residual variances for the omitted group are set to 1. This means that to estimate the sought model of construct invariance, I must set the residual variances in the two comparison groups to 1. I have done this. My question is: When I set the comparison group residual variances to 1, the values for the "Est./S.E." for the residual variances for the two comparison groups reads "Infinity." Is this a problem? Is there a fix for this or a work around? The output contains no fatal error reports. 


Please send your output and license number to support@statmodel.com. 


Drs. Muthen and Muthen, I am running a multigroup CFA on the nutrition self-efficacy scale and want to test measurement invariance across 9 countries. I have followed the handout/video for Topic 1 and written the following syntax: Usevariables are Q3r1 Q3r2 Q3r3 Q3r4 Q3r5 weight; Grouping is Country (1 = Norway 2 = Germany 3 = Spain 4 = Greece 5 = Poland 6 = UK 7 = Irel 8 = NL 9 = Portugal); weight is weight; Analysis: estimator = MLR; Model: NSE BY Q3r1-Q3r5; [NSE@0]; Model Norway: NSE BY Q3r2-Q3r5; [Q3r1-Q3r5]; Output: standardized; modindices (3.84); My question is how I add in the other countries. Should I continue to add the syntax for Model Germany, Model Spain, etc., the same as Model Norway? Thanks Audrey 


See the Version 7.1 addendum on the website with the user's guide. There are convenience features for testing measurement invariance across groups. You may find them helpful. 
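For reference, with Version 7.1 or later the convenience feature amounts to adding one line to the ANALYSIS command (a sketch using the variable names from the post above); Mplus then fits all three models and prints the difference tests automatically:

```
ANALYSIS:
  ESTIMATOR = MLR;
  MODEL = CONFIGURAL METRIC SCALAR;  ! fits and compares all three models
MODEL:
  NSE BY Q3r1-Q3r5;
```

With this approach no group-specific MODEL commands are needed for the invariance testing itself.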


I have seen the convenience features for 7.1, but I am currently working with Version 7. 


You would do the same thing you have done for Norway for each group. 


Following on from my previous question, a colleague suggested this: test for the equality of the loadings, still allowing item and factor intercepts to vary; then test for the equality of the item intercepts, still allowing factor intercepts to vary; finally, test for the equality of the factor intercepts. How do I allow factor intercepts to vary across countries, and how would this look in syntax? MODEL: SEffic BY Q3r1@1; SEffic BY Q3r2* (p2); SEffic BY Q3r3* (p3); SEffic BY Q3r4* (p4); SEffic BY Q3r5* (p5); [Q3r1*] (p6); [Q3r2*] (p7); [Q3r3*] (p8); [Q3r4*] (p9); [Q3r5*] (p10); [SEffic@0]; Q3r1 WITH Q3r2* (p12); Q3r1*; Q3r2*; Q3r3*; Q3r4*; Q3r5*; SEffic*; MODEL 2: SEffic BY Q3r1@1; SEffic BY Q3r2* (p2); SEffic BY Q3r3* (p3); SEffic BY Q3r4* (p4); SEffic BY Q3r5* (p5); [Q3r1*] (p6); [Q3r2*] (p7); [Q3r3*] (p8); [Q3r4*] (p9); [Q3r5*] (p10); [SEffic*]; Q3r1 WITH Q3r2* (p12); Q3r1*; Q3r2*; Q3r3*; Q3r4*; Q3r5*; SEffic*; Many Thanks, Audrey 


See the Topic 1 course handout on the website where the input files for testing for measurement invariance are shown under multiple group analysis. 

Amy Walzer posted on Tuesday, August 26, 2014  7:35 pm



Hello, I have tested measurement invariance using the new convenience features in Mplus (i.e., ANALYSIS: MODEL = CONFIGURAL METRIC SCALAR;) to see if there is measurement invariance in my measure between men and women. Now, I'd like to go on to test other types of invariance outlined by Steinmetz et al. (2009): 1) invariance of error variances; 2) invariance of factor variances; 3) latent means; 4) factor covariances. How do I go about doing this? When I try to build the syntax so that it constrains the necessary elements (e.g., error variances, the variance of each of the factors) in my second group (women) to equal my first group (men), I get an error stating "Model did not terminate normally. Refer to TECH9 output for more information." Thanks much. 


Please send the output and your license number to support@statmodel.com. 

David Vachon posted on Saturday, January 03, 2015  10:33 am



Hello, I am a novice Mplus user and I am trying to test for measurement invariance in a model with censored (from below) indicators. I have 12 child maltreatment indicators loading on 4 latent variables. The User Guide has a nice step-by-step description of measurement invariance procedures for continuous and categorical outcomes (pp. 484-486), but nothing for censored outcomes. I have been trying to base my models on these recommendations, but I do not know enough about Mplus or censored analysis to feel confident that I am on the right track. Could you outline the steps used to test for measurement invariance with censored outcomes (i.e., what should I fix or free at each step)? I have been using the WLSMV estimator. Kind regards, David 


You would use the same approach as for continuous outcomes. 


Dear Muthéns, Considering invariance testing for a unidimensional model: the estimation terminated normally and the scalar against configural test returned p = .0793. For the configural and scalar models the chi-square p-values are higher than 0.05 and RMSEA is lower than 0.06; however, CFI and TLI were below 0.95, dropping to 0.87 in the scalar-level test. I found that "... CFI will keep decreasing as a model becomes more restrictive" in invariance testing, and the authors concluded that it would not be useful for such a purpose (Hong et al., 2003, Educational and Psychological Measurement, Vol. 63, No. 4, August 2003). Would the same apply to TLI? Best wishes, Hugo 


You may want to ask this general question on SEMNET. 

Eric Deemer posted on Wednesday, February 18, 2015  11:16 am



Hello, I'm using the new convenience feature to test the invariance of a CFA model across 3 groups. How does one know which groups are being compared when there are more than 2 groups? Would I use the ALIGNMENT option here? Also, with the convenience feature, is it no longer necessary to calculate the chi-square difference value by hand? The website says chi-square difference testing is carried out automatically with Version 7.1, but I still get a warning in my output saying that the chi-square value cannot be used in the regular way. Eric 


All 3 groups are being compared. There is no need for alignment. This is a warning that is always printed. It does not apply to you in this case. 

Hervé CACI posted on Monday, March 16, 2015  4:32 am



Dear all, I came across an error message while conducting measurement invariance testing. I was testing a bifactor model for variance invariance between two age groups. After successfully testing for uniqueness invariance, I discarded the following line in the 2nd group: Inatt*; Hyper*; Imp*; g*; ! 3 specific factors and 1 g-factor. This is the only modification I made. Any idea? THE MODEL ESTIMATION TERMINATED NORMALLY. THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING THE FOLLOWING PARAMETER: Parameter 73, Group YOUNG: G BY I10 (equality/label). Where the 'YOUNG' group is the first group... Thank you in advance. Hervé 


Please send the output and your license number to support@statmodel.com. 


Hello again. I am still having some problems with multi-group invariance testing. I am testing a multi-country (6) model and all goes fine except in one country. There I find that 2 latent variables have a correlation higher than 1. That means they are seen as the same in that specific country but not in all the others. Is there a way I can run a model with one of the LVs dropped JUST for that country (group)? Alternatively, although I am not sure it is acceptable (or will work), could I constrain the two LVs to have a correlation just below 1 (0.99) by, e.g., F1 WITH F2@0.99? Thanks Raffaele 


Sounds like that group is better suited for one factor, which means it has nothing invariant with the other groups, so it should not be part of the multigroup analysis. I would not constrain the factor covariance that way. If other countries also have high factor correlations, you might consider ESEM or BSEM multiple-group analysis to allow cross-loadings, which tend to reduce factor correlations (see our website). 


Hello all, I am following up on the discussion of how to calculate the NCI based on Mplus output. I am trying to use the formula provided by Marlies, and would like to ask for help regarding the calculation of exp... What values am I supposed to enter on the website that was recommended? I am sorry if this is not a question suitable for this Mplus discussion, but I found nowhere else to ask... would really appreciate any help! 


I think this question is more appropriate for a general discussion forum like SEMNET. 

Tahir Wani posted on Friday, August 07, 2015  3:12 am



I got the following MODEL FIT INFORMATION while testing invariance:

Invariance Testing

Model        Number of Parameters   Chi-square   Degrees of Freedom   P-value
Configural   392                    2604.345     1958                 0.0000
Metric       356                    2664.537     1994                 0.0000
Scalar       320                    2732.325     2030                 0.0000

Models Compared             Chi-square   Degrees of Freedom   P-value
Metric against Configural    61.251      36                   0.0054
Scalar against Configural   129.983      72                   0.0000
Scalar against Metric        68.576      36                   0.0009

Is it right to assume that there is no invariance across groups in this case? In other words, is it right to assume that when the p-value is significant there is no invariance? 


Yes. Although, more specifically, there is not full invariance. There could still be partial invariance (for all but a few items). 

milan lee posted on Friday, August 07, 2015  8:15 pm



Hello Dr. Muthen, When testing the partial measurement invariance of factor loadings across three groups, I found that it was achieved after I freed some loadings in each of the three groups. Specifically, 3 indicators were relaxed in the first group, 2 in the second group, and 3 in the third group. Moreover, these freed indicators are not identical across groups. Can I still say that partial metric invariance holds? Thank you! 


It is equalities across groups in, say, loading parameters that you should be relaxing. So I don't understand your statement that they are "not identical across groups". Unless you mean that some parameters are unequal across 2 of the 3 groups and some others are unequal across 2 other groups. 

Tahir Wani posted on Saturday, August 08, 2015  11:17 am



Dr Muthen, After the invariance measurement I want to do a multi-group analysis, specifically to find out group differences across all paths. Does Mplus provide a difference test with a specific input command, or does it need to be computed manually by chi-square difference testing? 

milan lee posted on Saturday, August 08, 2015  1:49 pm



Thank you, Dr. Muthen! Yes, I meant that. So my questionnaire has 15 items, and partial metric invariance across 3 groups held when: 4 items/indicators were freed only in group 2 (resulting in their loadings in this group being different from the other two groups) and another 2 items were freed only in group 3. Most relevant literature considers that partial invariance holds as long as the loadings of 2 items remain equivalent. But are there any criteria for judging partial metric invariance under the circumstances I encountered above? Again, thank you very much! 


Answer to milan lee: The statistical rule is that you want an identified model; the output will tell you if it isn't. The substantive rule is that you want to have sufficiently many equalities that you believe you are measuring the same thing. 


Answer to Tahir Wani: You need to test it yourself; either by chi2 diffs or by Model Test. 


Hello, I'm testing measurement invariance across gender groups, and statistical fit (CFI, RMSEA) is better in the constrained models. I haven't seen that before, so I don't know if it's OK. If the difference (+) is bigger than the cutoff criteria, should I worry? Or is it OK if the constrained models fit better, no matter by how much? Thank you in advance, Manuel. 


Please send the outputs and your license number to support@statmodel.com. 


A warning appears in my output file: "MODINDICES option is not available when performing measurement invariance testing with multiple models with the MODEL option of the ANALYSIS command". I don't understand it, because I've used the MODINDICES command while performing measurement invariance testing before and it worked. My input is: VARIABLE: Names are v6 v394 v395 v396 v397 v398 v399; usevariables are V394 V395 V396 V397 V398 V399; Grouping is V6 (2=Belgium 3=Netherlands 4=Germany_West 5=Italy 6=Luxembourg 7=Denmark 8=Ireland 9=Great_Britain 10=Northern_Ireland 11=Greece 12=Spain 13=Portugal 14=Germany_East 16=Finland 17=Sweden 18=Austria 19=Cyprus 20=Czech_Republic 21=Estonia 22=Hungary 23=Latvia 24=Lithuania 25=Malta 26=Poland 27=Slovakia 28=Slovenia 29=Bulgaria 30=Romania 31=Turkey); ANALYSIS: model = configural metric scalar; MODEL: FOR by V396 V397 V398; AG by V394* V395 V399; AG@1; OUTPUT: stand TECH9; modindices; 


They are not available when you test for measurement invariance using the following options: model = configural metric scalar; 

Tahir Wani posted on Monday, October 12, 2015  6:26 am



Dear Dr Muthen, I found that my dataset was not normal, so I used the MLR estimator. I checked the model for invariance and found it invariant at the configural, metric and scalar levels for all 5 demographic variables I had used. Now I ran the model, e.g., for gender (male and female) and got the measurement and structural paths for both males and females. I am now confused: should I report these paths and show how they differ for the other group, or do I have to do a chi-square difference test for the paths? If so, can we do that with MLR, and can you please guide me on how to do it? I am familiar with ML and AMOS, but since I am new to Mplus I have got no clue. Thanks 


If you want to compare the paths across groups, you can do chi-square difference testing or use the Wald test of MODEL TEST. How to do difference testing using MLR is described on the website. See How-To in the left column. 
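The website procedure referred to here (Satorra-Bentler scaled chi-square difference testing) uses the printed MLR chi-square values T and scaling correction factors c of the nested model (subscript 0) and the comparison model (subscript 1), with degrees of freedom d:

```latex
cd \;=\; \frac{d_0 c_0 \;-\; d_1 c_1}{d_0 - d_1},
\qquad
T_{Rd} \;=\; \frac{T_0 c_0 \;-\; T_1 c_1}{cd}
```

The scaled difference statistic T_Rd is then referred to a chi-square distribution with d_0 - d_1 degrees of freedom.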

Masa Vidmar posted on Monday, October 12, 2015  2:12 pm



I am running a multiple-group CFA with two factors. I did a weak invariance test across two groups (languages). I did not find support for weak invariance (only configural). Now I would like to test partial invariance by constraining only one loading at a time; I would like to examine whether all indicators are a problem or possibly just one. Is that OK to do? Also, I do not know how to write an input for this (for constraining only one loading). Can you help me? Below is the extract from the input for testing weak invariance: model SLO: pips_lit by Writing@1; by iar (1) by letters_p (2) word read_p (3); pips_mat by sums@1; by numbers (4); by math (5); [Writing] (100); [iar]; [letters_p]; [word]; [read_p]; [sums] (200); [numbers]; [math]; model GER: pips_lit by Writing@1; by iar (10) by letters_p (20) word read_p (30); pips_mat by sums@1; by numbers (40); by math (50); [Writing] (100); [iar]; [letters_p]; [word]; [read_p]; [sums] (200); [numbers]; [math]; 


To add an invariant loading, say e.g. BY iar (1); in Model GER. 
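Concretely, with the labels from the input above: giving a loading the same label in both group-specific MODEL commands holds that one loading equal across groups while all other loadings stay free. A sketch for the iar loading:

```
MODEL SLO:
  pips_lit BY iar (1);   ! same label (1) in both groups ...
MODEL GER:
  pips_lit BY iar (1);   ! ... so this one loading is constrained equal
```

Repeating the run with each loading constrained in turn shows which indicators are responsible for the lack of weak invariance.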

Tahir Wani posted on Tuesday, October 13, 2015  3:31 am



Dear Sir, Yes, exactly, I want to compare the paths across groups, but the thing is that I haven't hypothesised a difference on any particular path; in other words, I want to perform the test on all the structural paths available. So I am not sure what a constrained or nested model means here and which one will be the baseline model. If you can, please explain or help me with this matter. Regards 


You can have all paths free and then constrain one to be equal at a time. Or you constrain them all to be equal. This choice must be yours. 


Hello, I am trying to test measurement invariance across several cultural groups (about 11 countries). I watched the Topic 1 video and see that most of the discussion is centered on testing invariance for two groups (male/female). Are there any issues with testing invariance across several groups, and if not, do you recommend using MIMIC or multiple-group CFA? Thank you. 


Several methods for many groups are shown at our Measurement Invariance page: http://www.statmodel.com/MeasurementInvariance.shtml 

Jinni Su posted on Friday, October 30, 2015  4:08 pm



Dear Dr. Muthen, My colleagues and I ran multigroup CFA models to evaluate measurement invariance across race (European American vs African American). And reviewers commented that we may have established invariance across both race and income/SES given the possible confound between race and income/SES. Do you have any advice on how to tackle this issue? Is there a way to evaluate measurement invariance across race while controlling for SES? Thanks, Jinni 


You may want to raise this question on SEMNET. 

Ali posted on Wednesday, February 03, 2016  12:17 pm



I have a few questions about CFA and measurement invariance across 11 groups. First, I ran a one-factor model with four nominal indicators 11 times, but the output didn't have values for CFI, TLI, or RMSEA. Is this because of the nominal indicators? My second question is: how can I do measurement invariance testing for nominal variables? I tried MODEL = CONFIGURAL METRIC SCALAR; however, it doesn't work for nominal variables. 


Q1. Yes. Try TECH10. Q2. You would have to specify invariance yourself. 

Ali posted on Thursday, February 04, 2016  6:48 am



I have tried TECH10 in the OUTPUT command, but it still did not show CFI, TLI, and RMSEA. As for testing measurement invariance with nominal indicators, I am not sure if my code is specified correctly in the MODEL command. Also, when I run the code, it shows me "*** ERROR Group AUS has 0 observations. *** ERROR Group CAN has 0 observations. *** ERROR Group GBR has 0 observations." VARIABLE: NAMES ARE CNT u1-u4; USEVARIABLES ARE u1-u4; GROUPING IS CNT (1=HK 2=JPN 3=KOR 4=QCN 5=SGP 6=TAP 7=AUS 8=CAN 9=GBR 10=NEZ 11=USA); NOMINAL ARE u1-u4; MISSING ARE ALL (79); ANALYSIS: ESTIMATOR=MLR; MODEL: f1 BY u1-u4; MODEL JPN: f1 BY u1-u4; MODEL KOR: f1 BY u1-u4; MODEL QCN: f1 BY u1-u4; MODEL SGP: f1 BY u1-u4; MODEL TAP: f1 BY u1-u4; MODEL AUS: f1 BY u1-u4; MODEL CAN: f1 BY u1-u4; MODEL GBR: f1 BY u1-u4; MODEL NEZ: f1 BY u1-u4; MODEL USA: f1 BY u1-u4; 


Please send the output, data set, and your license number to support@statmodel.com. 


Hello, I am conducting 2-group measurement invariance (MI) testing on a model with three latent factors and 13 continuous indicators. I am using MLR as the estimator because of non-normality. I have used the MODEL IS CONFIGURAL METRIC SCALAR command, and at least partial scalar MI is supported. Now I would like to test strict MI. Hence these questions: 1. Could strict MI be tested using the MODEL IS SCALAR command, adding restrictions for invariant variances across groups? 2. If yes, do I just add e.g. y1 (1); y2 (2); ... under MODEL to keep the variances equal across groups? 3. Is the chi-square test provided with the MODEL IS CONFIGURAL METRIC SCALAR command already corrected for the MLR estimator, or do I calculate the SB-scaled chi-square by hand? My interpretation of the user guide is that the scaling correction is carried out automatically. Thank you! 


1. I do not think so but you can try. 2. Yes. These are residual variances in the factor model. 3. Yes. 
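A sketch of point 2 with hypothetical indicator names y1-y5: in a multiple-group run where loadings and intercepts are already held equal (the Mplus defaults), adding equality labels for the residual variances in the overall MODEL command imposes strict invariance:

```
MODEL:
  f1 BY y1-y5;   ! loadings and intercepts equal across groups by default
  y1 (r1);       ! a label in the overall MODEL applies to all groups,
  y2 (r2);       ! so each residual variance is held equal across groups
  y3 (r3);
  y4 (r4);
  y5 (r5);
```

This scalar-plus-residual-equalities model can then be compared against the scalar model with a chi-square difference test.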

Paula Vagos posted on Wednesday, March 30, 2016  2:41 am



Dear Doctor Muthens, I am testing measurement invariance of a self-report instrument that uses a five-point Likert-type scale. The data are not multivariate normal, so I am using the MLR estimator. I tested configural, then metric, and then scalar invariance, but I got the feedback that because I am using ordinal variables, I should only test for configural and scalar invariance. I tried reading about this (for example, Millsap, 2004, at http://ibg.colorado.edu/cdrom2012/boomsma/FactorAnalysis/PracticalMeasurementInvarianceOrdinal/Literature/Millsap2004.pdf) but still find it confusing. I was wondering if you had an opinion on this and/or could point me to some relevant readings on the topic. Thank you! 


It is a bit confusing. Millsap's book on measurement invariance describes this. To quote an email from him: "The invariance constraints on the thresholds in my book on 5.19 are meant to apply to the configural case. Once metric invariance is imposed, one can actually release some constraints on the thresholds and still achieve identification." For this reason it may be safer to only test for configural and scalar invariance. 

Paula Vagos posted on Thursday, March 31, 2016  2:40 am



Thank you, Doctor Bengt. It is actually very confusing for me... I would dare ask you a follow-up question. What I should do, then, is: 1. Test for configural invariance with all thresholds constrained to be equal and loadings free. 2. Test for scalar invariance with all loadings and thresholds constrained to be equal. 3. Free one threshold at a time in trying to achieve a non-significant chi-square difference? Again, thank you for any help you might give me on this subject! 


I would use our automatic MODEL = CONFIGURAL SCALAR approach discussed in our UG. For scalar, you can then go back and free parameters that have large modification indices. 


Thank you again, Dr. Bengt. I wasn't aware of this update. Still, I had understood that Millsap suggested constraining the thresholds/intercepts when testing for configural invariance, whereas configural invariance as calculated using the MODEL = CONFIGURAL SCALAR option does not apply this constraint. So, if I am understanding correctly, you are suggesting comparing the completely free model (i.e., configural) with the scalar model, disregarding an eventual threshold constraint at the configural level? Thank you for your attention and help! 


Yes. 

Pia H. posted on Monday, April 04, 2016  6:52 am



Hello, I just did measurement invariance testing using the MODEL = CONFIGURAL SCALAR; and PARAMETERIZATION = THETA; commands (I cannot check for metric MI because I have variables loading onto more than one factor). The output gives me a non-significant comparison for scalar against configural, which should be good. But the CFI values slightly improve from the configural (.942) to the scalar (.943) model. Are they not supposed to deteriorate when adding more constraints? Thank you very much, Pia 


Absolute values of fit statistics cannot be compared across models when using WLSMV. Chi-square difference testing is done using the DIFFTEST option. 
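With WLSMV, DIFFTEST is a two-step procedure across two input files (a sketch; the derivatives file name deriv.dat is arbitrary):

```
! Step 1: estimate the less restrictive (e.g., configural) model
! and save the derivatives needed for the difference test
SAVEDATA: DIFFTEST = deriv.dat;

! Step 2: in a second input file, estimate the more restrictive
! (e.g., scalar) model and point DIFFTEST at the saved file
ANALYSIS: DIFFTEST = deriv.dat;
```

The output of the second run then reports the chi-square difference test between the two models.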

Pia H. posted on Tuesday, April 05, 2016  3:32 am



Dear Dr Muthén, thank you for the quick reply. I have a follow-up question: is it possible to change the estimator, or do I have to use DIFFTEST manually? If the latter applies, can I have Mplus estimate the configural and the scalar models and then compare them using DIFFTEST, or do I have to add the constraints to the input manually and then use the DIFFTEST command? Thank you very much, Pia 


Not sure what you are asking. You can't do DIFFTEST manually; it is too complex. You can estimate the model with ML and consider likelihood-ratio chi-square difference testing. But you won't get CFI with ML when you have categorical outcomes, which I think was your interest. 

Pia H. posted on Tuesday, April 05, 2016  6:44 am



Dear Dr Muthén, I might have put too much emphasis on the CFIs in my first post. In fact, I only need to know if my model has configural and scalar measurement invariance. Can I estimate two models, using MODEL = CONFIGURAL for the first model and MODEL = SCALAR for the second one, and then compare the two based on chi-square significance using the DIFFTEST command? Thanks in advance, Pia 


Yes. 


Dear Sir or Madam, I want to test the measurement equivalence of my factor (HIVOEXT) over two groups (1 = patients, 2 = others). Is this syntax correct for estimating configural, metric and scalar invariance? Thank you very much in advance. CONFIGURAL INVARIANCE: USEVARIABLES ARE Q7_1 Q7_4 Q7_9 Q7_10; grouping is PatientCare (1 = pati 2 = others); ANALYSIS: ESTIMATOR is MLR; MODEL: HIVOEXT by Q7_1 Q7_4 Q7_9 Q7_10; MODEL OTHERS: HIVOEXT by !Q7_1 Q7_4 Q7_9 Q7_10; [Q7_1 Q7_4 Q7_9 Q7_10]; [HIVOEXT@0]; METRIC INVARIANCE: USEVARIABLES ARE Q7_1 Q7_4 Q7_9 Q7_10; grouping is PatientCare (1 = pati 2 = others); ANALYSIS: ESTIMATOR is MLR; MODEL: HIVOEXT by Q7_1 Q7_4 Q7_9 Q7_10; MODEL OTHERS: !HIVOEXT by !Q7_1 !Q7_4 !Q7_9 !Q7_10; [Q7_1 Q7_4 Q7_9 Q7_10]; [HIVOEXT@0]; SCALAR INVARIANCE: USEVARIABLES ARE Q7_1 Q7_4 Q7_9 Q7_10; grouping is PatientCare (1 = pati 2 = others); ANALYSIS: ESTIMATOR is MLR; MODEL: HIVOEXT by Q7_1 Q7_4 Q7_9 Q7_10; 


The models to test for measurement invariance for continuous outcomes are shown in the Topic 1 course handout on the website. For categorical outcomes, see the Topic 2 course handout. Both are under the topic multiple group analysis. Chapter 14 describes the models for other situations. 


Dear Prof. Muthen, Thank you for your quick reply. I adjusted the syntax accordingly. Adding [HIVOEXT@0]; in the first part of the MODEL command for both configural and metric invariance results in latent means fixed at zero for both groups. To test scalar invariance, I did not specify [HIVOEXT@0], causing the latent means to differ between groups. This, however, feels counterintuitive, as it seems I am freeing the means to obtain scalar invariance, whereas to my understanding the latent means should be fixed in this last model in order to obtain scalar invariance. I am wondering whether I am mistaken. Should I only specify [HIVOEXT@0] in the scalar model and leave this specification out of the configural and metric models? Thank you very much in advance 


The factor means should be allowed to be different in the groups for the scalar model. It is still a much more restrictive model than metric and configural because you hold the intercepts equal across groups. 
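To make this concrete, a sketch of the scalar model using the poster's variable names: in Mplus multiple-group analysis the defaults already give the scalar model, so no group-specific statements are needed (treat this as an illustration, not verified input).

```
VARIABLE:  GROUPING = PatientCare (1 = pati 2 = others);
ANALYSIS:  ESTIMATOR = MLR;
MODEL:     HIVOEXT BY Q7_1 Q7_4 Q7_9 Q7_10;
! Defaults: loadings and intercepts held equal across groups,
! factor mean fixed at 0 in the first group and free in the second;
! this is the scalar model, with an estimable latent mean difference.
```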


Dear Prof Muthen, Thank you for your answer. Consequently, should I allow the latent factor means to be different in the configural and metric models as well? Thank you very much in advance 


No, that would not be identified. Our UG shows how to set up these invariance models. 


Hi Dr. Muthen, If conducting invariance testing using dichotomous variables over three time points, do you know any causes of category problems for variables? For example, if one of ten variables (at three time points) appears to have 3 categories, instead of two (as follows), what could it mean? HAEIQ56R Category 1 0.282 292.000 Category 2 0.653 676.000 Category 3 0.066 68.000 IAEIQ55R Category 1 0.361 387.000 Category 2 0.580 622.000 Category 3 0.060 64.000 The top category values are the problem. This is the same item at two waves. I have run descriptives in SPSS and they are dichotomous there. Any suggestions? Thanks! Hillary 


You probably have blanks in your data set, which is not allowed with free format, or the number of variable names in the NAMES statement is not the same as the number of columns in the data set. 


Thank you for your suggestions! Just to be certain I understand correctly: in your first response, are you suggesting removing all blank lines of data (subjects who have no data) from the data set? Hillary 


No, I mean that for some variables, the entry may be a blank. SPSS uses blanks for missing values in some cases. This is not allowed in Mplus with free format data. 


I see! Thank you! A format statement is present so I do not have free format data. 


Check that it is correct and that the number of variables is correct. If you can't see the problem, send the files and your license number to support@statmodel.com. 


Fixed it! Thank you so much for all your help! Hillary 

Margarita posted on Friday, September 09, 2016  11:47 am



Dear Dr. Muthen, I am testing for configural vs. scalar invariance for a longitudinal model with 3 time points. I had a couple of questions, if you have the time. I also posted this to SEMNET as I was not sure if it was appropriate for this forum. If not, please ignore my post. After freeing some of the thresholds based on MI, the chi-square difference is still significant. I was wondering 1) does the input look okay? 2) After consulting several examples I am not clear as to how one can check whether the factor loadings are invariant across groups? Should I compare the MI in the "By" section and free those that are different in one of the groups? ANALYSIS: ESTIMATOR = WLSMV; PARAMETERIZATION = THETA; MODEL = CONFIGURAL SCALAR; MODEL: E1 by S3_T1 S8_T1 S13_T1 S16_T1 S24_T1; E2 by S3_T2 S8_T2 S13_T2 S16_T2 S24_T2; E3 by S3_T3 S8_T3 S13_T3 S16_T3 S24_T3; C1 by S5_T1 S7_T1 S12_T1 S18_T1 S22_T1; C2 by S5_T2 S7_T2 S12_T2 S18_T2 S22_T2; C3 by S5_T3 S7_T3 S12_T3 S18_T3 S22_T3; S5_T2 WITH S13_T2; S24_T2 WITH S16_T2; MODEL FEMALE: S16_T1 WITH S24_T1; [S3_T1$1*]; [S13_T2$1*]; [S16_T2$1*]; [S7_T2$1*]; [S18_T2$1*]; Thank you! 


If you have binary outcomes you should not test for metric invariance. See our description of invariance testing in the UG. 

Margarita posted on Friday, September 09, 2016  1:55 pm



Thank you for your prompt reply. I apologise if my post was not clear. The indicators of my model have 3 categories (2 thresholds). 


You can impose the metric model described on page 486 of the V7 UG and then look for loadings with large MIs. 

Margarita posted on Monday, September 12, 2016  4:31 am



Thank you for your reply. I have one last question, if you have the time. I was reading in the UG that factor loadings and thresholds need to be freed in tandem. Given that in metric invariance with theta parameterization only the 1st and 2nd indicators of each factor are held equal across groups, I should free factor loadings with large MIs but only the thresholds that correspond to those held equal across groups, correct? e.g. F1 by item1 item2 item3 item4; (assuming that item1 and item2 are held equal) Group 2: F1 by item1* item4*; [item1$1 item1$2]; item1@1; Thank you! 


That's a reasonable approach. 


I am attempting to test a measurement invariance ESEM model of personality disorder symptoms (80 items responded to on a 5-point scale) between treatment-seeking and nontreatment-seeking groups. I am using the shortcut method for testing configural and scalar invariance, where I specify: MODEL IS CONFIGURAL SCALAR; Mplus does not test for METRIC invariance with categorical outcomes. In order to do this, I am following the example provided in 5.27 (p. 100) of the UG. However, I'm having trouble relaxing the default equality constraint on the item thresholds. The model runs fine when I relax the first threshold of every item (e.g., [Y1$1-Y80$1]). But when I do this for all 4 thresholds, I get the following message: THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING THE FOLLOWING PARAMETER: Parameter 1547, Group G2: F3 BY COBC4 THE CONDITION NUMBER IS 0.476D-15. Is there an easier, alternative method for testing for metric invariance? If not, what is the correct syntax for relaxing the equality constraint on the item thresholds? Would it be, for example: [Y1$1-Y80$1]; [Y1$2-Y80$2]; [Y1$3-Y80$3]; [Y1$4-Y80$4]; Any help would be appreciated. Thank you. 


Please send the output and your license number to support@statmodel.com. 

yvette xie posted on Wednesday, October 12, 2016  12:47 am



Dear Professor, I'm doing a study about parental behavior using the PBI scale. I want to demonstrate that the scale works equally across mothers and fathers. But the scores for mothers' and fathers' parenting behaviors were obtained from the same children instead of from two groups. In this case, can I use multigroup CFA to conduct a measurement invariance test by specifying mother and father as two separate groups? Thanks in advance! Best Regards, Yvette 


You can test measurement invariance in a single group by letting mother and father variables be arranged in a wide format  just as if it was two time points. 
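As a sketch of that wide-format setup, with hypothetical item names m1-m3 (mother report) and f1-f3 (father report); the labels in parentheses impose the cross-rater loading and intercept equalities, analogous to two time points:

```
MODEL:
  mom BY m1* m2 m3 (l1-l3);     ! mother-report factor
  dad BY f1* f2 f3 (l1-l3);     ! father-report factor, same loading labels
  [m1 f1] (i1);                  ! equal intercepts across raters, item 1
  [m2 f2] (i2);
  [m3 f3] (i3);
  mom@1; [mom@0];                ! identify metric/mean via the mother factor
  dad*;  [dad*];                 ! father factor variance and mean free
  mom WITH dad;                  ! raters correlate (same children)
```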

yvette xie posted on Wednesday, October 12, 2016  6:12 pm



Thank you so much! Best, Yvette 


Hi, I'm testing invariance using the MLR estimator. I'd like to use McDonald's noncentrality index to compare models, which I am able to calculate using the formula posted by marlies (on Tuesday, October 29, 2013, 9:48 am). It would seem that I should introduce the scaling factor into that calculation, since the difference of two scaled chi-squares does not have a chi-square distribution, but I cannot find documentation for how to do that. I know that Mplus does not provide this index and therefore may have no opinion, but I'm grateful for any input. Thanks so much, as always! 


Since this is a fit index I would recommend not correcting it further. The MLR chi-square has the scaling correction already in it. If you decide to pursue this anyway, I would suggest using exp(x) ≈ 1 + x (which makes it linear, and then you can use the usual methods for differences of chi-squares). 
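For readers who do want the standard correction for comparing two MLR-scaled chi-squares, the Satorra-Bentler scaled difference procedure documented on the Mplus website can be sketched as a small helper (the function name and arguments are mine, not Mplus output):

```python
def scaled_diff_test(T0, df0, c0, T1, df1, c1):
    """Satorra-Bentler scaled chi-square difference test for MLM/MLR.

    T0, df0, c0: scaled chi-square, degrees of freedom, and scaling
    correction factor of the nested (more restrictive) model.
    T1, df1, c1: the same quantities for the comparison model.
    Returns the scaled difference statistic and its degrees of freedom.
    """
    # Scaling correction for the difference test
    cd = (df0 * c0 - df1 * c1) / (df0 - df1)
    # T*c recovers the uncorrected chi-square; divide the difference by cd
    TRd = (T0 * c0 - T1 * c1) / cd
    return TRd, df0 - df1
```

Whether this belongs inside an NCI calculation is, as the reply notes, questionable; the helper only reproduces the documented difference-test correction.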


Dear Mplus team, I am testing invariance of all parameters (successively) in a CFA model between eight groups. When constraining a particular subset of factor loadings to invariance, I get the following message: NO CONVERGENCE. SERIOUS PROBLEMS IN ITERATIONS. ESTIMATED COVARIANCE MATRIX NONINVERTIBLE. CHECK YOUR STARTING VALUES. What is puzzling me is that every other nested model, both less constrained and more constrained (25 models in total) converges just fine. I have already tried using the estimates from the closest less restricted model as starting values  to no avail. What can I do? Thanks so much! Best, Heiko 


Please send the output and your license number to support@statmodel.com. 


Dear Linda, For my PhD research, I am conducting cross-cultural research in which I study gambling behaviours in two different countries. I want to test the measurement invariance of my measures between these two countries before starting to conduct cross-national comparisons. My dependent variable is categorical and my independent variables are continuous. I am thinking of using the shortcut method for testing configural and scalar invariance, where I specify MODEL IS CONFIGURAL SCALAR (as my outcome variable is binary and categorical). Could you please tell me if I can use this shortcut with all measures at the same time, or should I test the measurement invariance of each measure separately? Many thanks, Filipa 


These are meant for testing measurement invariance of factors not observed variables. 


Many thanks for your reply. I know that this shortcut method is used for testing measurement invariance of factors, not observed variables. But in my case I need to test measurement invariance for more than one factor: one factor that is my outcome variable (composed of binary observed indicators) and two other factors that are my independent variables. Therefore, could you please tell me whether I need to write this shortcut separately for each of my factors, that is, first for my dependent variable and then again for each of my independent variables, or can I write this shortcut for all my factors at the same time? Many thanks 


You can do all factors at the same time. 

Artur posted on Sunday, December 11, 2016  3:05 am



Dear Mplus Team, in a BSEM multiple-group invariance analysis, to specify approximate invariance across all items and all groups the priors specification looks like this (example from Mplus Web Note 17, Table 12; 6 items, 10 groups): MODEL PRIORS: DO(1,6) DIFF(lam1_#-lam10_#)~N(0,0.10); DO(1,6) DIFF(nu1_#-nu10_#)~N(0,0.10); The question is how to specify the partial BSEM (situation 4 from Web Note 17: freeing noninvariants, BSEM V=0.10 for others, Table 8). For instance, how do I free the factor loading for item 1 in group 1 (and not for other groups), item 2 in group 2, and say item 4 in group 5? I could not find examples of such an analysis on the website or in the manual. Thanks in advance. 


You just skip the loading in question when you do the labeling in the MODEL command. So instead of saying for a certain class x: f by y1-y3* (lamx_1-lamx_3); you don't label, say, the y1 loading and say f by y1* y2-y3* (lamx_2-lamx_3); and then refer to only those two labels in MODEL PRIORS. 


Hello, 1. I have seen two different formulas for calculating McDonald's noncentrality index. One uses N and the other uses N-1. Which one is more appropriate with the Mplus chi-square estimate? 2. Can I use a chi-square produced by MLR for calculating McDonald's noncentrality index, or should I only use a chi-square produced by ML? Thank you so much in advance. 


I am not familiar with this index. You may want to ask on SEMNET. 
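For reference, the formula the poster is asking about is commonly given as Mc = exp(-(χ² - df) / (2(N-1))), with some authors using N in place of N-1. A small sketch covering both variants (function name is mine; note the reply's caution that Mplus does not compute or endorse this index):

```python
import math

def mcdonald_nci(chi2, df, n, use_n_minus_1=True):
    """McDonald's noncentrality index: exp(-(chi2 - df) / (2 * d)),
    where d is N-1 (the common form) or N.
    Values near 1 indicate good fit; chi2 == df gives exactly 1.0."""
    d = n - 1 if use_n_minus_1 else n
    return math.exp(-(chi2 - df) / (2 * d))
```

With large N the two variants are numerically almost identical, which is why both appear in the literature.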


Dear Linda and Bengt, I am attempting to run a MGCFA to test for measurement invariance between males and females. My indicator variables are categorical. My syntax for the configural model is this: GROUPING = m1 (1=BOYS, 2=GIRLS); ANALYSIS: ESTIMATOR IS WLSMV; MODEL: PHYAB BY vip27_r* vip28_r vip29_r vip30_r; PHYAB@1; SEXAB BY vip31_r* vip32_r vip33_r vip34_r; SEXAB@1; MODEL GIRLS: PHYAB BY vip27_r vip28_r vip29_r vip30_r; [vip27_r vip28_r vip29_r vip30_r]; [PHYAB@0]; SEXAB BY vip31_r vip32_r vip33_r vip34_r; [vip31_r vip32_r vip33_r vip34_r]; [SEXAB@0]; However, I always get the following error message: The following MODEL statements are ignored: * Statements in Group GIRLS: [ VIP27_R ] [ VIP28_R ] [ VIP29_R ] [ VIP30_R ] [ VIP31_R ] [ VIP32_R ] [ VIP33_R ] [ VIP34_R ] The model runs for scalar invariance if I remove the [vip...] statements, but I would like to test for both configural and metric invariance. What might be my problem? Thanks for your help! 


With categorical variables you need to use $: [y$...] 
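To make the $ notation concrete, a configural-style group statement for the poster's items might look like this, assuming the items are binary (one threshold each); ordinal items would free [y$1], [y$2], etc. in the same way:

```
MODEL GIRLS:
  PHYAB BY vip27_r vip28_r vip29_r vip30_r;
  SEXAB BY vip31_r vip32_r vip33_r vip34_r;
  [vip27_r$1 vip28_r$1 vip29_r$1 vip30_r$1];   ! free thresholds, not [y]
  [vip31_r$1 vip32_r$1 vip33_r$1 vip34_r$1];
  [PHYAB@0 SEXAB@0];
```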


Thanks so much! That worked. Unfortunately, I now get an error message telling me that one of my groups has 0 observations. I have checked the dataset and this isn't the case. I also get a warning message saying that I have five times as many missing values as I actually have. 


Please send the files and your license number to support@statmodel.com. 


Just to confirm: the default settings in Mplus for the measurement model in multiple-group analysis correspond to the scalar model, is that correct? So for observed categorical dependent variables using the default Delta parameterization, this would constrain factor loadings and thresholds to be equal across groups? Also, I read elsewhere in the discussion group that absolute fit indices cannot be compared for the WLSMV estimator. So does that mean that, apart from DIFFTEST, it is not appropriate to say that the value of the CFI/TLI/RMSEA/SRMR is slightly better or improved in one model versus another (i.e., based on the numeric estimate and whether it is higher or lower in one model compared to another)? 


Yes on both. 


Thanks. Could you point me to a reference I can cite on why absolute fit indices shouldn't be compared for WLSMV? 


I can't think of one. 


Dear Bengt and Linda, In the Mplus User's Guide Version 7 it says: "The metric model is not allowed for ordered categorical (ordinal) variables when a factor indicator loads on more than one factor, when the metric of a factor is set by fixing a factor variance to one, and when Exploratory Structural Equation Modeling (ESEM) is used." (p. 486) Why can't we assess the metric measurement invariance model for a measurement model with ordinal indicator variables and cross-loading items? What conclusions can we draw about measurement models with cross-loading items? Best, Leonhard 


We don't really recommend the metric model for ordinal variables but instead recommend going straight to the scalar model; then you can consider invariance also for cross-loadings. For more on the ordinal case, see Roger Millsap's measurement book, which goes through various cases. 

Ashley posted on Saturday, April 15, 2017  9:16 pm



Hello, I'm trying to test measurement invariance in a multiple group confirmatory factor analysis clustered by community. I am looking at whether my model varies across different groups (e.g., urban, rural). My model is set up as follows: data: file = missimplist.dat; type = IMPUTATION; variable: names = usevariables = categorical = cluster = COMM; missing = ALL (999); Grouping IS RURAL (1= rural 2= urban); analysis: TYPE = COMPLEX; ESTIMATOR = WLSMV; model: F1 BY V1-V5; F2 BY V6-V9; F3 BY V10-V15; I'm having trouble setting up the next steps (MODEL CONSTRAINT: and MODEL TEST:). Any insights or resources you might be able to share to help me set this up would be very much appreciated! 


Testing for measurement invariance across groups for continuous items is shown in the Topic 1 course video and handout under multiple group analysis. For categorical items, see the Topic 2 course video and handout. 

Ashley posted on Saturday, May 20, 2017  9:59 am



I have reviewed the video and handout and found them to be very helpful for setting up my model; however, especially given that I'm working with imputed datasets, I'm confused as to how I can interpret the findings. More specifically, how do I know if the invariance is significant or if my model needs to be adjusted? Thank you in advance. 


Invariance testing involves chi-square difference testing and this has not been developed for Data Type=Imputation. See slides 170-217 of our 6/1/11 handout for Topic 9 on our website. For invariance testing we therefore recommend not using Data Type=Imputation but instead handling missing data using e.g. MLR. 

Ashley posted on Sunday, May 21, 2017  1:27 pm



Thank you. Is there any type of similar tests that I could do in Mplus using Type=Imputation? If possible, I want to test my model with the imputed datasets given that all other analyses use imputed datasets. Thank you, Ashley 


Dear Mplus team, I am currently testing the measurement invariance of a scale in two different samples. However, when I checked the measurement model of this scale in the total sample (before starting to examine its measurement invariance between the two samples), I added two residual correlations between factor indicators in order to improve the model fit. Could you please tell me how I can include these two correlations in the syntax when checking the invariance of this scale between these two samples? Should I write the following syntax: Analysis: MODEL = CONFIGURAL METRIC SCALAR; Model: SK by Senk1 Senk2 Senk3 Senk4 Senk6 Senk7 Senk8; Senk2 WITH Senk1; Senk7 WITH Senk1; Or should I fix these correlations to 0? What do you suggest I do? Many thanks for all your help, 


I would recommend not including the residual covariances until later if needed because the allowance of noninvariance may change the need for them. 


Ashley  you can use Model Test to test that measurement parameters are equal. 

Ashley posted on Saturday, June 10, 2017  11:14 am



Hello, I ran my cfa's separately for each group (urban/rural). I now would like to see if the factor loading for each group correlate (e.g., does loading 1 of factor 1 of the urban analysis correlate with loading 1 of factor 1 of the rural analysis). Is there a code to do this in mplus? Thank you, Ashley 


By "correlate" perhaps you mean the estimated correlation between the two parameter estimates. If so, you find that in TECH3. But perhaps instead you mean how close the factor loadings are in the two groups, in which case one can correlate them; this is not done automatically in Mplus. 

Ashley posted on Saturday, June 10, 2017  12:34 pm



Thank you. I apologize for the lack of clarity! I want to do the second one: see how close the factor loadings are in the two groups. Is there a code I can use to do this manually in Mplus? 

Ashley posted on Monday, June 12, 2017  9:18 pm



Hello, I would like to follow up to inquire whether the analysis described in the question above is possible. Is there a code I can use to manually correlate factor loadings across separate groups in Mplus? Thank you! 


No. You would instead save the results and then do a correlational analysis (in any program). 
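A sketch of that follow-up step, assuming the estimated loadings have been copied out of the two group-specific outputs (the loading values below are made up for illustration):

```python
def pearson(x, y):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical standardized loadings from the urban and rural runs
urban = [0.71, 0.65, 0.80, 0.58, 0.62]
rural = [0.68, 0.60, 0.77, 0.55, 0.66]
r = pearson(urban, rural)
```

Any statistics package would do the same; the point is simply that the correlation is computed outside Mplus on the saved estimates.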

Ashley posted on Tuesday, June 13, 2017  7:43 pm



Thank you. I'm getting the following error: *** WARNING in SAVEDATA command The FILE option is not available for TYPE=MONTECARLO or TYPE=IMPUTATION. The FILE option will be ignored. Is it possible to save results if the analyses are run on an imputed dataset? Thank you! 

Ashley posted on Friday, June 16, 2017  10:04 pm



I would like to follow up regarding the question above: is it possible to save results if analyses are conducted on imputed datasets? 


I'm testing configural invariance across two groups using input from the Topic 1 course handout (pg. 212). However, the code will not run. Below is the input and error. Can you tell me what I am doing wrong? usevariables are univedu workpay sellprop finanind wkouthm decwkout decmoney dleavehm dfoodeat dwrkpreg drestprg fomhosp fommovie fomrest fomcoffe fommall fomfriend fomparks; grouping is nationality (1=QT 2=NQ); Missing are all (9999); Model: handr by univedu-wkouthm; [handr@0]; decision by decwkout-drestprg; [decision@0]; fom by fomhosp-fomparks; [fom@0]; Model NQ: [workpay-fomparks]; output: standardized modindices(all 0); *** ERROR The following MODEL statements are ignored: * Statements in Group NQ: [ WORKPAY ] [ SELLPROP ] [ FINANIND ] [ WKOUTHM ] [ DECWKOUT ] [ DECMONEY ] [ DLEAVEHM ] [ DFOODEAT ] [ DWRKPREG ] [ DRESTPRG ] [ FOMHOSP ] [ FOMMOVIE ] [ FOMREST ] [ FOMCOFFE ] [ FOMMALL ] [ FOMFRIEND ] [ FOMPARKS ] 


Please send your output to Support along with your license number. 


I am testing measurement invariance with a three-factor model using categorical items. I tested the metric invariance model and have identified a model with partial metric invariance. However, when I remove the lines of code that override the intercept invariance default and use the DIFFTEST command to test this, I get an error message that the models are not nested. All I have done is remove these lines of code; otherwise the models are the same. I also tried it the other way around and that produces the same error. Can you tell me what I am doing wrong? 


Please send the two outputs and your license number to support@statmodel.com. 

Lois Downey posted on Wednesday, October 04, 2017  4:48 pm



I frequently encounter cases where, if I run a single-factor CFA, the test of fit shows a statistically significant chi-square, thus suggesting that the indicators are not unidimensional. However, if I do a 2-group model with the same sample, specifying the same factor structure and imposing the default between-group invariance, the chi-square test is nonsignificant. Is this a common occurrence, or does it suggest that one or both of my models are misspecified? If it is a common occurrence, what is the explanation, in easy-to-understand terms? Thanks! 


I'm not clear on whether the run in your first paragraph is on the total sample (putting the 2 groups together), or if it is 2 analyses, one for each group. If it is the latter, the outcome is strange. You could send an example to Support. 

Lois Downey posted on Wednesday, October 04, 2017  6:35 pm



The run noted in my first paragraph is on the total sample (combining the 2 groups). 

Lois Downey posted on Thursday, October 05, 2017  7:30 am



However, I've now also run the model for two separate groups, and it is still the case that I get significant misfit for the separate groups, but nonsignificant misfit for the 2group model. I will send an example to Support, as you have suggested. Thanks! 

Joao Garcez posted on Sunday, November 19, 2017  4:28 am



Dear Drs Linda & Bengt Muthen, Good morning. I'm testing the longitudinal invariance of a measure, but since n > 2800 at both T1 and T2 I am concerned the chi-square difference test will be impacted so as to provide significant results irrespective of actual invariance (Kang et al., 2015). I considered using McDonald's NCI formula to compare configural, metric and scalar models and bypass the influence of sample size. However, when I used the chi-square model fit and CFI with WLSMV, I get a better fit for scalar than for the configural model, which if I understood correctly is something that should not be happening. In previous threads you warned that when resorting to WLSMV, the values of the chi-square model fit/CFI should not be used and only the chi-square difference should be considered. Hence: 1. Would it be correct to assume that the values of CFI and chi-square model fit as calculated via WLSMV cannot be used for the GFI comparisons suggested by Kang et al. (2015)? Is there a way to use the GFI comparisons with WLSMV estimates? 2. Is there an alternative that you'd suggest that still accounts for group size? 3. In your guide you suggest freeing thresholds/loadings in tandem when doing partial MI. Does this mean I should also constrain in tandem and skip metric model invariance and just do configural vs scalar? Thanks in advance for any help you can provide, Best 


I would look at modification indices for the scalar model and see which parameters need to be noninvariant. If the noninvariance is substantively small, I would ignore the misfit judged by chi-square because it can be deemed "oversensitive" due to a large sample (but N=2800 isn't that large with categorical outcomes). I don't think GFI can be done using WLSMV. You may also want to ask on SEMNET. 

Joao Garcez posted on Sunday, November 19, 2017  2:15 pm



Dear Dr. Bengt Muthen, Thank you for your reply, I really appreciate it. If I may ask, in your opinion is there a standard that I should be considering as "substantively small noninvariance"? Furthermore, is the answer to question 3 something that I should also enquire about in SEMNET? Thank you once again, Best. 


Q1: No, this depends on your field/application. Q2: I would just do scalar vs configural and skip metric. 

Joao Garcez posted on Monday, November 20, 2017  11:25 pm



Dear Dr. Muthen, Thank you. Best, Joao 

Louise Black posted on Thursday, November 23, 2017  3:49 am



Dear Drs Muthen, I am working with a bifactor model with 15 categorical and 4 continuous indicators. The categorical items are from a scale that breaks into 2 residualised factors, while the continuous items make up 1 further residualised specific factor. I am using WLSMV and now want to test for invariance, so I have a few questions if you have the time: 1. I assume I should use a four-step (baseline, configural, metric, scalar) approach since continuous items are involved, or should I skip metric as you suggest above? 2. Doing the four-step approach I find metric but not scalar invariance, and I am unclear how to proceed to test for partial MI here. I presume I should look at modification indices, but should I free loadings with thresholds in tandem as you suggest in your previous posts and the UG (but only intercepts for the continuous items), or just thresholds and intercepts individually? 3. If I should free loadings alongside thresholds (or intercepts), would I free the loadings on both the general and specific factors of the bifactor model? 4. Finally, would you be able to provide any additional insight into why thresholds are more related to the item probability curve than intercepts? Many thanks! Louise 


Settling for metric vs scalar invariance depends on what the model will be used for. If the use is to only compare say factor variances, metric is sufficient. But if the use is to compare factor means, scalar is needed. I would change both thresholds/intercepts and loadings. And for both general and specific factors. But these are general analysis strategies better discussed on SEMNET. 


Many thanks! 

Peter McEvoy posted on Wednesday, November 29, 2017  8:11 pm



Dear Drs Muthen, We are testing measurement invariance using MODEL = CONFIGURAL METRIC SCALAR for a simple single-factor model at one timepoint across three groups (with different primary mental disorders). The output suggests no significant difference when comparing metric against configural, but scalar against configural and scalar against metric are both significant (ps = .01 and .002, respectively). We now want to locate where exactly the noninvariance lies. We've requested "modindices(all)" in the OUTPUT line to help us identify sources of strain. However, we receive the following warning message, which suggests that we cannot use modification indices for this purpose with this model: "MODINDICES option is not available when performing measurement invariance testing with multiple models with the MODEL option of the ANALYSIS command. Request for MODINDICES is ignored." Can you please advise the next step? Do we need to write out the code without the MODEL = CONFIGURAL METRIC SCALAR option before we can move forward? If so, is the purpose of this MODEL command just an initial quick run to see if you need to go further with the full code to identify sources of strain? Thanks 


Request MODINDICES using the default input, which is the scalar model (see examples in the UG). Yes, the purpose of MODEL= is to get a first overview. 


I am performing a CFA with 11 ordinal indicators and three continuous factors. Within the sample, I have two groups (Household Head and Women). I am trying to assess measurement invariance across the groups by running an unconstrained and then a constrained model to perform the log-likelihood test to confirm the invariance. Due to the ordinal variables, the default estimator for the CFA is WLSMV, which does not produce the log-likelihood function. I tried to specify ESTIMATOR = MLR and am receiving the following error: "ERROR in ANALYSIS command: ALGORITHM=INTEGRATION is not available for multiple group analysis. Try using the KNOWNCLASS option for TYPE=MIXTURE". Is there any way to use the MLR estimator so that I can perform the log-likelihood test to confirm measurement invariance? VARIABLE: NAMES ARE modulename GM GPTN GMB CA CAD SSB SSL TRUSTL TRUSTN TRUSTS TRUSTAD SCR SCSB; USEVARIABLES = GMB CA CAD SSB SSL TRUSTL TRUSTN TRUSTS TRUSTAD SCR SCSB; CATEGORICAL = GMB CA CAD SSB SSL TRUSTL TRUSTN TRUSTS TRUSTAD SCR SCSB; Grouping is modulename (1 = HoH 2 = Women); Analysis: MODEL = NOMEANSTRUCTURE; INFORMATION = EXPECTED; MODEL: f1 BY GMB CA CAD; f2 BY SSB SSL; f3 BY TRUSTL TRUSTN TRUSTS TRUSTAD SCR SCSB; Model Women: f1 BY GMB CA CAD; f2 BY SSB SSL; f3 BY TRUSTL TRUSTN TRUSTS TRUSTAD SCR SCSB; 


For ML you need to use KNOWNCLASS. See the UG for how to do that. You can also use DIFFTEST with WLSMV. Again, see the UG. 
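A sketch of the KNOWNCLASS setup the answer refers to, using the variable names from the post above; the GROUPING statement is replaced by CLASSES/KNOWNCLASS, and this should be treated as an illustration rather than verified input:

```
VARIABLE:  ! NAMES, USEVARIABLES, CATEGORICAL as in the post above
           CLASSES = cg (2);
           KNOWNCLASS = cg (modulename = 1  modulename = 2);
ANALYSIS:  TYPE = MIXTURE;
           ESTIMATOR = MLR;
           ALGORITHM = INTEGRATION;
MODEL:
  %OVERALL%
  f1 BY GMB CA CAD;
  f2 BY SSB SSL;
  f3 BY TRUSTL TRUSTN TRUSTS TRUSTAD SCR SCSB;
  ! For the unconstrained model, repeat the BY statements in %cg#2%
  ! (omitting each factor's first, fixed indicator) to free the
  ! loadings in the second group.
```

The two runs (with and without the class-specific statements) can then be compared with the usual likelihood-ratio chi-square difference test.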

Youhua Wei posted on Friday, May 11, 2018  8:42 am



Hi, I'm trying to run a Monte Carlo simulation to check the parameter estimation in the alignment (between males and females) for a test with 99 binary items. Here is the code: MONTECARLO: NAMES = l1-l99; NGROUPS = 2; NOBSERVATIONS = 2(5000); NREPS = 10; GENERATE = l1-l99(1); CATEGORICAL = l1-l99; ANALYSIS: TYPE=MIXTURE; ESTIMATOR = ML; ALIGNMENT = FIXED; PROCESSORS = 8; ALGORITHM = INTEGRATION; MODEL POPULATION: %OVERALL% f by l1-l99*1; %g#1% f BY l1*1.12311; ......... [ f*0 ]; [ l1$1*1.91102 ]; ......... f*1; %g#2% f BY l1*0.87657; ......... [ f*0.98557 ]; [ l1$1*2.01662 ]; ......... f*1.27121; In the MODEL RESULTS, the population parameters are either 1 (for class 1) or -1 (for class 2) for all thresholds (compared with est. avg., s.e., etc.); and there is no comparison for loadings. Any problems with my coding? Thanks! 


Please send your full output to Support along with your license number. 

Youhua Wei posted on Thursday, May 17, 2018  6:08 am



Hi Dr. Muthen, I have sent the full output to Support along with my license number. Thank you. Youhua 


Hello, I am trying to assess measurement invariance between two groups in a three-factor model that has a combination of continuous, binary, and ordered categorical indicators. Specifically, one factor has all continuous indicators, the second factor has continuous indicators plus one binary indicator, and the third factor has continuous, binary, and ordinal indicators. Am I correct in my understanding that I can only assess the configural and scalar models in this case? From the user's guide, it is clear that this is true when all indicators are binary, but it is not clear if it applies when indicators are mixed continuous/binary. Thank you! 


Focusing on the configural and scalar models is the simplest approach. It is possible to do a more item-specific approach, but it is cumbersome. 
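With WLSMV and a grouping variable, the convenience options can request just these two models in a single run; a sketch, assuming that setup (with categorical outcomes in the default Delta parameterization, the metric model is not offered on its own):

```
ANALYSIS:
  ESTIMATOR = WLSMV;
  MODEL = CONFIGURAL SCALAR;
```

Mplus then reports fit for both models along with the chi-square difference test between them.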


I am using MGCFA to examine measurement noninvariance across sex for a personality questionnaire. After the global tests, I test each item separately to try to find the problematic items. I would like to report effect sizes for the items' factor loadings and thresholds. Is there a way I can compute this from the output? 


If you have a definition of effect size in this context, you can use Model Constraint to express it. SEMNET might provide a definition. 
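As one hypothetical way to set this up once a definition is chosen: label the group-specific parameters and form the difference in MODEL CONSTRAINT. The group labels, the item picked, and the use of a raw (unstandardized) difference are all assumptions here; note that the group-specific statements free those parameters across groups, which is what you want when quantifying noninvariance you have already detected:

```
MODEL:        f BY y1-y10;
MODEL male:   f BY y2 (l2m);
              [y2$1] (t2m);
MODEL female: f BY y2 (l2f);
              [y2$1] (t2f);
MODEL CONSTRAINT:
  NEW(ldiff tdiff);
  ldiff = l2m - l2f;   ! raw loading difference for item 2
  tdiff = t2m - t2f;   ! raw threshold difference for item 2
```

Mplus reports estimates and standard errors for the NEW parameters, so each difference comes with a test of its own.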


Thank you! 

Tom Bailey posted on Saturday, December 22, 2018  3:06 am



Hi there, I'm getting slightly different results for my metric model when I run the 'new' overall CONFIGURAL METRIC SCALAR syntax versus when I specify the individual models myself, and I'm wondering why that might be. For the overall option: MODEL: POSGAI BY RPGS1* RPGS2 RPGS3 RPGS4 RPGS5 RPGS6 RPGS7; RPGS4 WITH RPGS5; POSGAI@1; [POSGAI@0]; When I run metric myself I use (for each of the 3 groups): MODEL gr1: [RPGS1-RPGS7]; RPGS4 WITH RPGS5; Cheers, Tom 

Tom Bailey posted on Saturday, December 22, 2018  3:08 am



By the way, the two methods gave the same results for configural (with the BY statement in each group to allow the loadings to differ), just not for metric invariance. Cheers, Tom 


Do you get the same number of parameters for your 2 different metric runs? The same logL? 

Tom Bailey posted on Saturday, December 29, 2018  4:18 am



No. When I run my own metric model, it has 2 more df (and thus 2 fewer parameters) than the metric model from CONFIGURAL METRIC SCALAR. My own syntax is: VARIABLE: GROUPING IS Group (1 = gr1 2 = gr2 3 = gr3); MODEL: POSGAI BY RPGS1* RPGS2 RPGS3 RPGS4 RPGS5 RPGS6 RPGS7; RPGS4 WITH RPGS5; POSGAI@1; [POSGAI@0]; MODEL gr1: [RPGS1-RPGS7]; And so forth for the 3 groups. Thanks, Tom 


You can compare the two outputs using TECH1, or compare the results, to see where the difference is. 
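One pattern worth checking here (a guess based on the convenience-option conventions, not a diagnosis of these particular outputs): POSGAI@1 in the overall MODEL fixes the factor variance in all 3 groups, whereas the METRIC model produced by MODEL = CONFIGURAL METRIC SCALAR typically fixes the factor variance only in the first group and frees it in the others, which with 3 groups is exactly a 2-parameter difference. A manual metric model matching that convention might look like:

```
MODEL:      POSGAI BY RPGS1* RPGS2-RPGS7;
            RPGS4 WITH RPGS5;
            POSGAI@1; [POSGAI@0];
MODEL gr1:  [RPGS1-RPGS7];
MODEL gr2:  [RPGS1-RPGS7];
            POSGAI*;     ! free the factor variance in groups 2 and 3
MODEL gr3:  [RPGS1-RPGS7];
            POSGAI*;
```

TECH1 on both runs would confirm whether the extra 2 parameters are indeed the group 2 and group 3 factor variances.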

Ti Zhang posted on Wednesday, January 02, 2019  10:02 pm



Hi, Dr. Muthen, I am trying to understand how threshold changes would affect the latent mean difference estimates across 2 groups under a configural invariance model, using the Monte Carlo command. What I have found is that in the data generation model, when I change the first item's threshold in one group while all other parameters remain the same across groups, the latent mean difference estimates are very biased (given that the population value for the mean difference has been set, e.g., to 0.2 or 0.5). When I change the other items' thresholds, however, the latent mean differences show no bias (they are pretty close to the population value). I am wondering why the first item is so special. Is the first item handled in some special way in Mplus that would explain the difference? Thank you. 


Send the 2 outputs showing bias and no bias to Support along with your license number. 

Olev Must posted on Monday, August 26, 2019  3:11 am



Hi, I am conducting invariance testing (binary data, WLSMV). In the process of freeing thresholds I got the following message: THE MODEL ESTIMATION TERMINATED NORMALLY. THE CHI-SQUARE DIFFERENCE TEST COULD NOT BE COMPUTED BECAUSE THE H0 MODEL MAY NOT BE NESTED IN THE H1 MODEL. DECREASING THE CONVERGENCE OPTION MAY RESOLVE THIS PROBLEM. THE OPTIMAL FIT FUNCTION VALUE FOR THE H0 MODEL IS SMALLER THAN THE OPTIMAL FIT FUNCTION VALUE FOR THE H1 MODEL. THE FIT FUNCTION VALUE FOR THE H0 MODEL IS 0.0049894. THE FIT FUNCTION VALUE FOR THE H1 MODEL IS 0.0054055. VERIFY THAT THE MODELS ARE NESTED USING THE NESTED OPTION. Please suggest how I should continue. The nesting is correct; it worked in previous steps. Sincerely, Olev 


You can try the NESTED option, but from the output message it is clear that the H1 model fits worse than H0, which should not happen for nested models. Check that you have set up the two models correctly. You say that "previous steps" have shown nestedness; check what's different here. If this doesn't help, send your relevant outputs to Support along with your license number. 


P.S. To try to improve the H1 model fit function value, you can also run the less restricted model with either STARTS = 20 or a sharper convergence criterion. 
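For reference, the DIFFTEST mechanics take two runs (the derivative file name deriv.dat is arbitrary); the sharper convergence criterion suggested above goes in the ANALYSIS command of the H1 run:

```
! Run 1: the less restrictive (H1) model
ANALYSIS:
  ESTIMATOR = WLSMV;
  CONVERGENCE = 0.000001;   ! sharper than the default
SAVEDATA:
  DIFFTEST = deriv.dat;

! Run 2: the more restrictive (H0) model, in a separate input file
ANALYSIS:
  ESTIMATOR = WLSMV;
  DIFFTEST = deriv.dat;
```

The second run reads the derivatives saved by the first and prints the chi-square difference test, or the nesting warning if the fit function values are in the wrong order.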
