Hi there, I am attempting to produce a final model which takes into account measurement invariance I have found by sex. We are working with a two-factor solution, and when we regress the two factors on sex we find significant paths for both factors. Using the modification indices, we then identified several observed variables (loading on factor 2) which show significant paths to sex. When these paths from observed variables to sex were included in the model, the relationship between factor 2 and sex became non-significant. We then took out the path between factor 2 and sex. However, despite the fact that this path was not significant, it had a dramatic effect on the chi-square value, suggesting that the path needed to be in the model. We then added the path back in, but set it to zero. This fixed the chi-square problem. We are unsure as to why this would happen. Why are we getting such dramatically different chi-square values if the paths we remove are non-significant, and why does including a path that is set to 0 have such a dramatic effect on the chi-square value? Thank you so much for your time, Kaja
I am attempting to compare nested measurement models to test for measurement invariance. I attempted to freely estimate the model's parameters for the "female" sample by using the following command:
MODEL: support BY facesco@1 facesad; combat WITH support;
MODEL female: support BY facesco@1 facesad; combat WITH support;
However, I am receiving the following warning: THE RESIDUAL COVARIANCE MATRIX (THETA) IN GROUP MALE IS NOT POSITIVE DEFINITE. THIS COULD INDICATE A NEGATIVE VARIANCE/RESIDUAL VARIANCE FOR AN OBSERVED VARIABLE, A CORRELATION GREATER OR EQUAL TO ONE BETWEEN TWO OBSERVED VARIABLES, OR A LINEAR DEPENDENCY AMONG MORE THAN TWO OBSERVED VARIABLES. CHECK THE RESULTS SECTION FOR MORE INFORMATION. PROBLEM INVOLVING VARIABLE FACESCO.
Am I specifying the variant model incorrectly, or does this warning relate to some idiosyncrasy of the male sample in my dataset? Thank you.
There seems to be a problem with the variable facesco. Does this variable have a negative residual variance in the male group? This is usually the problem.
Xuan Huang posted on Wednesday, May 16, 2007 - 10:03 am
Dear professors: Could you give us some suggestions on testing measurement invariance in Mplus? We want to test whether parenting measures are equivalent across mothers and fathers. Because the mother and the father are from the same family unit, the two groups in comparison are not independent.
Can we take care of non-independence across groups in a multilevel, multiple-group CFA in Mplus? Thanks a lot in advance.
You can do this by taking a multivariate approach where each observation has data for both mothers and fathers. You would then have factors for mothers and factors for fathers and you would place equalities on the measurement parameters to test for measurement invariance. See Example 6.14 which is a growth model and just imagine it without the growth component.
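As a rough sketch of that multivariate setup (the variable and factor names here are hypothetical, not from Example 6.14), with one record per family:

```
MODEL:
  fm BY m1          ! mother factor; first loading fixed at 1 by default
        m2 (L2)
        m3 (L3);
  ff BY f1          ! father factor; same items reported for fathers
        f2 (L2)     ! same labels hold loadings equal across parents
        f3 (L3);
  [m1 f1] (i1);     ! equal intercepts for corresponding items
  [m2 f2] (i2);
  [m3 f3] (i3);
  [fm@0 ff*];       ! mother factor mean fixed at 0, father mean estimated
  fm WITH ff;       ! parents from the same family are correlated
```

The chi-square difference against a model without the equality labels then tests measurement invariance across mothers and fathers.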
I have a related question. How should one test for measurement invariance of a scale across two groups (defined at Level 1, e.g., male/female) when L1 units are nested in L2 units (e.g., class)?
I think that a multiple-group approach would not be ideal because male students are not independent from female students in the same class - and, to my knowledge, grouping=gender, type=complex, & cluster=class, would only adjust for dependence within each gender group, but not across gender groups. Is this correct?
I also don't think that a multivariate approach would work like it does for the above mother/father scenario. Each observation at the class level would have multiple males and females, as opposed to typically a single mother and father at the family level. Maybe aggregating needs to be done?
I used Mplus to test a multiple-group SEM model with complex data (clustering and stratification). I used type=complex to address the complex nature of the data - but am wondering how dependence of units across groups is handled by Mplus and whether my tests of structural invariance constraints, which I think assume the groups are independent, are biased?
It is true that if clusters contain both males and females, the males and females are not independent groups. With TYPE=COMPLEX in Mplus, an adjustment has been made to take this lack of independence into account.
So you are saying that Mplus (with type=complex) not only accounts for cluster-based dependence within groups (e.g., males and females), but also cluster-based dependence between gender groups in a multiple group analysis?
Then with Xuan Huang's situation involving mothers and fathers above, is it appropriate (as an alternative to the multivariate approach) to use a multiple group model (group = parent) with TYPE=COMPLEX and CLUSTER=family, which would typically result in a single observation for each cluster within group? If so, should this lead to equivalent results with the multivariate approach you suggested?
Hi Linda and/or Bengt, I am having a problem testing for invariance of a second-order CFA with 3 first-order factors and one second-order factor using robust estimation. Using ordinary ML, we encounter no problems, though the CFI is lower than what one would like, so we wanted to see what the robust estimate of CFI looked like. When we run the same exact invariance model (across whites vs. minorities) we receive an error message saying that the model is not identified due to a problem involving parameter 77. Parameter 77 is the value in the Alpha vector for the second-order factor in the minority group. Let me know if you need me to send you our input file (and data file). Thanks! Rick Zinbarg
Todd Little and his colleagues propose the "effects coding" method to identify MACS models in various papers. This method involves constraining the factor loadings to average 1 in each group (for each factor) and the intercepts to sum to 0 in each group (again for each factor). Everything else is freely estimated.
Is it possible to implement this in Mplus (I believe so) ? If it is, how would you implement it in Mplus ?
Got it, thanks! Simpler than I thought (if I'm right; if not, correct me). For a single group, that would give: MODEL: f1 BY y1* (c1) y2 (c2) y3 (c3); f2 BY y4* (c4) y5 (c5) y6 (c6); [y1] (c7); [y2] (c8); [y3] (c9); [y4] (c10); [y5] (c11); [y6] (c12); [F1 F2];
Yes, sorry I did not specify it. It does run correctly and provides fit indices equal to those obtained under different constraints (marker variables, latent standardization). I was just wondering if it could be simplified. For instance, I tried f1 BY y1* y2 y3 (c1-c3); and it did not work (told me I had more constraints than variables). But this way, everything is alright.
Thank you very much Linda! For those who followed this discussion, the previous input can thus be simplified to (and it works): MODEL: f1 BY y1* (c1) y2-y3 (c2-c3); f2 BY y4* (c4) y5-y6 (c5-c6); [y1-y6] (c7-c12); [F1 F2];
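For completeness: the labels above only do the effects-coding work once they are tied together in MODEL CONSTRAINT, which was not quoted in this exchange. A sketch of the full single-group input, assuming three indicators per factor:

```
MODEL:
  f1 BY y1* (c1)
        y2-y3 (c2-c3);
  f2 BY y4* (c4)
        y5-y6 (c5-c6);
  [y1-y6] (c7-c12);
  [f1 f2];
MODEL CONSTRAINT:
  c1 = 3 - c2 - c3;      ! loadings on f1 average 1
  c4 = 3 - c5 - c6;      ! loadings on f2 average 1
  c7 = 0 - c8 - c9;      ! intercepts of f1's items sum to 0
  c10 = 0 - c11 - c12;   ! intercepts of f2's items sum to 0
```

In a multiple-group run, the same labels and constraints would be applied in each group's MODEL command.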
I have recently done some multi-group confirmatory factor analysis models to test for measurement invariance across three groups. I tested for factor loadings, intercepts, and residual variances, but while invariance held for the first two, it did not for residual variances.
I know that measurement invariance requires the three to hold, but what does the above mean in terms of the interpretability of the estimates? Because the intercept and the loadings are equal, can the estimates be compared across groups? Is it just that the precision is different? What limits does this pose on comparative analyses?
Dear Mr. and/or Ms. Muthén, I am having a problem checking measurement invariance of the Demand Control Questionnaire in hospital workers of Brazil and Sweden using multiple group analysis. When I performed confirmatory factor analysis for each country separately, using the WLSMV estimator for categorical variables, I found that the best-fitting model had 3 factors (D1 by i1–i5, D2 by i6-i8, and D3 by i9-i10) for both countries, but with different cross-loadings (Brazil: D1-i6 and Sweden: D1-i8). Is it possible to proceed with multiple group analysis? Do these models have equal factorial structure? When I tried multiple group analysis, not considering the cross-loadings, I first fixed the highest loading of each dimension at 1. After that, I used the Mplus default (the loading of the first item of each dimension). However, using this procedure I couldn't check equal loadings for the items I had fixed at 1, so I repeated the procedure fixing each factor variance at 1, but the results were totally different. Was this correct? What procedure should I use? And why do the results differ? Thanks in advance, Yara
If you have one factor loading fixed to one and the factor variance free or the factor variance fixed to one and all factor loadings free, you should get the same chi-square value. If you do not, please send your full outputs and license number to email@example.com.
Thanks, Linda. The chi-square values are very similar, but not equal. I will send you the outputs. Is it correct to fix the factor variance at 1 to check equal loadings? And how about the factorial structure? Do you think I should proceed? Thanks again, Yara
Thank you for your reply. But, will it be possible to estimate response shift using Oort or Schmitt's approach when controlling for covariates (e.g. comorbidities)? Also, I have data for two time points.
Do you recommend to treat time points as multiple groups? This is my unrestricted model which has 6 latent variables (times 1 to 6)
model:
sns1 by lxscl07 lxscl16 lxscl35 lxscl62;
sns2 by lzscl07 lzscl16 lzscl35 lzscl62;
sns3 by lqscl07 lqscl16 lqscl35 lqscl62;
sns4 by lvscl07 lvscl16 lvscl35 lvscl62;
sns5 by lwscl07 lwscl16 lwscl35 lwscl62;
sns6 by lbscl07 lbscl16 lbscl35 lbscl62;
In a next model I added residual covariances, then tested altered items to be fixed. Then I constrained factor loadings to be invariant over time. But for a test of strong factorial invariance I have to constrain the intercepts to be invariant over time... How can I do this?
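One way to impose equal intercepts over time in this model — assuming scl07, scl16, scl35, and scl62 are the same items administered at each of the six waves — is to give corresponding intercepts a common label and free the factor means after the first time point:

```
  [lxscl07 lzscl07 lqscl07 lvscl07 lwscl07 lbscl07] (i1);
  [lxscl16 lzscl16 lqscl16 lvscl16 lwscl16 lbscl16] (i2);
  [lxscl35 lzscl35 lqscl35 lvscl35 lwscl35 lbscl35] (i3);
  [lxscl62 lzscl62 lqscl62 lvscl62 lwscl62 lbscl62] (i4);
  [sns1@0 sns2-sns6*];  ! factor mean fixed at time 1, estimated at times 2-6
```

The chi-square difference against the loading-invariant model then tests strong (scalar) invariance over time.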
Hi, I am running into a puzzling result when trying to compare configural versus metric invariance of factor loadings across groups. I have a model that is essentially configurally invariant (I had to constrain a few loadings to be equal across groups to prevent some Heywood cases) and am comparing it with a metric invariant model in which all the loadings are constrained to be equal across groups. My understanding was that the configural invariant model could not have a larger chi-square than the metric invariant model, but this is precisely the result I am getting. Is my understanding incorrect, or does this seem odd to you too? Thanks! Rick Zinbarg
I am doing a multigroup invariance test. When I constrained parameters (factor loadings or residual variances or factor correlations or all three together), only the unstandardized parameters are constrained (having equal values for the two groups), whereas the standardized parameters remain different in value. Am I doing the right thing?
The standardized coefficients will be different even when the unstandardized coefficients are equal because the standardization is done using the standard deviations for each group not the overall standard deviations.
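In formula terms, for an indicator with unstandardized loading lambda, group-specific factor variance psi_g, and group-specific residual variance theta_g, the StdYX loading in group g is:

```
stdyx(lambda, g) = lambda * sqrt(psi_g) / sqrt(lambda^2 * psi_g + theta_g)
```

So even with lambda constrained equal across groups, different values of psi_g and theta_g produce different standardized loadings.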
I am working on cross validating a Second Order CFA with MPlus. I was wondering if you would be able to guide me through this process. Do you have any suggestions on the preferred order for imposing constraints on a 2nd order CFA when you start with a fully non-constrained model (intercepts & loadings free across groups – but means & intercepts of latents set to zero for identification purposes).
I would test the first-order factors first using the strategy shown in the Topic 1 course handout on the website. See multiple group analysis. Once measurement invariance is established for the first-order factors, I would test it for the second-order factor.
I have tested for measurement invariance among the first-order factors. They cross-validated well; now I am planning to do the second-order CFA. Are the steps for measurement invariance similar to those in a first-order CFA? I believe the means of the first-order factors should be set to 0, is that right?
Step 1 - fully non-constrained model
Step 2 - constrain factor loadings
Step 3 - constrain intercepts and loadings
Step 4 - constrain intercepts, loadings, and residual variances
Step 5 - constrain intercepts, loadings, and residual variances and error variances
Step 6 - constrain intercepts, loadings, and residual variances, error variances, and covariances
Dear Prof. Muthén, I am running an MGCFA with a four-factor model where two factors have only one indicator. I have already checked for metric equivalence and got an acceptable model fit. In the next step, checking for scalar invariance, the modification indices suggest freeing the intercepts of two indicators (y8, y9), both loading on the same factor. If I do so, I get this error message: "THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING PARAMETER 48". Parameter 48 is the alpha value for my latent construct perform. Would you have any suggestions on how I can solve this problem?
Here is my syntax: MODEL: trust BY y1 y2 y3 y4 y5; y2 with y3; y1 with y2;
Hi, I have a multigroup CFA model with UVI identification. I first estimated loadings of items freely in each group, then constrained them to be the same across the two groups, to test for support for measurement invariance. So, in the STANDARDIZED solution for the constrained model, all items have the same loading except for the first item for each factor, which shows up as 1.00 in the first group, with no significance associated with it; but as a real estimate with a significance associated with it for the second group.
My question is: Why is the first item per factor set at 1.000 for the first group in the STANDARDIZED solution for the constrained model, especially if I used UVI and not ULI identification? For example, the standardized solution for the constrained model shows the following.
Actually, I think I figured out the answer to my own question, so you can ignore the previous post! I think the answer is that even when constraining the loadings to be the same across the groups, I still need to have a * (an asterisk) for the first loading for each factor, in order for UVI and not ULI identification to be used!
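For anyone following along, a minimal two-group sketch of that identification (names hypothetical):

```
MODEL:
  f1 BY y1* y2 y3;   ! asterisk frees the first loading (otherwise ULI fixes it at 1)
  f1@1;              ! UVI: factor variance fixed at 1 in the first group
MODEL g2:
  f1*;               ! factor variance free in the second group
```

With no group-specific BY statements, the Mplus multiple-group default holds the loadings equal across groups.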
anonymous posted on Wednesday, March 21, 2012 - 9:30 am
I am testing measurement invariance of a measurement model across two different age groups. However, when I restrict the factor loadings to be invariant across groups using stratified, weighted, and clustered data (WLSMV estimator) with categorical ordinal response scales, I receive the following error: THE MODEL ESTIMATION TERMINATED NORMALLY THE CHI-SQUARE COMPUTATION COULD NOT BE COMPLETED BECAUSE OF A SINGULAR MATRIX.
THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING PARAMETER 136.
THE CONDITION NUMBER IS -0.376D-15.
In addition, I noticed that I obtain somewhat different fit index estimates depending on whether I use Mplus 5.2 or 6.1. Any reason?
When intercepts are free, factor means must be fixed at zero. See the Topic 1 course handout under Multiple Group. All of the inputs for testing measurement invariance are given.
Hans Leto posted on Monday, April 23, 2012 - 11:44 am
I am having problems testing measurement invariance with 2nd order factors. I am following the procedure described in the handout number 1. I do not know how to include a 2nd order factor in the example described in the slide 210.
Could you provide more guidance? Here is an example (F3 is the 2nd order factor):
Model: f1 by y1-y5; f2 by y6-y10; F3 by f1 f2; [f1-f2@0]
Model g2: f1 by y2-y5; f2 by y7-y10; F3 by f1 f2; [f1-f2@0] [y1-y10]
It isn't clear if you get a syntax error or a modeling error. I'll address both.
You may get a syntax error from your statements

[f1-f3@0] ! just the 1st-order factors fixed to 0
[y1-y20]

because you don't end them with semicolons. On the other hand, what you are posting may not be what you use in your run.
You will get a non-identification error because your second group frees up the intercepts for your y's, which means that the factor mean difference in the second-order factor cannot be identified. Leave out the statement [y1-y20] and the default will give you the correct equality across groups of these intercepts.
Hans Leto posted on Tuesday, April 24, 2012 - 2:33 am
Thank you for your answer. But it does not work; it is not a syntax error (sorry). It is an error about "THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED."
I am quite new to testing invariance. My problem is specifying the 2nd-order factors; I already tested the 1st-order factors only and that ran perfectly.
I left out the [y1 y20], but it did not work.
My questions would be:
1. Do I have to fix the 2nd-order factor to 0 in the general model, or only the 1st-order factors (in my example I only fixed the 1st-order factors to 0)?
2. In the specific group (g2), do I have to fix to 0 the 1st-order factors (f1-f3) of my 2nd-order factor?
3. Do I not have to free up the intercepts for the items in g2 ([y1-y20])?
I have tried all of these but it still gives me the same error. I do not know what I am missing.
A quick question re: testing for configural invariance (equal form) that may have a simple explanation: when I test for equal form with both groups, my df and chi-square do not equal the sum of the df and chi-square when I test each group separately (using USEOBSERVATIONS). As an example, my df for each group is 13, yet my df in the equal-form model = 31, not 26. Is this due to a mis-specification somewhere on my part? As an aside, rather than use a marker indicator I am fixing variances to 1, and I was wondering if this might make a difference, though I doubt it. Group sizes are 131 and 128.
I'm hoping to double check that I'm using the correct code for testing measurement invariance of some scales across different racial/ethnic groups. I've been using the video and handouts from Topic 1, but I'm trying to test invariance across 3 groups instead of 2. Would you mind confirming that I've got the correct code for the second test-- without invariance?
Model: BE by m_a3 m_a31 m_a28 m_b50; EE by m_a62; m_a62@0; CE by m_a72 m_a74 m_a79 m_a80; [BE@0 EE@0 CE@0];
Model AfAm: BE by m_a31 m_a28 m_b50; EE by ; CE by m_a74 m_a79 m_a80; [m_a3-m_a80];
Model Latino: BE by m_a31 m_a28 m_b50; EE by ; CE by m_a74 m_a79 m_a80; [m_a3-m_a80];
I have a question concerning measurement invariance, specifically when testing for equal latent variances and latent means (i.e., population heterogeneity). If one chooses to freely estimate all indicators in baseline models (and ID'ing the model by setting the latent variances to 1), is it the case that one must use a separate baseline model in which marker indicators are fixed at 1 and variances are freely estimated if one wants to subsequently test for population heterogeneity (i.e., the baseline model must have variances freely estimated to subsequently test for equal latent variance(s))?
I think you are asking if it matters whether you set the metric of the factors by having a loading fixed at one or the factor variance fixed at one when you later compare structural parameters. If you fix the factor variances to one, you need to do this in only one group, so the test of whether the factor variances are different across groups is a test of factor variances fixed at one in one group and free in the others versus factor variances fixed at one in all groups.
I will try to articulate my question more clearly:
In my case I have a 2-factor model. I identify the model by setting the factor variances to 1 rather than using marker indicators, for both groups. In other words, in the equal-form solution my variances are already fixed to 1 in both groups (to ID the model), so no meaningful comparison could be made via the chi-square test of model fit for the subsequent test of invariant factor variances across groups, right? So to test for factor variance invariance, I would instead use a baseline model where I ID'd the model with marker indicators rather than variances?
In multiple group analysis, you need to fix the factor variances to one in only one group. They can be free in the other groups. A meaningful test of whether the variances differ across groups is a test of factor variances fixed at one in one group and free in the others versus factor variances fixed at one in all groups.
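In input terms, the two nested runs for that test might look like this (factor and item names hypothetical):

```
! H1: factor variance fixed at 1 in the first group, free in the second
MODEL:     f1 BY y1* y2 y3;
           f1@1;
MODEL g2:  f1*;

! H0: factor variance fixed at 1 in both groups (drop the g2 statement)
MODEL:     f1 BY y1* y2 y3;
           f1@1;
```

The chi-square difference between the two runs is then the test of equal factor variances.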
Hi, I am working on a multi-group CFA testing measurement invariance across 5 samples. The hypothesized model is a second-order factor model. Aiming at testing metric invariance, the following syntax failed to work. What is the problem?
Thank you for your advice.
DATA: FILE IS data.prn;
VARIABLE: NAMES ARE e1-e3 m1-m4 d4-d5 d9 g;
USEVARIABLES ARE e1-e3 m1-m4 d4-d5 d9 g;
When the intercepts are free, all factor means must be zero. The mean of g is not fixed at zero.
Fred Danner posted on Friday, February 15, 2013 - 11:33 am
Hi, I am testing second-order measurement invariance, using MLR estimation. Unconstrained model gives reasonable results. Model constraining factor loadings runs fine but cuts the N in each group in half! Why??
UNCONSTRAINED Model: f1 by x1 - x6; f2 by x7 - x10; f3 by x11 - x13; f4 by f1 f2 f3; [f1 - f3 @0]; Model g2: f1 by x2 - x6; f2 by x8 - x10; f3 by x12 - x13; [x1 - x13 f1 - f4 @0];
FACTOR LOADINGS CONSTRAINED Model: f1 by x1 - x6; f2 by x7 - x10; f3 by x11 - x13; f4 by f1 f2 f3; [f1 - f3 @0]; Model g2: [x1 - x13 f1 - f4 @0];
Dear Drs. Muthen, I have two questions about using TYPE=CLUSTER in data sets that have repeated observations of the same individuals.
1) In one data set, I have measures at two time points, six years apart. I am entering data into MPLUS in the long format and specifying my DV as a latent variable, which is regressed on age. I use TYPE=COMPLEX and cluster on subject ID. Is there a name for this sort of analysis?
2) In another data set with 1 to 7 repeated measures of the same individuals, I wanted to compare age groups' (adolescent vs. adult) means on a given variable, even though they are the same individuals. I ran a simple regression with the age groups entered as dummy variables. Again, I imported the data in long format and used TYPE=CLUSTER to cluster on subject ID. Is there any reason that it would be incorrect to draw inferences about the mean differences between the age groups based on this regression?
1. I know of no special name for this model. It is a latent variable model.
2. This sounds okay.
Tom Booth posted on Saturday, March 09, 2013 - 10:42 am
I am trying to fit a second-order invariance model with categorical indicators using the Delta parameterization for 2 groups. I was interested in following the suggestion of Chen, Sousa and West (2005) and testing invariance in the following order:
1: Configural 2: 1st order metric (loadings) 3: 2nd order metric (loadings) 4: 1st order scalar (thresholds) 5: 2nd order scalar (intercepts)
Where the following constraints are used across groups in each model:
1: First and second order loadings free in both groups (first item/factor loadings fixed to identify). Item thresholds free in both groups. First and second order factor means fixed to 0 in both groups. Scale factors fixed at 1 in both groups.
2: As (1) but with first order loadings constrained equal.
3: As (2) but with second order loadings constrained equal.
4: As (3) but with item thresholds constrained equal, first order factor means free in group 2, and scale factors free in group 2.
5: As (4) but with second order factor mean free in group 2 and first order factor means constrained equal.
I am not sure if this sequence is correct and after noting discussion here and notes on the Mplus site on the Millsap and Tien (2004) paper, I fear I have missed something crucial. Any guidance on the matter would be much appreciated.
There are different approaches for binary and polytomous items. With binary items, Step 2 adds scale factor differences across groups, which makes the model not identified when the thresholds are different. With polytomous items, the Millsap-Tien approach can be followed.
Tom Booth posted on Saturday, March 09, 2013 - 11:35 am
Thank you for the very swift response. Just for clarity, my items are polytomous. From your response, I take it that in principle there is no issue following the Chen, Sousa and West sequence, so long as the identification constraints of Millsap-Tien are followed, and that these are different from the basic model specs I note above?
Tom Booth posted on Sunday, March 10, 2013 - 3:35 am
Thanks Bengt. I had thought from the discussions that with the categorical nature of the data and use of WLSMV, loadings and thresholds needed to be considered together, not split as in the above stages.
Tom Booth posted on Sunday, March 10, 2013 - 4:52 am
Sorry, I have a further follow up question. Within the sequence of models above, when thresholds are constrained across groups, scale factors are freed in the second group. I have 3 questions on this;
1 - Is this correct? 2 - Is this necessary? 3 - If one then subsequently releases thresholds (partial invariance), do the associated item scale factors need to be fixed again?
Loadings and thresholds are considered together in the binary case.
Re your 4:52 post:
1. Scale factors are needed whenever you make comparisons of the factors, that is, in the metric and scalar cases.
2. Yes, because scale factors contain 3 things: Loadings, factor variances, and residual variances. So even when loadings are invariant, scale factors won't be - in particular you want to take into account the factor variance variation across groups.
3. You fix scale factors in the configural case because in that case you are not comparing factors across groups.
Tom Booth posted on Sunday, March 10, 2013 - 1:49 pm
I am doing a 4 group test of measurement invariance with ordered categorical items (4-point response set). The measure is invariant on loadings, but not on thresholds.
I would like to examine specific contrasts (ethnicity within gender and gender within ethnicity). Do you know of any problems with using the MODEL CONSTRAINT command to simultaneously examine threshold differences across items per my contrasts of interest? I am thinking it may simplify the analyses. I could not find an example in mplus examples or the literature...
Try VARIANCES = NOCHECK in the DATA command and if that doesn't resolve it, send files to Support.
marlies posted on Tuesday, October 15, 2013 - 6:49 am
Dear Linda and Bengt,
My question is the following, which has been asked before: I would like to test for measurement invariance using the difference in McDonald's non-centrality index (NCI) as recommended by Meade et al (2008) in "Power and Sensitivity of Alternative Fit Indices in Tests of Measurement Invariance" J Appl Psych.
You (Linda) replied that Mplus does not give an NCI index. However, since my sample is very big, I really would like to report it next to the CFI. Do you have any formula or idea how I can derive the NCI index (maybe from other given fit indices)?
You calculate the formula twice: once for your configural invariance model and once for your measurement invariance model. Then you subtract the value of the CI model from the value of the MI model. This is the final McDonald's NCI difference value you can report. The cut-off value for an invariant model differs by the number of factors and items. In Meade and Johnson (2008) you can find a table with these cut-off points on page 586. (Meade, A. W., & Johnson, C. (2008). Power and sensitivity of alternative fit indices in tests of measurement invariance. Journal of Applied Psychology, 93, 568-592.)
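For reference, the index itself is usually defined in that literature, per model, as:

```
Mc = exp( -0.5 * (chi2 - df) / (N - 1) )
```

where chi2 and df are the model chi-square and degrees of freedom and N is the total sample size, so it can be computed by hand from standard Mplus output; the reported statistic is then the difference in Mc between the two models.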
Ian Koh posted on Friday, December 13, 2013 - 12:26 am
Dear Bengt and Linda,
I ran a test for factorial invariance (six-factor structure, with partial measurement invariance) across two groups following the steps mentioned in Byrne (2011). Out of curiosity, I'd like to ask: Are the configural model's parameters estimated using the whole sample, or are they estimated from the group samples? Thanks for your help.
The model for each group is estimated using the data from that group.
Ian Koh posted on Monday, December 16, 2013 - 4:38 pm
Thanks Linda! This question follows from my previous post (dated Friday, 13 December 2013).
Before fitting the configural model, I first fitted two baseline models: one for 5-year-olds and one for 6-year-olds. The 5-year-old group didn't require any modifications to the original model specification; however, the 6-year-old group required one extra cross loading, else there would've been a nonpositive definite matrix message. My configural model converged without any issues when including the extra cross loading for the 6-year-old group (as expected). However, the configural model also converged without any nonpositive definite matrix message when the cross loading was removed.
I also tested for factorial invariance over gender using the same model specification, encountering the same issue for the gender baseline models. (Namely, that the female group required one extra cross loading so that its solution would not have a nonpositive definite matrix error, while the male group required no modifications.) What puzzles me is that this nonpositive definite matrix issue was replicated in the gender configural model simply by removing the extra cross loading for the female group, but specifying the cross loading resulted in an admissible solution.
Why do these two configural models behave differently?
Please send outputs and data if possible. Let's focus on the 5 vs 6 year old runs, so send the 6-year old separate run with and without the cross-loading and the 2-group run of 5 and 6 year olds with and without the cross-loading.
Ellyn L. posted on Thursday, February 06, 2014 - 12:25 pm
Drs. Muthen and Muthen,
I am conducting a multiple group analysis and need to assess mean invariance. I have written syntax that runs successfully, but I'm not sure that I'm including (all of) the correct code. I have consulted both the Mplus user guide and blog posts, and I am looking for some confirmation/input on the syntax I am using to assess mean invariance. I have included the Model input information below. Thanks so much.
ANALYSIS: ESTIMATOR = MLR; MODEL: I ON M So; Su ON I P N; Sh ON I Su So P N; D ON Sh; E ON Sh; M WITH So P N; So WITH P N; P WITH N; MODEL B: [M-E @0];
I am conducting a multiple group confirmatory factor analysis with three comparison groups. The observed variables are categorical. I am using the Theta parameterization. The focus of the analysis is to test for construct invariance between the three groups. I currently have the factor variance, factor loadings and thresholds set to be estimated and equal between all groups (varying within groups, constrained between groups). I would like to do the same for the residual variances. However, as you know, when I use the Theta parameterization, the residual variances for the omitted group are set to 1. This means that to estimate the sought model of construct invariance, I must set the residual variances in the two comparison groups to 1. I have done this.
My question is: When I set the comparison group residual variances to 1, the values for the "Est./S.E." for the residual variances for the two comparison groups reads "Infinity." Is this a problem? Is there a fix for this or a work around? The output contains no fatal error reports.
I am running a multi-group CFA on the nutrition self-efficacy scale and want to test measurement invariance across 9 countries. I have followed the handout/video for Topic 1 and written the following syntax:
Usevariables are Q3r1 Q3r2 Q3r3 Q3r4 Q3r5 weight;
Grouping is Country (1 = Norway 2 = Germany 3 = Spain 4 = Greece 5 = Poland 6 = UK 7 = Irel 8 = NL 9 = Portugal);
Following on from my previous question, a colleague suggested this: test for the equality of the loadings, still allowing item and factor intercepts to vary; then test for the equality of the item intercepts, still allowing factor intercepts to vary; finally, test for the equality of the factor intercepts. How do I allow factor intercepts to vary across countries, and how would this look in syntax?
MODEL: SEffic BY Q3r1@1; SEffic BY Q3r2* (p2); SEffic BY Q3r3* (p3); SEffic BY Q3r4* (p4); SEffic BY Q3r5* (p5); [Q3r1*] (p6); [Q3r2*] (p7); [Q3r3*] (p8); [Q3r4*] (p9); [Q3r5*] (p10); [SEffic@0]; Q3r1 WITH Q3r2* (p12); Q3r1*; Q3r2*; Q3r3*; Q3r4*; Q3r5*; SEffic*;
MODEL 2: SEffic BY Q3r1@1; SEffic BY Q3r2* (p2); SEffic BY Q3r3* (p3); SEffic BY Q3r4* (p4); SEffic BY Q3r5* (p5); [Q3r1*] (p6); [Q3r2*] (p7); [Q3r3*] (p8); [Q3r4*] (p9); [Q3r5*] (p10); [SEffic*]; Q3r1 WITH Q3r2* (p12); Q3r1*; Q3r2*; Q3r3*; Q3r4*; Q3r5*; SEffic*;
Many thanks, Audrey
See the Topic 1 course handout on the website where the input files for testing for measurement invariance are shown under multiple group analysis.
Amy Walzer posted on Tuesday, August 26, 2014 - 7:35 pm
I have tested measurement invariance using the new convenience features in Mplus (i.e., ANALYSIS: MODEL = CONFIGURAL METRIC SCALAR) to see whether there is measurement invariance in my measure between men and women.
Now, I'd like to go on to test other types of invariance outlined by Steinmetz et al., 2009:
1.) Invariance of error variances 2.) Invariance of factor variances 3.) Latent variable means 4.) Factor covariances
How do I go about doing this?
When I try to build the syntax so that it constrains the necessary elements (e.g., error variances, the variance of each of the factors) in my second group (women) to equal those in my first group (men), I get an error stating "Model did not terminate normally. Refer to TECH9 output for more information."
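A minimal sketch of one way to impose these constraints (assuming a hypothetical two-factor model with indicators y1-y6; in a multiple-group run, labels and fixed values placed in the overall MODEL command apply to all groups, which is what imposes the cross-group equality):

```
MODEL:
  f1 BY y1-y3;
  f2 BY y4-y6;
  y1-y6 (r1-r6);     ! 1. residual (error) variances equal across groups
  f1 (v1);
  f2 (v2);           ! 2. factor variances equal across groups
  [f1-f2@0];         ! 3. latent means fixed at zero in both groups
  f1 WITH f2 (c1);   ! 4. factor covariance equal across groups
```

Adding one set of constraints at a time and comparing each model to the previous one with a chi-square difference test is the usual sequence.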
David Vachon posted on Saturday, January 03, 2015 - 10:33 am
I am a novice Mplus user and I am trying to test for measurement invariance on a model with censored (from below) indicators. I have 12 child maltreatment indicators loading on 4 latent variables. The User Guide has a nice step-by-step description of measurement invariance procedures for continuous and categorical outcomes (pp. 484-486), but nothing for censored outcomes. I have been trying to base my models on these recommendations, but I do not know enough about Mplus or censored analysis to feel confident that I am on the right track. Could you outline the steps used to test for measurement invariance with censored outcomes (i.e., what should I fix or free at each step)? I have been using the WLSMV estimator.
Considering invariance testing for a unidimensional model, the estimation terminated normally and the scalar-against-configural test returned p = .0793.
For the configural and scalar models the chi-square p-values are higher than 0.05 and RMSEA is lower than 0.06; however, CFI and TLI were below 0.95, dropping to 0.87 at the scalar level. I found that "... CFI will keep decreasing as a model becomes more restrictive" in invariance testing, and the authors concluded that it would not be useful for such a purpose (Hong et al., Educational and Psychological Measurement, Vol. 63, No. 4, August 2003).
You may want to ask this general question on SEMNET.
Eric Deemer posted on Wednesday, February 18, 2015 - 11:16 am
Hello, I'm using the new convenience feature to test the invariance of a CFA model across 3 groups. How does one know which groups are being compared when there are more than 2 groups? Would I use the ALIGNMENT option here?
Also, with the convenience feature is it no longer necessary to calculate the chi-square difference value by hand? The website says chi-square difference testing is carried out automatically with version 7.1, but I still get a warning in my output saying that the chi-square value cannot be used in the regular way.
All 3 groups are being compared. There is no need for alignment.
This is a warning that is always printed. It does not apply to you in this case.
Hervé CACI posted on Monday, March 16, 2015 - 4:32 am
I came across an error message while conducting measurement invariance testing. I was testing a bifactor model for variance invariance between two age groups. After successfully testing for uniqueness invariance, I dropped the following line from the 2nd group:
Inatt*; Hyper*; Imp*; g*; ! 3 specific factors and 1 g-factor
This is the only modification I made. Any idea?
THE MODEL ESTIMATION TERMINATED NORMALLY
THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING THE FOLLOWING PARAMETER: Parameter 73, Group YOUNG: G BY I10 (equality/label)
Hello again. I am still having some problems with multi-group invariance testing. I am testing a multi-country (6) model and all goes fine except in one country, where 2 latent variables have a correlation higher than 1. That means they are seen as the same in that specific country but not in all the others. Is there a way I can run a model with one of the LVs dropped JUST for that country (group)?
Alternatively, I am not sure it is acceptable (and will work) to constrain the correlation between the two LVs to just below 1 (0.99), e.g. F1 WITH F2@0.99.
Hello all, I am following up on the discussion on how to calculate the NCI based on Mplus output. I am trying to use the formula provided by Marlies, and would like to ask for help regarding the calculation of exp... What values am I supposed to enter in the website that was recommended?
I am sorry if this is not a question suitable for this Mplus discussion, but I found nowhere else to ask...
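For reference, McDonald's (1989) noncentrality index can be computed directly from standard chi-square output; a sketch of the usual formula, where chi-square and df come from the fitted model and N is the total sample size:

```latex
\mathrm{Mc} = \exp\!\left(-\tfrac{1}{2}\,\frac{\chi^2 - df}{N - 1}\right)
```

So the value entered into the exponential is minus one half of (chi-square minus degrees of freedom) divided by (N - 1).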
Yes. Although, more specifically, there is not full invariance. There could still be partial invariance (for all but a few items).
milan lee posted on Friday, August 07, 2015 - 8:15 pm
Hello Dr. Muthen, When testing the partial measurement invariance of factor loadings across three groups, I found that it was achieved after I freed up some loadings for each of the three groups. Specifically, 3 indicators were relaxed in the first group, 2 in the second group and 3 in the third group. Moreover, these freed-up indicators are not identical across groups. Can I still say that the partial metric invariance holds? Thank you!
It is equalities across groups in, say, loading parameters that you should be relaxing. So I don't understand your statement that they are "not identical across groups". Unless you mean that some parameters are unequal across 2 of the 3 groups and some others unequal across 2 other groups.
Tahir Wani posted on Saturday, August 08, 2015 - 11:17 am
Dr Muthen, after the invariance testing I want to do a multi-group analysis, specifically to find group differences across all paths. Does Mplus provide a mean-difference test with any specific input command, or does it need to be computed manually by chi-square difference?
milan lee posted on Saturday, August 08, 2015 - 1:49 pm
Thank you, Dr. Muthen! Yes, I meant that. So my questionnaire has 15 items and the partial metric invariance across 3 groups held when: 4 items/indicators were freed only in group2 (resulting in their loadings in this group to be different from the other two groups) and another 2 items were freed only in group3. Most relevant literature considers that partial invariance holds as long as loadings of 2 items remain equivalent. But are there any criteria to rule partial metric invariance under the circumstance I encountered as above? Again, thank you very much!
The statistical rule is that you want an identified model; the output will tell you if it isn't. The substantive rule is that you want to have sufficiently many equalities that you believe you are measuring the same thing.
I'm testing measurement invariance across gender groups, and statistical fit (CFI, RMSEA) is better in the constrained models. I haven't seen that before, so I don't know if it's OK. If the difference (+) is bigger than the cutoff criteria, should I worry? Or is it OK if the constrained models fit better, no matter by how much?
A warning appears in my output file: "MODINDICES option is not available when performing measurement invariance testing with multiple models with the MODEL option of the ANALYSIS command". I don't understand it, because I've used the MODINDICES option when performing measurement invariance testing before and it worked.
My inp is:
VARIABLE: Names are v6 v394 v395 v396 v397 v398 v399;
They are not available when you test for measurement invariance using the following options:
model = configural metric scalar;
Tahir Wani posted on Monday, October 12, 2015 - 6:26 am
Dear Dr Muthen, I found that my dataset was not normal, so I used the MLR estimator. I checked the model for invariance and found it invariant at the configural, metric and scalar levels for all 5 demographic variables I used. I then ran the model, e.g. for gender (male and female), and got the measurement and structural paths for both males and females. I am now confused: should I report these paths and show how they differ for the other group, or do I have to do a chi-square difference test for the paths? If so, can we do that with MLR, and could you please guide me on how to do it? I am familiar with performing it in ML and AMOS, but since I am new to Mplus I have no clue. Thanks
If you want to compare the paths across groups, you can do chi-square difference testing or use the Wald test of MODEL TEST. How to do difference testing using MLR is described on the website. See How To in the left column.
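For reference, the scaled difference test described on the Mplus website works roughly as follows (a sketch; T0, c0, d0 are the MLR chi-square, scaling correction factor, and degrees of freedom of the nested model, and T1, c1, d1 those of the comparison model):

```latex
cd = \frac{d_0 c_0 - d_1 c_1}{d_0 - d_1}, \qquad
TR_d = \frac{T_0 c_0 - T_1 c_1}{cd}
```

TRd is then referred to a chi-square distribution with d0 - d1 degrees of freedom.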
Masa Vidmar posted on Monday, October 12, 2015 - 2:12 pm
I am running a multiple-group CFA with two factors. I did a weak invariance test across two groups (languages). I did not find support for weak invariance (only configural). Now I would like to test partial invariance by constraining one loading at a time - I would like to examine whether all indicators are a problem or possibly just one. Is that OK to do? Also, I do not know how to write an input for this (for constraining only one loading). Can you help me? Below is the extract from the input for testing weak invariance:
model SLO: pips_lit by Writing @1; by iar (1) by letters_p (2) word read_p (3); pips_mat by sums@1; by numbers (4); by math (5); [Writing] (100); [iar]; [letters_p]; [word] ; [read_p]; [sums] (200); [numbers] ; [math];
model GER: pips_lit by Writing @1; by iar (10) by letters_p (20) word read_p (30); pips_mat by sums@1; by numbers (40); by math (50); [Writing] (100); [iar]; [letters_p]; [word] ; [read_p]; [sums] (200); [numbers] ; [math];
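A minimal sketch of one way to constrain a single loading (assuming the Mplus multiple-group default, in which loadings listed only in the overall MODEL command are held equal across groups, while relisting a loading in a group-specific MODEL command frees it in that group). For example, to hold only the loading of iar equal across groups:

```
MODEL:
  pips_lit BY Writing@1 iar letters_p word read_p;
  pips_mat BY sums@1 numbers math;
MODEL GER:
  ! Relisting a loading here frees it in group GER.
  ! iar is NOT relisted, so its loading stays equal across groups.
  ! (Keep the intercept statements from your weak-invariance input.)
  pips_lit BY letters_p word read_p;
  pips_mat BY numbers math;
```

Repeating this with a different loading held equal each time lets you examine the indicators one at a time.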
Tahir Wani posted on Tuesday, October 13, 2015 - 3:31 am
Dear Sir, yes, exactly, I want to compare the paths across groups, but the thing is that I haven't hypothesized a difference on any particular path; in other words, I want to perform the test on all the structural paths available. So I am not sure what a constrained or nested model means here, and which one would be the baseline model. If you can, please explain or help me with this matter. Regards
I am trying to test measurement invariance across several cultural groups (about 11 countries). I watched the topic 1 video and see that most of the discussion is centered on testing invariance for two groups (male/female). Are there any issues with testing invariance on several groups and if not, do you recommend using MIMIC or multi-group CFA?
Jinni Su posted on Friday, October 30, 2015 - 4:08 pm
Dear Dr. Muthen,
My colleagues and I ran multigroup CFA models to evaluate measurement invariance across race (European American vs African American). And reviewers commented that we may have established invariance across both race and income/SES given the possible confound between race and income/SES. Do you have any advice on how to tackle this issue? Is there a way to evaluate measurement invariance across race while controlling for SES?
Ali posted on Wednesday, February 03, 2016 - 12:17 pm
I have a few questions about CFA and measurement invariance across 11 groups. First, I ran a one-factor model with four nominal indicators 11 times, but the output didn't have values for CFI, TLI, or RMSEA. Is that because of the nominal indicators?
My second question is: how can I do measurement invariance testing for nominal variables? I checked MODEL = CONFIGURAL METRIC SCALAR; however, it doesn't work for nominal variables.
Q2. You would have to specify invariance yourself.
Ali posted on Thursday, February 04, 2016 - 6:48 am
I have tried TECH10 in the OUTPUT command, but it still did not show CFI, TLI, and RMSEA.
As for testing measurement invariance with nominal indicators, I am not sure if my code is specified correctly in the MODEL command. Also, when I ran the code, it showed me "*** ERROR Group AUS has 0 observations. *** ERROR Group CAN has 0 observations. *** ERROR Group GBR has 0 observations."
VARIABLE: NAMES ARE CNT u1-u4; USEVARIABLES ARE u1-u4; GROUPING IS CNT (1=HK 2=JPN 3=KOR 4=QCN 5=SGP 6=TAP 7=AUS 8=CAN 9=GBR 10=NEZ 11=USA); NOMINAL ARE u1-u4; MISSING ARE ALL (7-9);
MODEL: f1 BY u1-u4 ; MODEL JPN: f1 BY u1-u4 ; MODEL KOR: f1 BY u1-u4 ; MODEL QCN: f1 BY u1-u4 ; MODEL SGP: f1 BY u1-u4 ; MODEL TAP: f1 BY u1-u4 ; MODEL AUS: f1 BY u1-u4 ; MODEL CAN: f1 BY u1-u4 ;
I am conducting a 2-group measurement invariance (MI) testing on a model with three latent factors and 13 continuous indicators. I am using MLR as estimator because of non-normality. I have used the MODEL IS CONFIGURAL METRIC SCALAR command and at least partial scalar MI is supported. Now I would like to test strict MI. Hence these questions:
1. Could strict MI be tested using MODEL IS SCALAR command and add restrictions for invariant variance across groups? 2. If yes, do I just add e.g. y1(1);y2(2)... under MODEL to keep variances equal across groups? 3. Is the Chi-square test provided with the MODEL IS CONFIGURAL METRIC SCALAR command already corrected for the MLR estimator? Or do I calculate the SB scaled chi-square by hand? My interpretation of the user guide is that the scaling correction is carried out automatically.
1. I do not think so but you can try. 2. Yes. These are residual variances in the factor model. 3. Yes.
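A minimal sketch of point 2 (assuming the Mplus multiple-group defaults, under which loadings and intercepts are already held equal across groups, i.e. the scalar model; the three-factor structure over y1-y13 is illustrative, not your actual item assignment):

```
MODEL:
  f1 BY y1-y5;
  f2 BY y6-y9;
  f3 BY y10-y13;
  ! Labels in the overall MODEL command hold the residual variances
  ! equal across the two groups, giving the strict model.
  y1-y13 (r1-r13);
```

Comparing this run against the scalar model with an MLR-scaled chi-square difference test then tests strict invariance.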
Paula Vagos posted on Wednesday, March 30, 2016 - 2:41 am
Dear Doctor Muthens, I am testing the measurement invariance of a self-report instrument that uses a five-point Likert-type scale. The data are not multivariate normal, so I am using the MLR estimator. I tested configural, then metric and then scalar invariance, but I got the feedback that because I am using ordinal variables, I should only test for configural and scalar invariance.
It is a bit confusing. Millsap's book on measurement invariance describes this. To quote an email from him: "The invariance constraints on the thresholds in my book on 5.19 are meant to apply to the configural case. Once metric invariance is imposed, one can actually release some constraints on the thresholds and still achieve identification." For this reason it may be safer to only test for configural and scalar invariance.
Paula Vagos posted on Thursday, March 31, 2016 - 2:40 am
Thank you, Doctor Bengt. It is actually very confusing for me...
I would dare ask you a follow-up question. What I should do, then, is: 1. Test for configural invariance with all thresholds constrained to be equal and loadings free 2. Test for scalar invariance with all loadings and thresholds constrained to be equal 3. Free one threshold at a time, trying to achieve a non-significant chi-square difference?
Again, thank you for any help you might give me on this subject!
Thank you again Dr. Bengt, I wasn't aware of this update.
Still, I had understood that Millsap had suggested constraining the thresholds/ intercepts when testing for configural invariance, whereas configural invariance as calculated using the MODEL = configural scalar option does not apply this constraint.
So, if I am understanding correctly, you are suggesting comparing the completely free model (i.e., configural) with the scalar model, disregarding an eventual threshold constraint at the configural level?
I just did measurement invariance testing using the MODEL = configural scalar; PARAMETERIZATION = theta; commands (I cannot check for metric MI because I have variables loading onto more than one factor). The output gives me a non-significant comparison of scalar against configural, which should be good. But the CFI values slightly improve from the configural (.942) to the scalar (.943) model. Aren't they supposed to deteriorate as more constraints are added?
No absolute values of fit statistics can be compared using WLSMV. Chi-square difference testing is done using the DIFFTEST option.
Pia H. posted on Tuesday, April 05, 2016 - 3:32 am
Dear Dr Muthén
thank you for the quick reply. I have a follow-up question: is it possible to change the estimator, or do I have to use DIFFTEST manually? If the latter applies, can I have Mplus estimate the configural and the scalar model and then compare them using DIFFTEST, or do I have to add the constraints to the input manually and then use the DIFFTEST command?
Not sure what you are asking. You can't do DIFFTEST manually - too complex. You can estimate the model with ML and consider likelihood-ratio chi-square difference testing. But you won't get CFI with ML when you have categorical outcomes, which I think was your interest.
Pia H. posted on Tuesday, April 05, 2016 - 6:44 am
Dear Dr Muthén
I might have put too much emphasis on the CFIs in my first post. In fact, I only need to know if my model has configural and scalar measurement invariance. Can I estimate two models using the model = configural command for the first model and model = scalar for the second one and then compare the two based on the chi square significance using the DIFFTEST command?
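For what it's worth, a sketch of the two-run DIFFTEST approach (with WLSMV, the less restrictive model is estimated first and its derivatives saved; the file name deriv.dat is arbitrary):

```
! Run 1: configural model (less restrictive)
ANALYSIS: ESTIMATOR = WLSMV; MODEL = CONFIGURAL;
SAVEDATA: DIFFTEST = deriv.dat;

! Run 2: scalar model (more restrictive), tested against run 1
ANALYSIS: ESTIMATOR = WLSMV; MODEL = SCALAR;
          DIFFTEST = deriv.dat;
```

Note that specifying MODEL = CONFIGURAL SCALAR in a single run performs this difference testing automatically.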
Dear Sir or Madam, I want to test the measurement equivalence of my factor (HIVOEXT) over two groups (1 = patients, 2 = others). Is the syntax below correct for estimating configural, metric and scalar invariance? Thank you very much in advance
USEVARIABLES ARE Q7_1 Q7_4 Q7_9 Q7_10 ;
grouping is PatientCare (1 = pati 2 = others);
ANALYSIS: ESTIMATOR is MLR;
MODEL: HIVOEXT by Q7_1 Q7_4 Q7_9 Q7_10 ;
MODEL OTHERS: HIVOEXT by !Q7_1 Q7_4 Q7_9 Q7_10;
[Q7_1 Q7_4 Q7_9 Q7_10]; [HIVOEXT @0];
USEVARIABLES ARE Q7_1 Q7_4 Q7_9 Q7_10;
grouping is PatientCare (1 = pati 2 = others);
ANALYSIS: ESTIMATOR is MLR;
MODEL: HIVOEXT by Q7_1 Q7_4 Q7_9 Q7_10;
MODEL OTHERS: !HIVOEXT by !Q7_1 !Q7_4 !Q7_9 !Q7_10;
The models to test for measurement invariance for continuous outcomes are shown in the Topic 1 course handout on the website. For categorical outcomes, see the Topic 2 course handout. Both are under the topic multiple group analysis. Chapter 14 describes the models for other situations.
Dear Prof. Muthen, Thank you for your quick reply. I adjusted the syntax accordingly. Adding [HIVOEXT @0]; in the first part of the MODEL command for both configural and metric invariance, results in latent means fixed at zero for both groups. To test scalar invariance, I did not specify [HIVOEXT @0], causing the latent means to differ between groups.
This, however, feels counter-intuitive, as it seems I am freeing the means to obtain scalar invariance – whereas to my understanding the latent means should be fixed in this last model in order to obtain scalar invariance.
I am wondering whether I am mistaken. Should I only specify [HIVOEXT @0] in the scalar model and leave this specification out of the configural and metric models?
The factor means should be allowed to be different in the groups for the scalar model. It is still a much more restrictive model than metric and configural because you hold the intercepts equal across groups.
Margarita posted on Friday, September 09, 2016 - 11:47 am
Dear Dr. Muthen,
I am testing for configural vs. scalar invariance for a longitudinal model with 3-time points. I had a couple of questions, if you have the time. I also posted this to SEMNET as I was not sure if it was appropriate for this forum. If not, please ignore my post.
After freeing some of the thresholds based on MI, the chi-square difference is still significant. I was wondering
1) does the input look okay? 2) After consulting several examples I am not clear as to how one can check whether the factor loadings are invariant across groups? Should I compare the MI in the "By" section and free those that are different in one of the groups?
ANALYSIS: ESTIMATOR = WLSMV; PARAMETERIZATION = THETA; MODEL =CONFIGURAL SCALAR;
MODEL: E1 by S3_T1 S8_T1 S13_T1 S16_T1 S24_T1; E2 by S3_T2 S8_T2 S13_T2 S16_T2 S24_T2; E3 by S3_T3 S8_T3 S13_T3 S16_T3 S24_T3;
C1 by S5_T1 S7_T1 S12_T1 S18_T1 S22_T1; C2 by S5_T2 S7_T2 S12_T2 S18_T2 s22_t2; C3 by S5_T3 S7_T3 S12_T3 S18_T3 S22_T3;
S5_T2 WITH S13_T2; S24_T2 WITH S16_T2;
MODEL FEMALE: S16_T1 with S24_T1; [ S3_T1$1*]; [ S13_T2$1*]; [ S16_T2$1*]; [ S7_T2$1*]; [ S18_T2$1*];
You can impose the metric model described on page 486 of the V7 UG and then look for loadings with large MIs.
Margarita posted on Monday, September 12, 2016 - 4:31 am
Thank you for your reply. I have one last question, if you have the time.
I was reading in the UG that factor loadings and thresholds need to be freed in tandem.
Given that in metric invariance with theta parameterization only the 1st and 2nd indicator of each factor are held equal across groups, then I should free factor loadings with large MI but only thresholds that correspond to those that are held equal across groups, correct?
F1 by item1 item2 item3 item4; (assuming that item1 and item 2 are held equal)
Group 2: F1 by item1* item4*; [item1$1 item1$2]; item1@1;
I am attempting to test a measurement invariance ESEM model of personality disorder symptoms (80 items responded on a 5-point scale) between treatment and non-treatment seeking groups. I am using the short cut method for testing configural and scalar invariance, where I specify:
MODEL IS CONFIGURAL SCALAR;
Mplus does not test for METRIC invariance with categorical outcomes. In order to do this, I am following the example provided in 5.27 (p. 100) of the UG.
However, I'm having trouble relaxing the default equality constraint on the item thresholds. The model runs fine when I relax the first threshold of every item (e.g.,[Y1$1-Y80$1]). But when I do this for all 4 thresholds, I get the following message:
THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING THE FOLLOWING PARAMETER: Parameter 1547, Group G2: F3 BY COBC4
THE CONDITION NUMBER IS -0.476D-15.
Is there an easier, alternative method for testing for metric invariance? If not, what is the correct syntax for relaxing the equality constraint on the item thresholds? Would it be, for example:
yvette xie posted on Wednesday, October 12, 2016 - 12:47 am
I'm doing a study about parental behavior using PBI scale.
I want to demonstrate the scale works equally across mother and father. But the scores of mothers' and fathers' parenting behaviors were obtained from the same children instead of two groups. In this case, can I use multigroup CFA to conduct measurement invariance test by specifying mother and father as two separate groups?
Hi, I'm testing invariance using the MLR estimator. I'd like to use the McDonald's noncentrality index to compare models, which I am able to calculate using the formula posted by marlies (on Tuesday, October 29, 2013 - 9:48 am). It would seem that I should introduce the scaling factor into that calculation, since the comparison of two scaled chi squares does not have a chi square distribution, but I cannot find documentation for how to do that. I know that mplus does not provide this index and therefore may have no opinion, but I'm grateful for any input. Thanks so much, as always!
I am testing invariance of all parameters (successively) in a CFA model between eight groups.
When constraining a particular subset of factor loadings to invariance, I get the following message: NO CONVERGENCE. SERIOUS PROBLEMS IN ITERATIONS. ESTIMATED COVARIANCE MATRIX NON-INVERTIBLE. CHECK YOUR STARTING VALUES.
What is puzzling me is that every other nested model, both less constrained and more constrained (25 models in total) converges just fine.
I have already tried using the estimates from the closest less restricted model as starting values - to no avail.
For my PhD research, I am conducting cross-cultural research in which I study gambling behaviours in two different countries. I want to test the measurement invariance of my measures between these two countries before conducting cross-national comparisons. My dependent variable is categorical and my independent variables are continuous. I am thinking of using the shortcut method for testing configural and scalar invariance, where I specify MODEL IS CONFIGURAL SCALAR (as my outcome variable is binary and categorical). Could you please tell me if I can use this shortcut with all measures at the same time, or should I test the measurement invariance for each measure? Many thanks
Many thanks for your reply. I know that this shortcut method is used for testing measurement invariance of factors, and not observed variables. But in my case I need to test measurement invariance for more than one factor: one factor that is my outcome variable (composed of binary observed indicators) and two other factors that are my independent variables. Therefore, could you please tell me whether I need to write this shortcut separately for each of my factors, that is, first for my dependent variable and then again for each of my independent variables, or can I write this shortcut for all my factors (which constitute all my variables) at the same time? Many thanks
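If it helps, a hedged sketch (the factor and indicator names here are placeholders, not your variables): the MODEL = CONFIGURAL SCALAR setting applies to all measurement parameters of every factor specified in the single MODEL command, so all measures can be tested in one run:

```
VARIABLE:  CATEGORICAL = b1-b4;   ! binary indicators of the outcome factor
ANALYSIS:  ESTIMATOR = WLSMV;
           MODEL = CONFIGURAL SCALAR;
MODEL:
  outcome BY b1-b4;    ! outcome factor
  pred1   BY x1-x4;    ! first predictor factor
  pred2   BY x5-x8;    ! second predictor factor
```

Testing each measure in a separate run is also possible, but then invariance is established per measure rather than for the joint model.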
Artur posted on Sunday, December 11, 2016 - 3:05 am
Dear Mplus Team, in a BSEM multi-group invariance analysis, to specify approximate invariance across all items and all groups, the priors specification looks like this (example from Mplus Web Note 17, Table 12; 6 items, 10 groups):
MODEL PRIORS: DO(1,6) DIFF(lam1_#-lam10_#)~N(0,0.10); DO(1,6) DIFF(nu1_#-nu10_#)~N(0,0.10);
The question is how to specify the partial BSEM model (situation 4 from Web Note 17: freeing non-invariants, BSEM V=0.10 for others, Table 8).
For instance, how do I free the factor loading for item 1 in group 1 (and not for the other groups), item 2 in group 2 and, let's say, item 4 in group 5?
I could not find examples of such analysis on the website nor in manual.
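One possible approach, sketched under the assumption that loadings are labeled lamGROUP_ITEM in the group-specific MODEL commands (I am not certain this matches Web Note 17 exactly): exclude the freed parameter's label from the DIFF prior for that item, so that loading receives no approximate-invariance prior and is estimated freely:

```
MODEL PRIORS:
  ! Item 1: omit group 1's label (lam1_1) from the difference prior,
  ! so its loading is freed; groups 2-10 stay approximately invariant.
  DIFF(lam2_1 lam3_1 lam4_1 lam5_1 lam6_1
       lam7_1 lam8_1 lam9_1 lam10_1)~N(0,0.10);
  ! Remaining items keep the full set of 10 groups.
  DO(2,6) DIFF(lam1_#-lam10_#)~N(0,0.10);
```

The same idea would apply for item 2 in group 2 and item 4 in group 5, writing out each item's DIFF list without the freed label.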
I am attempting to run a MGCFA to test for measurement invariance between males and females. My indicator variables are categorical. My syntax for the configural model is this:
GROUPING = m1 (1=BOYS, 2=GIRLS);
ANALYSIS: ESTIMATOR IS WLSMV;
MODEL: PHYAB BY vip27_r* vip28_r vip29_r vip30_r; PHYAB@1 ; SEXAB BY vip31_r* vip32_r vip33_r vip34_r; SEXAB@1;
MODEL GIRLS: PHYAB BY vip27_r vip28_r vip29_r vip30_r; [vip27_r vip28_r vip29_r vip30_r]; [PHYAB@0]; SEXAB BY vip31_r vip32_r vip33_r vip34_r; [vip31_r vip32_r vip33_r vip34_r]; [SEXAB@0];
However, I always get the following error message The following MODEL statements are ignored: * Statements in Group GIRLS: [ VIP27_R ] [ VIP28_R ] [ VIP29_R ] [ VIP30_R ] [ VIP31_R ] [ VIP32_R ] [ VIP33_R ] [ VIP34_R ]
The model runs for scalar invariance if I remove the [vip...] statements but I would like to test for both configural and metric invariance. What might be my problem? Thanks for your help!
Unfortunately, I now get an error message that tells me that one of my groups has 0 observations. I have checked the dataset and this isn't the case. I also get a warning message that I have five times the amount of missing values that I actually have.
Just to confirm, the default settings in Mplus for the measurement model in multi-group analysis correspond to the scalar model, is that correct? So for observed categorical dependent variables using the default Delta parameterization, this would constrain factor loadings and thresholds to be equal across groups?
Also, I read elsewhere in the discussion group that absolute fit indices cannot be compared for the WLSMV estimator. So does that mean apart from the difftest, it is not appropriate to say that the value of the CFI/TLI/RMSEA/SRMR is slightly better or improved in one model versus another (i.e., based on the numeric estimate and whether it is higher or lower in one model compared to another)?
The metric model is not allowed for ordered categorical (ordinal) variables when a factor indicator loads on more than one factor, when the metric of a factor is set by fixing a factor variance to one, and when Exploratory Structural Equation Modeling (ESEM) is used. p. 486
Why can't we assess the metric model of measurement invariance for a measurement model with ordinal indicator variables and cross-loading items?
What conclusions can we draw about measurement models with cross-loading items?
We don't really recommend the metric model for ordinal variables but instead recommend going straight to the scalar model - and then you can consider invariance also for cross-loadings. For more on the ordinal case, see Roger Millsap's measurement book which goes through various cases.
Ashley posted on Saturday, April 15, 2017 - 9:16 pm
I'm trying to test measurement invariance of a multiple group confirmatory factor analysis clustered by community. I am looking at whether my model varies across different groups (e.g., urban, rural). My model is set up as follows:
Testing for measurement invariance across groups for continuous items is shown in the Topic 1 course video and handout under multiple group analysis. For categorical items, see the Topic 2 course video and handout.
I have reviewed the video and handout and found them to be very helpful for setting up my model; however, especially given that I'm working with imputed datasets, I'm confused as to how I can interpret the findings. More specifically, how do I know if the invariance is significant or if my model needs to be adjusted?
Thank you. Is there any type of similar tests that I could do in Mplus using Type=Imputation? If possible, I want to test my model with the imputed datasets given that all other analyses use imputed datasets.
I am currently testing the measurement invariance of a scale in two different samples. However, when I checked the measurement model of this scale in the total sample (before starting to examine its measurement invariance between the two samples), I added two residual correlations between factor indicators in order to improve the model fit. Could you please tell me how to insert these two correlations in the syntax when checking the invariance of this scale between these two samples? Should I write the following syntax:
Analysis: MODEL IS CONFIGURAL, METRIC, SCALAR
Model: SK by Senk1 Senk2 Senk3 Senk4 Senk6 Senk7 Senk8
Senk2 WITH Senk1;
Senk7 WITH Senk1;
Or should I fix these correlations to 0? What do you suggest I do? Many thanks for all your help,
Ashley - you can use Model Test to test that measurement parameters are equal.
Ashley posted on Saturday, June 10, 2017 - 11:14 am
I ran my CFAs separately for each group (urban/rural). I would now like to see if the factor loadings for the two groups correlate (e.g., does loading 1 of factor 1 of the urban analysis correlate with loading 1 of factor 1 of the rural analysis). Is there code to do this in Mplus?
I'm testing configural invariance across two groups using input from the Topic 1 course handout (pg. 212). However, the code will not run. Below is the input and error. Can you tell me what I am doing wrong?
usevariables are univedu workpay sellprop finanind wkouthm decwkout decmoney dleavehm dfoodeat dwrkpreg drestprg fomhosp fommovie fomrest fomcoffe fommall fomfriend fomparks;
grouping is nationality (1=QT 2=NQ);
Missing are all (-9999);
Model:
handr by univedu-wkouthm; [handr@0];
decision by decwkout-drestprg; [decision@0];
fom by fomhosp-fomparks; [fom@0];
I am testing measurement invariance with a three factor model using categorical items. I tested the metric invariance model and have identified a model with partial metric invariance. However, when I remove the lines of code that overwrite the intercept invariance default and use the difftest command to test this I get an error message that the models are not nested. All I have done is remove these lines of code - otherwise the models are the same. I also tried it the other way around and that produces the same error. Can you tell me what I am doing wrong?
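For clarity, the two-run DIFFTEST workflow we are following is the standard one: the less restrictive model (here, the partial metric model) is run first and its derivatives saved, then the more restrictive model reads that file (the file name below is arbitrary):

```
! Run 1: less restrictive model (partial metric invariance)
ANALYSIS:
  ESTIMATOR = WLSMV;
SAVEDATA:
  DIFFTEST = deriv.dat;

! Run 2: more restrictive model (default threshold invariance restored)
ANALYSIS:
  ESTIMATOR = WLSMV;
  DIFFTEST = deriv.dat;
```

Comparing the TECH1 parameter lists from the two runs might show whether the models differ in more than the intended constraints.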
I'm not clear on whether the run of your first paragraph is on the total sample (putting the 2 groups together), or if it is 2 analyses, one for each group. If it is the latter, the outcome is strange. You could send an example to Support.
Lois Downey posted on Wednesday, October 04, 2017 - 6:35 pm
The run noted in my first paragraph is on the total sample (combining the 2 groups).
Lois Downey posted on Thursday, October 05, 2017 - 7:30 am
However, I've now also run the model for two separate groups, and it is still the case that I get significant misfit for the separate groups, but non-significant misfit for the 2-group model. I will send an example to Support, as you have suggested.
Joao Garcez posted on Sunday, November 19, 2017 - 4:28 am
Dear Drs Linda & Bengt Muthen,
Good morning. I'm testing the longitudinal invariance of a measure, but since n > 2800 at both T1 and T2, I am concerned that the chi-square difference test will be impacted so as to yield significant results irrespective of actual invariance (Kang et al., 2015). I considered using McDonald's NCI formula to compare the configural, metric, and scalar models and bypass the influence of sample size. However, when I used the chi-square model fit and CFI with WLSMV, I got a better fit for the scalar than for the configural model, which, if I understood correctly, should not happen. In previous threads you warned that when using WLSMV, the chi-square model fit/CFI values should not be used and only the chi-square difference test should be considered. Hence:
1 - Would it be correct to assume that the CFI and chi-square model-fit values as calculated via WLSMV cannot be used for the GFI comparisons suggested by Kang et al. (2015)? Is there a way to use the GFI comparisons with WLSMV estimates?
2 - Is there an alternative that you'd suggest that still accounts for group size?
3 - In your guide you suggest freeing thresholds/loadings in tandem when doing Partial MI. Does this mean I should also constrain in tandem and skip metric model invariance and just do configural vs scalar?
I would look at modification indices for the scalar model and see which parameters need to be non-invariant. If the non-invariance is substantively small I would ignore the misfit judged by chi-square because it can be deemed "over-sensitive" due to a large sample (but N=2800 isn't that large with categorical outcomes).
I don't think GFI can be done using WLSMV.
You may also want to ask on SEMNET.
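For example, freeing a loading and its threshold in tandem for one group under WLSMV might look like the following sketch (item and group names hypothetical; with the Delta parameterization, the item's scale factor is typically fixed at 1 in the group where the parameters are freed):

```
MODEL:
  f BY u1-u10;
MODEL g2:
  f BY u3;       ! free the loading of u3 in this group
  [u3$1];        ! free the (first) threshold of u3 in this group
  {u3@1};        ! fix the scale factor of u3 at 1 in this group
```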
Joao Garcez posted on Sunday, November 19, 2017 - 2:15 pm
Dear Dr. Bengt Muthen,
Thank you for your reply, I really appreciate it. If I may ask, in your opinion is there a standard I should consider as "substantively small" non-invariance? Furthermore, is the answer to question 3 something that I should also enquire about on SEMNET? Thank you once again,
Q2: I would just do scalar vs configural and skip metric.
Joao Garcez posted on Monday, November 20, 2017 - 11:25 pm
Dear Dr. Muthen,
Louise Black posted on Thursday, November 23, 2017 - 3:49 am
Dear Drs Muthen,
I am working with a bifactor model with 15 categorical and 4 continuous indicators; the categorical items are from a scale that breaks into 2 residualised specific factors, while the continuous items make up 1 further residualised specific factor. I am using WLSMV and now want to test for invariance, so I have a few questions if you have the time:
1. I assume I should use a four-step (baseline, configural, metric, scalar) approach since continuous items are involved, or should I skip metric as you suggest above?
2. Doing the four steps I find metric but not scalar invariance, and I am unclear how to proceed to test for partial MI here. I presume I should look at modification indices, but should I free loadings with thresholds in tandem, as you suggest in your previous posts and the UG (but only intercepts for the continuous items), or just thresholds and intercepts individually?
3. If I should free loadings alongside thresholds (or intercepts), would I free the loadings on both the general and specific factors of the bifactor model?
4. Finally, would you be able to provide any additional insight into why thresholds are more related to the item probability curve than intercepts?
Settling for metric vs scalar invariance depends on what the model will be used for. If the use is to only compare say factor variances, metric is sufficient. But if the use is to compare factor means, scalar is needed.
I would change both thresholds/intercepts and loadings. And for both general and specific factors. But these are general analysis strategies better discussed on SEMNET.
Peter McEvoy posted on Wednesday, November 29, 2017 - 8:11 pm
Dear Drs Muthen,
We are testing measurement invariance using MODEL = CONFIGURAL METRIC SCALAR for a simple single factor model at one time-point across three groups (with different primary mental disorders).
The output suggests no significant difference when comparing metric against configural, but scalar against configural and scalar against metric are both significant (ps = .01 and .002, respectively).
We now want to locate where exactly the invariance lies. We've requested "modindices(all)" in the OUTPUT line to help us identify sources of strain. However, we receive the following warning message, which suggests that we cannot use modification indices for this purpose using this model.
"MODINDICES option is not available when performing measurement invariance testing with multiple models with the MODEL option of the ANALYSIS command. Request for MODINDICES is ignored."
Can you please advise the next step? Do we need to write out the code without the MODEL = CONFIGURAL METRIC SCALAR code before we can move forward? If so, is the purpose of this MODEL command just to have an initial quick run to see if you need to go further with the full code to identify sources of strain?
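For instance, would writing the scalar model out manually be the way to obtain modification indices? A hypothetical single-factor sketch, relying on the Mplus multiple-group defaults (loadings and intercepts held equal across groups, factor means fixed at zero in the first group and free in the others, which is the scalar model):

```
MODEL:
  f BY y1-y8;        ! defaults give the scalar model across groups
OUTPUT:
  MODINDICES(ALL);   ! modification indices are available in a single-model run
```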
I am performing a CFA with 11 ordinal indicators loading on three factors. Within the sample, I have two groups (Household Head and Women). I am trying to assess measurement invariance across the groups by running an unconstrained and then a constrained model to perform the log-likelihood test to confirm invariance. Due to the ordinal variables, the default estimator for the CFA is WLSMV, which does not produce the log-likelihood function. I tried to specify ESTIMATOR = MLR and receive the following error: "ERROR in ANALYSIS command: ALGORITHM=INTEGRATION is not available for multiple group analysis. Try using the KNOWNCLASS option for TYPE=MIXTURE". Is there any way to use the MLR estimator so that I can perform the log-likelihood test to confirm measurement invariance?
VARIABLE:
NAMES ARE modulename GM GPTN GMB CA CAD SSB SSL TRUSTL TRUSTN TRUSTS TRUSTAD SCR SCSB;
USEVARIABLES = GMB CA CAD SSB SSL TRUSTL TRUSTN TRUSTS TRUSTAD SCR SCSB;
CATEGORICAL = GMB CA CAD SSB SSL TRUSTL TRUSTN TRUSTS TRUSTAD SCR SCSB;
Grouping is modulename (1 = HoH 2 = Women);
Analysis:
MODEL = NOMEANSTRUCTURE;
INFORMATION = EXPECTED;
MODEL:
f1 BY GMB CA CAD;
f2 BY SSB SSL;
f3 BY TRUSTL TRUSTN TRUSTS TRUSTAD SCR SCSB;
Model Women:
f1 BY GMB CA CAD;
f2 BY SSB SSL;
f3 BY TRUSTL TRUSTN TRUSTS TRUSTAD SCR SCSB;
For ML you need to use KNOWNCLASS. See the UG for how to do that.
You can also use DIFFTEST for WLSMV. Again, see UG.
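A minimal KNOWNCLASS sketch for the setup above might look like this (unverified against your data; class-specific statements in the MODEL command would then be added to relax invariance):

```
VARIABLE:
  CLASSES = cg (2);
  KNOWNCLASS = cg (modulename = 1  modulename = 2);
ANALYSIS:
  TYPE = MIXTURE;
  ESTIMATOR = MLR;
  ALGORITHM = INTEGRATION;   ! needed for ML with categorical indicators
MODEL:
  %OVERALL%
  f1 BY GMB CA CAD;
  f2 BY SSB SSL;
  f3 BY TRUSTL TRUSTN TRUSTS TRUSTAD SCR SCSB;
```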
Youhua Wei posted on Friday, May 11, 2018 - 8:42 am
I'm trying to run a Monte Carlo simulation to check the parameter estimation in the alignment (between males and females) for a test with 99 binary items. Here is the code:
MONTECARLO:
  NAMES = l1-l99;
  NGROUPS = 2;
  NOBSERVATIONS = 2(5000);
  NREPS = 10;
  GENERATE = l1-l99(1);
  CATEGORICAL = l1-l99;
ANALYSIS:
  TYPE = MIXTURE;
  ESTIMATOR = ML;
  ALIGNMENT = FIXED;
  PROCESSORS = 8;
  ALGORITHM = INTEGRATION;
MODEL POPULATION:
  %OVERALL%
  f BY l1-l99*1;
  %g#1%
  f BY l1*1.12311;
  .........
  [f*0];
  [l1$1*-1.91102];
  .........
  f*1;
  %g#2%
  f BY l1*0.87657;
  .........
  [f*0.98557];
  [l1$1*-2.01662];
  .........
  f*1.27121;
In the MODEL RESULTS, the population parameter values shown are either -1 (for class 1) or 1 (for class 2) for all thresholds (compared with the average estimates, SEs, etc.), and there is no comparison for the loadings. Are there any problems with my code?
I am trying to assess measurement invariance between two groups in a three-factor model that has a combination of continuous, binary, and ordinal categorical indicators. Specifically, one factor has all continuous indicators, the second factor has continuous indicators plus one binary indicator, and the third factor has continuous, binary, and ordinal indicators. Am I correct in my understanding that I can only assess the configural and scalar models in this case? From the user guide, it is clear that this is true when all indicators are binary, but it is not clear whether it applies when indicators are mixed continuous/binary. Thank you!
I am using MGCFA to examine measurement non-invariance across sex in a personality questionnaire. After global tests, I test each item separately to try to find the problematic items. I would like to report effect sizes for the items' factor loadings and thresholds. Is there a way I can compute these from the output?
Tom Bailey posted on Saturday, December 22, 2018 - 3:06 am
I'm getting slightly different results for my metric model when I run the 'new' overall CONFIGURAL METRIC SCALAR syntax versus when I set up the individual models myself; I'm just wondering why that might be.
For the overall option
MODEL: POSGAI BY RPGS1* RPGS2 RPGS3 RPGS4 RPGS5 RPGS6 RPGS7;
RPGS4 WITH RPGS5;
POSGAI@1; [POSGAI@0];
When I run metric I go (for each of 3 groups)
Model gr1: [RPGS1-RPGS7]; RPGS4 WITH RPGS5;
Tom Bailey posted on Saturday, December 22, 2018 - 3:08 am
The two methods matched for the configural model, by the way (with the BY statement in each group so that loadings could differ as well), just not for metric invariance.
You can compare the two outputs using TECH1, or compare the results, to see where the difference is.
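One detail worth checking in a hand-written metric model is how the factor variance and factor means are handled in each group. A sketch that frees the factor variance in all but the first group while keeping the factor means at zero everywhere (group labels hypothetical; TECH1 from both runs would confirm whether this matches the automated metric model):

```
MODEL:
  POSGAI BY RPGS1* RPGS2-RPGS7;   ! loadings equal across groups by default
  RPGS4 WITH RPGS5;
  POSGAI@1; [POSGAI@0];           ! variance fixed at 1 and mean at 0 in group 1
MODEL gr2:
  [RPGS1-RPGS7];                  ! free the intercepts
  POSGAI;                         ! free the factor variance
MODEL gr3:
  [RPGS1-RPGS7];
  POSGAI;
```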
Ti Zhang posted on Wednesday, January 02, 2019 - 10:02 pm
Hi, Dr. Muthen, I am trying to understand how threshold changes would affect latent mean difference estimates across 2 groups under a configural invariance model, using the Monte Carlo command. What I have found is that, under the data generation model, when I changed the first item's threshold in one group while all other parameters remained the same across groups, the latent mean difference estimates are very biased (given that the population value for the mean difference has been set, e.g., 0.2 or 0.5). When I changed the other items' thresholds, however, the latent mean differences show no bias (pretty close to the population value). I am wondering why the first item is so special. Is the first item treated in some special way in Mplus that would cause this difference? Thank you.
Send the 2 outputs showing bias and no bias to Support along with your license number.
Olev Must posted on Monday, August 26, 2019 - 3:11 am
I am conducting the invariance testing (binary data, WLSMV). In the process of freeing thresholds I got the following message:
THE MODEL ESTIMATION TERMINATED NORMALLY. THE CHI-SQUARE DIFFERENCE TEST COULD NOT BE COMPUTED BECAUSE THE H0 MODEL MAY NOT BE NESTED IN THE H1 MODEL. DECREASING THE CONVERGENCE OPTION MAY RESOLVE THIS PROBLEM. THE OPTIMAL FIT FUNCTION VALUE FOR THE H0 MODEL IS SMALLER THAN THE OPTIMAL FIT FUNCTION VALUE FOR THE H1 MODEL. THE FIT FUNCTION VALUE FOR THE H0 MODEL IS 0.0049894. THE FIT FUNCTION VALUE FOR THE H1 MODEL IS 0.0054055. VERIFY THAT THE MODELS ARE NESTED USING THE NESTED OPTION.
Please suggest how I should continue. The nesting is correct; it worked in previous steps.
You can try the Nested option but from the output message it is clear that the H1 model fits worse than H0 which should not happen for nested models. Check that you have set up the two models correctly. You say that "previous steps" have shown nestedness - check what's different here.
If this doesn't help, send your relevant outputs to Support along with your license number.
We have a question about analysing measurement invariance with longitudinal data. We understand that using the wide data format is an efficient way of testing invariance across time. Our question concerns the interpretation of the global fit indices of our final model. The chi-square, CFI/TLI, etc. values differ between the long-data multigroup analysis and the wide-data analysis. Does it make sense to interpret the absolute CFI/TLI values with the longitudinal wide data format?
Yes, we know that the parameter estimates are the same regardless of the data format (long or wide). But our question relates only to the global fit indices (chi-square, CFI, TLI, etc.). In our example, the parameter estimates are the same in both cases, but the global fit indices are not. The wide-data analysis has many more degrees of freedom because it is based on a larger number of variables (thus the correlation matrix has a different structure in the two approaches).
For the wide data format analysis, we get: Chi-square value = 14107.053, degrees of freedom = 1326, CFI = 0.574, TLI = 0.590.
For the long data format (multigroup) analysis, we get: Chi-square value = 3150.178*, degrees of freedom = 459, CFI = 0.910, TLI = 0.920.
For practical reasons we will do our analyses in wide format, but we feel that the global fit indices are misleading. Would it be possible, with the wide format, to obtain a chi-square that ignores the across-time/group item correlations?
The global fit indices for the wide run naturally also test the suitability of invariance across time, so the 2 tests answer different questions. Regarding your last question: why not use the test from the long run?
Dear Muthens, In our model not all first-order factors are underneath a second-order factor. Instead we have 3 first-order factors under the second-order factor and 2 other separate first-order factors.
F1 by TR1 TR2 TR3; F2 by TR4 TR7 TR8; F3 by TR5 TR6 TR15; F4 by TR9 TR10 TR11; F5 by TR12 TR13 TR14; SOF by F2 F4 F5;
We have performed the first 3 models of measurement invariance but get stuck at the 4th and 5th:
1) Configural invariance - no constraints.
2) Metric 1st-order invariance - factor loadings of first-order factors constrained to be equal across groups.
3) Metric 2nd-order invariance - as (2), plus factor loadings of the second-order factor constrained to be equal.
4) Scalar 1st-order invariance - as (3), plus intercepts of observed variables constrained to be equal and latent means of first-order factors estimated in the second group. Does this mean estimating all first-order latent means (F1, F2, F3, F4 & F5)? Or only the latent means of the first-order factors under the second-order factor (F2, F4 & F5)?
5) Scalar 2nd-order invariance - as (3), plus intercepts of observed variables constrained to be equal, first-order factor intercepts fixed at 0 in both groups, and the second-order factor mean estimated. Again, do we constrain all first-order factor intercepts or only those of the first-order factors underneath the second-order factor?
Hi, I am examining MI across 14 groups and 8 factors. The model output shows acceptable configural and metric invariance, but the fit of the scalar model is significantly worse. I have examined the intercepts from the metric model, but they seem to vary greatly, so it does not seem like freeing individual intercepts would be helpful. I am considering using the alignment method, but I am not sure what syntax should be used.
I should add that my main goal is to be able to compare means across the 14 groups and 8 sub scales - that is why scalar invariance is being explored.
Is the alignment method the best option if the intercepts seem to vary significantly, or do you have another suggestion?
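Would something along these lines be the correct alignment setup? (Variable and factor names below are hypothetical, and the KNOWNCLASS value specification should be checked against the UG for a 14-category grouping variable.)

```
VARIABLE:
  CLASSES = c (14);
  KNOWNCLASS = c (country = 1-14);   ! hypothetical grouping variable; verify value syntax in the UG
ANALYSIS:
  TYPE = MIXTURE;
  ESTIMATOR = MLR;
  ALIGNMENT = FIXED;                 ! FIXED fixes the reference group's factor mean at zero
MODEL:
  %OVERALL%
  f1 BY y1-y5;                       ! hypothetical indicators; repeat for each factor
```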
I am running an MI model with categorical variables using 8 subscales and 13 groups. The output gives me the configural and scalar invariance results but says that the metric model could not converge. Is there a reason this might happen?
I also get errors such as the following for the configural and scalar models (though the model fit indices are good):
WARNING: THE LATENT VARIABLE COVARIANCE MATRIX (PSI) IN GROUP USA IS NOT POSITIVE DEFINITE. THIS COULD INDICATE A NEGATIVE VARIANCE/ RESIDUAL VARIANCE FOR A LATENT VARIABLE, A CORRELATION GREATER OR EQUAL TO ONE BETWEEN TWO LATENT VARIABLES, OR A LINEAR DEPENDENCY AMONG MORE THAN TWO LATENT VARIABLES. CHECK THE TECH4 OUTPUT FOR MORE INFORMATION. PROBLEM INVOLVING VARIABLE PEER.