Hi there, I am attempting to produce a final model which takes into account the measurement non-invariance I have found by sex. We are working with a two-factor solution, and when we regress the two factors on sex we find significant paths for both factors. Using the modification indices, we then identified several observed variables (loading on factor 2) which show significant direct paths from sex. When these direct paths from sex to the observed variables were included in the model, the relationship between factor 2 and sex became non-significant. We then removed the path between factor 2 and sex. However, despite the fact that this path was not significant, removing it had a dramatic effect on the chi-square value, suggesting that the path needed to be in the model. We then added the path back in, but fixed it at zero. This fixed the chi-square problem. We are unsure why this would happen. Why are we getting such dramatically different chi-square values if the paths we remove are non-significant, and why does including a path that is fixed at zero have such a dramatic effect on the chi-square value? Thank you so much for your time, Kaja
I am attempting to compare nested measurement models to test for measurement invariance. I attempted to freely estimate the model's parameters for the "female" sample by using the following commands:
MODEL: support BY facesco@1 facesad; combat WITH support;
MODEL female: support BY facesco@1 facesad; combat WITH support;
However, I am receiving the following warning: THE RESIDUAL COVARIANCE MATRIX (THETA) IN GROUP MALE IS NOT POSITIVE DEFINITE. THIS COULD INDICATE A NEGATIVE VARIANCE/RESIDUAL VARIANCE FOR AN OBSERVED VARIABLE, A CORRELATION GREATER OR EQUAL TO ONE BETWEEN TWO OBSERVED VARIABLES, OR A LINEAR DEPENDENCY AMONG MORE THAN TWO OBSERVED VARIABLES. CHECK THE RESULTS SECTION FOR MORE INFORMATION. PROBLEM INVOLVING VARIABLE FACESCO.
Am I specifying the variant model incorrectly, or does this warning relate to some idiosyncrasy of the male sample in my dataset? Thank you.
There seems to be a problem with the variable facesco. Does this variable have a negative residual variance in the male group? This is usually the problem.
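If the residual variance of facesco does come out negative in the male group, one common (and debatable) workaround - a minimal sketch, not a general recommendation - is to fix it at zero or at a small positive value in that group and refit:

```
MODEL male:
  facesco@0;   ! fix the offending residual variance (a small positive
               ! value such as .01 is an alternative); check that model
               ! fit remains acceptable after this restriction
```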
Xuan Huang posted on Wednesday, May 16, 2007 - 10:03 am
Dear professors: Could you give us some suggestions on testing measurement invariance in Mplus? We want to test whether parenting measures are equivalent across mothers and fathers. Because the mother and the father are from the same family unit, the two groups in comparison are not independent.
Can we take care of non-independence across groups in a multi-level, multi-group CFA in Mplus? Thanks a lot in advance.
You can do this by taking a multivariate approach where each observation has data for both mothers and fathers. You would then have factors for mothers and factors for fathers and you would place equalities on the measurement parameters to test for measurement invariance. See Example 6.14 which is a growth model and just imagine it without the growth component.
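A minimal sketch of that multivariate setup, with hypothetical variable names (m1-m4 for the mother's items, d1-d4 for the father's): each family is one observation, and equal labels impose the invariance constraints across the two parent factors:

```
MODEL:
  mom BY m1-m4 (lam1-lam4);   ! mother's factor
  dad BY d1-d4 (lam1-lam4);   ! same labels: loadings equal across parents
  [m1-m4] (nu1-nu4);          ! intercepts equal across parents
  [d1-d4] (nu1-nu4);
  [mom@0 dad];                ! father's mean estimated relative to mother's
  mom WITH dad;               ! non-independence within the family
```

Relaxing the lam/nu equalities and comparing fits then tests measurement invariance across parents.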
I have a related question. How should one test for measurement invariance of a scale across two groups (defined at Level 1, e.g., male/female) when L1 units are nested in L2 units (e.g., class)?
I think that a multiple-group approach would not be ideal because male students are not independent from female students in the same class - and, to my knowledge, grouping=gender, type=complex, & cluster=class, would only adjust for dependence within each gender group, but not across gender groups. Is this correct?
I also don't think that a multivariate approach would work like it does for the above mother/father scenario. Each observation at the class level would have multiple males and females as opposed to typically a single mother and father at the family level. Maybe aggregating needs to be done?
I used Mplus to test a multiple-group SEM model with complex data (clustering and stratification). I used type=complex to address the complex nature of the data - but am wondering how dependence of units across groups is handled by Mplus and whether my tests of structural invariance constraints, which I think assume the groups are independent, are biased?
It is true that if clusters contain both males and females, the males and females are not independent groups. With TYPE=COMPLEX in Mplus, an adjustment has been made to take this lack of independence into account.
So you are saying that Mplus (with type=complex) not only accounts for cluster-based dependence within groups (e.g., males and females), but also cluster-based dependence between gender groups in a multiple group analysis?
Then with Xuan Huang's situation involving mothers and fathers above, is it appropriate (as an alternative to the multivariate approach) to use a multiple group model (group = parent) with type=cluster - which would typically result in a single observation for each cluster within group? If so, should this lead to equivalent results with the multivariate approach you suggested?
Hi Linda and/or Bengt, I am having a problem testing for invariance of a second-order CFA with 3 first-order factors and one second-order factor using robust estimation. Using ordinary ML, we encounter no problems, though the CFI is lower than what one would like, so we wanted to see what the robust estimate of CFI looked like. When we run the same exact invariance model (across whites vs minorities) we receive an error message saying that the model is not identified due to a problem involving parameter 77. Parameter 77 is the value in the Alpha vector for the second-order factor in the minority group. Let me know if you need me to send you our input file (and data file). Thanks! Rick Zinbarg
Todd Little and his colleagues propose the "effects coding" method to identify MACS models in various papers. This method involves constraining the factor loadings to average 1 in each group (for each factor) and the intercepts to sum to 0 in each group (again for each factor). Everything else is freely estimated.
Is it possible to implement this in Mplus (I believe so) ? If it is, how would you implement it in Mplus ?
Got it! Thanks! Simpler than I thought (if I'm right - if not, correct me). For a single group, that would give: MODEL: f1 BY y1* (c1) y2 (c2) y3 (c3); f2 BY y4* (c4) y5 (c5) y6 (c6); [y1] (c7); [y2] (c8); [y3] (c9); [y4] (c10); [y5] (c11); [y6] (c12); [F1 F2];
Yes, sorry I did not specify it. It does run correctly and provides fit indices equal to those obtained under different constraints (marker variables, latent standardization). I was just wondering if it could be simplified. For instance, I tried f1 BY y1* y2 y3 (c1-c3); and it did not work (it told me I had more constraints than variables). But this way, everything is alright.
Thank you very much Linda! For those who followed this discussion, the previous input can thus be simplified to (and it works): MODEL: f1 BY y1* (c1) y2-y3 (c2-c3); f2 BY y4* (c4) y5-y6 (c5-c6); [y1-y6] (c7-c12); [F1 F2];
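For completeness, the labels only name the parameters; the effects-coding identification itself has to be imposed in MODEL CONSTRAINT - a sketch, assuming three indicators per factor as above:

```
MODEL CONSTRAINT:
  c1 = 3 - c2 - c3;      ! loadings on f1 average 1
  c4 = 3 - c5 - c6;      ! loadings on f2 average 1
  c7 = 0 - c8 - c9;      ! intercepts of f1's indicators sum to 0
  c10 = 0 - c11 - c12;   ! intercepts of f2's indicators sum to 0
```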
I have recently done some multi-group confirmatory factor analysis models to test for measurement invariance across three groups. I tested for factor loadings, intercepts, and residual variances, but while invariance held for the first two, it did not for residual variances.
I know that measurement invariance requires the three to hold, but what does the above mean in terms of the interpretability of the estimates? Because the intercept and the loadings are equal, can the estimates be compared across groups? Is it just that the precision is different? What limits does this pose on comparative analyses?
Dear Mr and/or Ms Muthén, I am having a problem checking for measurement invariance of the Demand Control Questionnaire in hospital workers of Brazil and Sweden using multiple group analysis. When I performed confirmatory factor analysis for each country separately, using the WLSMV estimator for categorical variables, I found that the best-fitting model had 3 factors (D1 by i1-i5, D2 by i6-i8, and D3 by i9-i10) for both countries, but with different cross-loadings (Brazil: D1-i6; Sweden: D1-i8). Is it possible to proceed with multiple group analysis? Do these models have equal factorial structure? When I tried multiple group analysis, not considering the cross-loadings, I first fixed the highest loading of each dimension at 1. After that, I used the Mplus default (fixing the loading of the first item of each dimension at 1). However, using this procedure I couldn't test equality of the loadings for the items I had fixed at 1, so I repeated the procedure fixing each factor variance at 1, but the results were totally different. Was this correct? What procedure should I use? And why do the results differ? Thanks in advance, Yara
If you have one factor loading fixed to one and the factor variance free or the factor variance fixed to one and all factor loadings free, you should get the same chi-square value. If you do not, please send your full outputs and license number to email@example.com.
Thanks, Linda. The chi-square values are very similar, but not equal. I will send you the outputs. Is it correct to fix the factor variance at 1 to check equal loadings? And how about the factorial structure? Do you think I should proceed? Thanks again, Yara
Thank you for your reply. But, will it be possible to estimate response shift using Oort or Schmitt's approach when controlling for covariates (e.g. comorbidities)? Also, I have data for two time points.
Do you recommend to treat time points as multiple groups? This is my unrestricted model which has 6 latent variables (times 1 to 6)
model: sns1 by lxscl07 lxscl16 lxscl35 lxscl62 ; sns2 by lzscl07 lzscl16 lzscl35 lzscl62 ; sns3 by lqscl07 lqscl16 lqscl35 lqscl62 ; sns4 by lvscl07 lvscl16 lvscl35 lvscl62 ; sns5 by lwscl07 lwscl16 lwscl35 lwscl62 ; sns6 by lbscl07 lbscl16 lbscl35 lbscl62 ;
In a next model I added residual covariances, then tested fixing the altered items. Then I constrained factor loadings to be invariant over time. But for a test of strong factorial invariance I have to constrain the intercepts to be invariant over time. How can I do this?
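Equal intercepts over time can be imposed by giving the repeated versions of each item the same label - a sketch using the item names above (the factor-mean handling shown is one common identification choice, not the only one):

```
[lxscl07 lzscl07 lqscl07 lvscl07 lwscl07 lbscl07] (i1);   ! item 07, all 6 waves
[lxscl16 lzscl16 lqscl16 lvscl16 lwscl16 lbscl16] (i2);   ! item 16
[lxscl35 lzscl35 lqscl35 lvscl35 lwscl35 lbscl35] (i3);   ! item 35
[lxscl62 lzscl62 lqscl62 lvscl62 lwscl62 lbscl62] (i4);   ! item 62
[sns1@0];       ! fix the first factor mean for identification
[sns2-sns6*];   ! free the later factor means
```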
Hi, I am running into a puzzling result when trying to compare configural versus metric invariance of factor loadings across groups. I have a model that is essentially configurally invariant (I had to constrain a few loadings to be equal across groups to prevent some Heywood cases) and am comparing it with a metric invariant model in which all the loadings are constrained to be equal across groups. My understanding was that the configural invariant model could not have a larger chi-square than the metric invariant model, but this is precisely the result I am getting. Is my understanding incorrect, or does this seem odd to you too? Thanks! Rick Zinbarg
I am doing a multigroup invariance test. When I constrain parameters (factor loadings, residual variances, factor correlations, or all three together), only the unstandardized parameters are constrained (having equal values for the two groups), whereas the standardized parameters remain different in value. Am I doing the right thing?
The standardized coefficients will be different even when the unstandardized coefficients are equal because the standardization is done using the standard deviations for each group not the overall standard deviations.
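Concretely, the group-specific standardized loading divides by each group's own standard deviations:

```
std_lambda_g = lambda * SD_g(factor) / SD_g(y)

e.g. invariant lambda = 0.8, SD(factor) = 1 in both groups:
     group 1: SD(y) = 1.00  ->  0.8 * 1 / 1.00 = 0.80
     group 2: SD(y) = 1.25  ->  0.8 * 1 / 1.25 = 0.64
```

So equal unstandardized loadings yield unequal standardized ones whenever the group variances differ.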
I am working on cross validating a Second Order CFA with MPlus. I was wondering if you would be able to guide me through this process. Do you have any suggestions on the preferred order for imposing constraints on a 2nd order CFA when you start with a fully non-constrained model (intercepts & loadings free across groups – but means & intercepts of latents set to zero for identification purposes).
I would test the first-order factors first using the strategy shown in the Topic 1 course handout on the website. See multiple group analysis. Once measurement invariance is established for the first-order factors, I would test it for the second-order factor.
I have tested for measurement invariance among the first-order factors. They cross-validated well; now I am planning to do the second-order CFA. Are the steps in measurement invariance similar to those in a first-order CFA? I believe the means of the first-order factors should be set to 0, is that right?
Step 1 - fully non-constrained model
Step 2 - constrain factor loadings
Step 3 - constrain intercepts and loadings
Step 4 - constrain intercepts, loadings, and residual variances
Step 5 - constrain intercepts, loadings, residual variances, and error variances
Step 6 - constrain intercepts, loadings, residual variances, error variances, and covariances
Dear Prof. Muthen, I am running an MGCFA with a four-factor model where two factors have only one indicator. I have already checked for metric equivalence and got an acceptable model fit. In the next step, checking for scalar invariance, the modification indices suggest freeing the intercepts of two indicators (y8, y9), both loading on the same factor. If I do so I get this error message: "THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING PARAMETER 48". Parameter 48 is the alpha value for my latent construct perform. Would you have any suggestions on how I can solve this problem?
Here is my syntax: MODEL: trust BY y1 y2 y3 y4 y5; y2 with y3; y1 with y2;
Hi, I have a multigroup CFA model with UVI identification. I first estimated loadings of items freely in each group, then constrained them to be the same across the two groups, to test for support for measurement invariance. So, in the STANDARDIZED solution for the constrained model, all items have the same loading except for the first item for each factor, which shows up as 1.00 in the first group, with no significance associated with it; but as a real estimate with a significance associated with it for the second group.
My question is: Why is the first item per factor set at 1.000 for the first group in the STANDARDIZED solution for the constrained model, especially if I used UVI and not ULI identification? For example, the standardized solution for the constrained model shows the following.
Actually, I think I figured out the answer to my own question, so you can ignore the previous post! I think the answer is that even when constraining the loadings to be the same across the groups, I still need to have a * (an asterisk) for the first loading for each factor, in order for UVI and not ULI identification to be used!
anonymous posted on Wednesday, March 21, 2012 - 9:30 am
I am testing measurement invariance of a measurement model across two different age groups. However, when I restrict the factor loadings to be invariant across groups using stratified, weighted, and clustered data (WLSMV estimator) with categorical ordinal response scales, I receive the following error: THE MODEL ESTIMATION TERMINATED NORMALLY. THE CHI-SQUARE COMPUTATION COULD NOT BE COMPLETED BECAUSE OF A SINGULAR MATRIX.
THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING PARAMETER 136.
THE CONDITION NUMBER IS -0.376D-15.
In addition, I noticed that I obtain somewhat different fit index estimates depending on whether I use Mplus 5.2 or 6.1. Any reason?
When intercepts are free, factor means must be fixed at zero. See the Topic 1 course handout under Multiple Group. All of the inputs for testing measurement invariance are given.
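In input terms, a generic sketch (factor and variable names hypothetical): whenever a group's intercepts are freed, that group's factor mean has to be fixed at zero:

```
MODEL:
  f BY y1-y5;   ! intercepts equal across groups by default
MODEL g2:
  [y1-y5];      ! free the intercepts in group 2 ...
  [f@0];        ! ... so fix the factor mean in group 2 at zero
```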
Hans Leto posted on Monday, April 23, 2012 - 11:44 am
I am having problems testing measurement invariance with 2nd-order factors. I am following the procedure described in handout number 1. I do not know how to include a 2nd-order factor in the example described on slide 210.
Could you provide me more guidance. I describe an example (F3 is the 2nd order factor):
Model: f1 by y1-y5; f2 by y6-y10; F3 by f1 f2; [f1-f2@0]
Model g2: f1 by y2-y5; f2 by y7-y10; F3 by f1 f2; [f1-f2@0] [y1-y10]
It isn't clear if you get a syntax error or a modeling error. I'll address both.
You may get a syntax error by your statements
[f1-f3@0]   !just 1st factor of the 2nd order fixed to 0
[y1-y20]
because you don't end them with semicolons. On the other hand, what you are posting may not be what you use in your run.
You will get a non-identification error because your second group frees up the intercepts for your y's which means that the factor mean difference in the second-order factor cannot be identified. Leave out the statement
and the default will give you the correct equality across groups of these intercepts.
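Putting that advice together, the run might be sketched as follows (a sketch of one reading of the thread: the item intercepts are then held equal across groups by the default, and the F3 mean is free in the second group by default):

```
MODEL:
  f1 BY y1-y5;
  f2 BY y6-y10;
  F3 BY f1 f2;
  [f1-f2@0];
MODEL g2:
  f1 BY y2-y5;   ! first-order loadings free (markers y1, y6 stay fixed at 1)
  f2 BY y7-y10;
  F3 BY f2;      ! second-order loading free; f1 remains the marker
```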
Hans Leto posted on Tuesday, April 24, 2012 - 2:33 am
Thank you for your answer. But it does not work; it is not a syntax error (sorry). It is the error "THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED."
I am quite new to testing invariance. My problem is specifying the 2nd-order factors, because when I tested invariance for the 1st-order factors only, it ran perfectly.
I left out the [y1-y20], but it did not work.
My questions would be:
1. Do I have to fix to 0 the 2nd-order factor in the general model, or only the 1st-order factors (in my example I only fixed the 1st-order factors to 0)?
2. In the specific group (g2), do I have to fix to 0 the 1st-order factors (f1-f3) of my 2nd-order factor?
3. Do I not have to free up the intercepts for the items in g2 ([y1-y20])?
I have tried all these but still gives me the same error. I do not know what I am missing.
a quick question re: testing for configural invariance (equal form) that may have a simple explanation: when I test for equal form with both groups, my df and chi-square do not equal the sum of the df and chi-square when I test each group separately (using USEOBSERVATIONS). As an example, my df for each group is 13, yet my df in the model for equal form = 31, not 26. Is this due to a mis-specification somewhere on my part? As an aside, rather than use a marker indicator I am fixing variances to 1, and I was wondering if this might make a difference, though I doubt it. Group sizes are 131 and 128.
I'm hoping to double check that I'm using the correct code for testing measurement invariance of some scales across different racial/ethnic groups. I've been using the video and handouts from Topic 1, but I'm trying to test invariance across 3 groups instead of 2. Would you mind confirming that I've got the correct code for the second test-- without invariance?
Model: BE by m_a3 m_a31 m_a28 m_b50; EE by m_a62; m_a62@0; CE by m_a72 m_a74 m_a79 m_a80; [BE@0 EE@0 CE@0];
Model AfAm: BE by m_a31 m_a28 m_b50; EE by ; CE by m_a74 m_a79 m_a80; [m_a3-m_a80];
Model Latino: BE by m_a31 m_a28 m_b50; EE by ; CE by m_a74 m_a79 m_a80; [m_a3-m_a80];
I have a question concerning measurement invariance, specifically when testing for equal latent variances and latent means (i.e., population heterogeneity). If one chooses to freely estimate all indicators in baseline models (and ID'ing the model by setting the latent variances to 1), is it the case that one must use a separate baseline model in which marker indicators at 1 and variances are freely estimated if one wants to subsequently test for population heterogeneity (i.e., the baseline model must have variances freely estimated to subsequently test for equal latent variance(s))?
I think you are asking whether it matters if you set the metric of the factors by fixing a loading at one or fixing the factor variance at one when you later compare structural parameters. If you fix the factor variances to one, you need to do this in only one group, so the test of whether the factor variances differ across groups is a test of factor variances fixed at one in one group and free in the others versus factor variances fixed at one in all groups.
I will try to articulate my question more clearly:
In my case I have a 2-factor model. I identify the model by setting the factor variances to 1 rather than the marker indicators for both groups. In other words, in the equal form solution my variances are already fixed to 1 in both groups (to ID the model) so no meaningful comparison could be made via the chi-square test of model fit for the subsequent test of invariant factor variances across groups, right? So to test for factor invariance, I would use a baseline model instead where I ID'd the model by using marker indicators rather than variances?
In multiple group analysis, you need to fix the factor variances to one in only one group. They can be free in the other groups. A meaningful test of whether the variances differ across groups is a test of factor variances fixed at one in one group and free in the others versus factor variances fixed at one in all groups.
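As a sketch with generic names, run the model once without and once with the group-specific statement; the chi-square difference between the two runs tests equality of the factor variances:

```
! H0 model: factor variance fixed at one in every group
MODEL:
  f BY y1* y2-y5;
  f@1;
! H1 model: add this statement to free the variance in group 2
MODEL g2:
  f*;
```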
Hi, I am working on a multi-group CFA for testing measurement invariance across 5 samples. The hypothesized model is a two order factor model. Aiming at testing metric invariance, the following syntax failed to work. What is the problem?
Thank you for your advice.
DATA: FILE IS data.prn;
VARIABLE: NAMES ARE e1-e3, m1-m4,d4-d5,d9, g; USEVARIABLE ARE e1-e3, m1-m4,d4-d5,d9, g;
When the intercepts are free, all factor means must be fixed at zero. The mean of g is not fixed at zero.
Fred Danner posted on Friday, February 15, 2013 - 11:33 am
Hi, I am testing second-order measurement invariance, using MLR estimation. Unconstrained model gives reasonable results. Model constraining factor loadings runs fine but cuts the N in each group in half! Why??
UNCONSTRAINED Model: f1 by x1 - x6; f2 by x7 - x10; f3 by x11 - x13; f4 by f1 f2 f3; [f1 - f3 @0]; Model g2: f1 by x2 - x6; f2 by x8 - x10; f3 by x12 - x13; [x1 - x13 f1 - f4 @0];
FACTOR LOADINGS CONSTRAINED Model: f1 by x1 - x6; f2 by x7 - x10; f3 by x11 - x13; f4 by f1 f2 f3; [f1 - f3 @0]; Model g2: [x1 - x13 f1 - f4 @0];
Dear Drs. Muthen, I have two questions about using TYPE=CLUSTER in data sets that have repeated observations of the same individuals.
1) In one data set, I have measures at two time points, six years apart. I am entering data into MPLUS in the long format and specifying my DV as a latent variable, which is regressed on age. I use TYPE=COMPLEX and cluster on subject ID. Is there a name for this sort of analysis?
2) In another data set with 1 to 7 repeated measures of the same individuals, I wanted to compare age groups' (adolescent vs. adult) means on a given variable, even though they are the same individuals. I ran a simple regression with the age groups entered as dummy variables. Again, I imported the data in long format and used TYPE=CLUSTER to cluster on subject ID. Is there any reason that it would be incorrect to draw inferences about the mean differences between the age groups based on this regression?
1. I know of no special name for this model. It is a latent variable model.
2. This sounds okay.
Tom Booth posted on Saturday, March 09, 2013 - 10:42 am
I am trying to fit a second-order invariance model with categorical indicators using the delta method for 2 groups. I was interested in following the suggestion of Chen, Sousa and West (2005) and testing invariance in the following order;
1: Configural 2: 1st order metric (loadings) 3: 2nd order metric (loadings) 4: 1st order scalar (thresholds) 5: 2nd order scalar (intercepts)
Where the following constraints are used across groups in each model:
1: First and second order loadings free in both groups (first item/factor loadings fixed to identify). Item thresholds free in both groups. First and second order factor means fixed to 0 in both groups. Scale factors fixed at 1 in both groups.
2: As (1) but with first order loadings constrained equal.
3: As (2) but with second order loadings constrained equal.
4: As (3) but with item thresholds constrained equal, first order factor means free in group 2, and scale factors free in group 2.
5: As (4) but with second order factor mean free in group 2 and first order factor means constrained equal.
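Under the Delta parameterization, the scalar step (4) might be sketched as follows for a two-group case (one first-order factor shown, names hypothetical; the second-order part is omitted for brevity):

```
MODEL:
  f1 BY u1-u4;   ! loadings and thresholds equal across groups by default
  [f1@0];
  {u1-u4@1};     ! scale factors fixed at one in group 1
MODEL g2:
  [f1*];         ! first-order factor mean free in group 2
  {u1-u4*};      ! scale factors free in group 2
```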
I am not sure if this sequence is correct and after noting discussion here and notes on the Mplus site on the Millsap and Tien (2004) paper, I fear I have missed something crucial. Any guidance on the matter would be much appreciated.
There are different approaches for binary and polytomous items. With binary items, Step 2 adds scale factor differences across groups, which makes the model not identified when the thresholds are different. With polytomous items, the Millsap-Tien approach can be followed.
Tom Booth posted on Saturday, March 09, 2013 - 11:35 am
Thank you for the very swift response. Just for clarity, my items are polytomous. From your response, I take it that in principle there is no issue following the Chen, Sousa and West sequence, so long as the identification constraints of Millsap-Tien are followed, and that these are different from the basic model specs I note above?
Tom Booth posted on Sunday, March 10, 2013 - 3:35 am
Thanks Bengt. I had thought from the discussions that with the categorical nature of the data and use of WLSMV, loadings and thresholds needed to be considered together, not split as in the above stages.
Tom Booth posted on Sunday, March 10, 2013 - 4:52 am
Sorry, I have a further follow up question. Within the sequence of models above, when thresholds are constrained across groups, scale factors are freed in the second group. I have 3 questions on this;
1- Is this correct? 2- Is this necessary? 3- If one then subsequently releases thresholds, partial invariance, do the associated item scale factors need to be fixed again?
Loadings and thresholds are considered together in the binary case.
Re your 4:52 post:
1. Scale factors are needed whenever you make comparisons of the factors, that is, in the metric and scalar cases.
2. Yes, because scale factors contain 3 things: Loadings, factor variances, and residual variances. So even when loadings are invariant, scale factors won't be - in particular you want to take into account the factor variance variation across groups.
3. You fix scale factors in the configural case because in that case you are not comparing factors across groups.
Tom Booth posted on Sunday, March 10, 2013 - 1:49 pm
I am doing a 4 group test of measurement invariance with ordered categorical items (4-point response set). The measure is invariant on loadings, but not on thresholds.
I would like to examine specific contrasts (ethnicity within gender and gender within ethnicity). Do you know of any problems with using the MODEL CONSTRAINT command to simultaneously examine threshold differences across items per my contrasts of interest? I am thinking it may simplify the analyses. I could not find an example in mplus examples or the literature...
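One hypothetical sketch of that idea (group and item names assumed): label the group-specific thresholds and define each contrast as a NEW parameter in MODEL CONSTRAINT, whose estimate and standard error then give a direct test of the difference:

```
MODEL g1: [u1$1] (t1_1);
MODEL g2: [u1$1] (t1_2);
MODEL g3: [u1$1] (t1_3);
MODEL g4: [u1$1] (t1_4);
MODEL CONSTRAINT:
  NEW(d12);
  d12 = t1_1 - t1_2;   ! e.g., ethnicity difference within one gender
```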
Try VARIANCES = NOCHECK in the DATA command and if that doesn't resolve it, send files to Support.
marlies posted on Tuesday, October 15, 2013 - 6:49 am
Dear Linda and Bengt,
My question is the following, which has been asked before: I would like to test for measurement invariance using the difference in McDonald's non-centrality index (NCI) as recommended by Meade et al (2008) in "Power and Sensitivity of Alternative Fit Indices in Tests of Measurement Invariance" J Appl Psych.
You (Linda) replied that Mplus does not give an NCI index. However, since my sample is very big, I really would like to report it next to the CFI. Do you have any formula or idea how I can derive the NCI (maybe from other given fit indices)?
You calculate the formula twice: once for your configural invariance model and once for your measurement invariance model. Then, you subtract the value of the CI model from the value of the MI model. This is the final McDonald's NCI difference value you can report. The cut-off value for an invariant model differs by number of factors and items. In the article by Meade and Johnson (2008) you can find a table with these cut-off points on page 586. (Meade, AW & Johnson, C (2008). Power and sensitivity of alternative fit indices in tests of measurement invariance. Journal of Applied Psychology, 93, 568-592)
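For reference, McDonald's noncentrality index is commonly computed from the model chi-square as (following McDonald, 1989; some authors use N - 1 in place of N):

```
NCI = exp( -0.5 * (chi-square - df) / N )
```

The difference described above is then NCI(MI model) - NCI(configural model).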
Ian Koh posted on Friday, December 13, 2013 - 12:26 am
Dear Bengt and Linda,
I ran a test for factorial invariance (six-factor structure, with partial measurement invariance) across two groups following the steps mentioned in Byrne (2011). Out of curiosity, I'd like to ask: Are the configural model's parameters estimated using the whole sample, or are they estimated from the group samples? Thanks for your help.
The model for each group is estimated using the data from that group.
Ian Koh posted on Monday, December 16, 2013 - 4:38 pm
Thanks Linda! This question follows from my previous post (dated Friday, 13 December 2013).
Before fitting the configural model, I first fitted two baseline models: one for 5-year-olds and one for 6-year-olds. The 5-year-old group didn't require any modifications to the original model specification; however, the 6-year-old group required one extra cross loading, else there would've been a nonpositive definite matrix message. My configural model converged without any issues when including the extra cross loading for the 6-year-old group (as expected). However, the configural model also converged without any nonpositive definite matrix message when the cross loading was removed.
I also tested for factorial invariance over gender using the same model specification, encountering the same issue for the gender baseline models. (Namely, that the female group required one extra cross loading so that its solution would not have a nonpositive definite matrix error, while the male group required no modifications.) What puzzles me is that this nonpositive definite matrix issue was replicated in the gender configural model simply by removing the extra cross loading for the female group, but specifying the cross loading resulted in an admissible solution.
Why do these two configural models behave differently?
Please send outputs and data if possible. Let's focus on the 5 vs 6 year old runs, so send the 6-year old separate run with and without the cross-loading and the 2-group run of 5 and 6 year olds with and without the cross-loading.
Ellyn L. posted on Thursday, February 06, 2014 - 12:25 pm
Drs. Muthen and Muthen,
I am conducting a multiple group analysis and need to assess mean invariance. I have written syntax that runs successfully, but I'm not sure that I'm including (all of) the correct code. I have consulted both the Mplus user guide and blog posts, and I am looking for some confirmation/input on the syntax I am using to assess mean invariance. I have included the MODEL input information below. Thanks so much.
ANALYSIS: ESTIMATOR = MLR; MODEL: I ON M So; Su ON I P N; Sh ON I Su So P N; D ON Sh; E ON Sh; M WITH So P N; So WITH P N; P WITH N; MODEL B: [M-E @0];
I am conducting a multiple group confirmatory factor analysis with three comparison groups. The observed variables are categorical. I am using the Theta parameterization. The focus of the analysis is to test for construct invariance between the three groups. I currently have the factor variance, factor loadings and thresholds set to be estimated and equal between all groups (varying within groups, constrained between groups). I would like to do the same for the residual variances. However, as you know, when I use the Theta parameterization, the residual variances for the omitted group are set to 1. This means that to estimate the sought model of construct invariance, I must set the residual variances in the two comparison groups to 1. I have done this.
My question is: When I set the comparison group residual variances to 1, the values for the "Est./S.E." for the residual variances for the two comparison groups reads "Infinity." Is this a problem? Is there a fix for this or a work around? The output contains no fatal error reports.