

Hi, I've conducted a multiple group analysis testing factor loading invariance across 23 groups using a 2-factor CFA with MLM estimation. The corrected chi-square difference test comparing this constrained model to an unconstrained model was nonsignificant, suggesting that the factor loadings are invariant across groups. My question is about reporting results. I would like to include a figure showing the standardized factor loadings and factor correlations. But the standardized loadings differ across groups despite the fact that I imposed cross-group equality constraints on the loadings (the parameter specification indicates that the constraints were correctly imposed, and the estimates for the unstandardized loadings are the same across groups). I understand that the standardized loadings vary because of the variances used to calculate them, but I'm afraid that stating that I imposed equality constraints and then showing in the figure that the loadings still differ across groups will confuse readers, especially where the standardized loadings differ by what would appear to be a nontrivial amount (e.g., a difference of 0.07). I'm wondering if anyone has a suggestion for how to handle this apparent inconsistency. Thanks, Jen 


I prefer to work with raw coefficients and would definitely not report standardized coefficients for a multiple group analysis for the reasons you state. If you must report them, then I would include the explanation you have given. Maybe someone else has an opinion on this. 

kberon posted on Saturday, December 31, 2005  12:29 pm



I've also been interested in standardized coefficients across multiple groups. Lisrel has a feature that allows you to weight each group covariance matrix so that you end up having a common scale for all groups. This allows reporting a single "beta." I was wondering if Mplus had this facility? It sounds, from Linda's comment, that it doesn't but I'd like to confirm that. Thanks....Kurt 


No, Mplus does not have this facility. 

finnigan posted on Friday, March 23, 2007  9:27 am



Linda/Bengt, I have a five-factor model which I'm testing across two groups. I have a suspicion that the five factors will not replicate across groups, i.e., in one group 4 factors may emerge and in the second group 5 factors may emerge. If this is the case, am I correct in saying that within-group comparisons can be made, but between-group comparisons on factor means cannot? Or is it more appropriate to estimate one model for both groups and then test the factor structure? Any refs you may have would be appreciated. Thanks 


You should look at the factor structure in each group separately as a first step. If they don't have the same number of factors, then going on to look at them together is not appropriate unless four of the five factors are the same, which would be fairly unusual, I think. 

finnigan posted on Friday, March 23, 2007  3:34 pm



If the same number of factors is not present across groups, is it reasonable to carry out a multi-group CFA within each separate group and make within-group comparisons on latent means once measurement invariance is present? Thanks 


If you don't have the same number of factors in each group, you can look at the groups separately. I'm not sure what you mean by doing a multiple group CFA for each separate group since you would then be looking at a single group. 

Brian Hall posted on Monday, July 27, 2009  3:01 pm



Dear list, I am testing a two-group CFA model, testing for metric and configural invariance. I am extending this model to establish longitudinal invariance over three waves of data collection. I am testing several correlated models and several hierarchical models. Does anyone happen to have example syntax that can help with this model? I am not sure how to fix the paths to be equal in the case of metric invariance, or how to compare the models in the case of configural invariance. Any assistance would be much appreciated. Brian 


See the Topic 1 course handout where all of the steps for testing for measurement invariance using multiple group analysis are given including inputs. See the Topic 4 course handout where the first steps in the multiple indicator growth model show the steps for testing for longitudinal measurement invariance. 

Joe posted on Thursday, March 18, 2010  1:33 pm



Hi, I have 3 latent factors, each measured BY 16 dichotomous observed items. I would like to run an invariance analysis with the same factors across groups, but with factor loadings, unique error variances, and item thresholds freely estimated. I am getting an error that reads: "THE MODEL ESTIMATION TERMINATED NORMALLY. THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED." Here is part of my syntax:

GROUPING IS disabil (0=RegEd, 1=SpEd);
ANALYSIS: PARAMETERIZATION = THETA;
MODEL:
numop BY fp11@1 fp12 fp13 fp14 fp15 fp16 fp17 fp18 fp19 fp110 fp111 fp112 fp113 fp114 fp115 fp116;
geo BY fp21@1 fp22 fp23 fp24 fp25 fp26 fp27 fp28 fp29 fp210 fp211 fp212 fp213 fp214 fp215 fp216;
numopalg BY fp31@1 fp32 fp33 fp34 fp35 fp36 fp37 fp38 fp39 fp310 fp311 fp312 fp313 fp314 fp315 fp316;
MODEL SpEd:
numop BY fp11@1 fp12 fp13 fp14 fp15 fp16 fp17 fp18 fp19 fp110 fp111 fp112 fp113 fp114 fp115 fp116;
geo BY fp21@1 fp22 fp23 fp24 fp25 fp26 fp27 fp28 fp29 fp210 fp211 fp212 fp213 fp214 fp215 fp216;
numopalg BY fp31@1 fp32 fp33 fp34 fp35 fp36 fp37 fp38 fp39 fp310 fp311 fp312 fp313 fp314 fp315 fp316;

Can you tell how my model is misspecified? 


You should not be freeing the factor loading for the first indicator in the group-specific MODEL command. This causes the model not to be identified. See pages 398-401 for the models to use for testing measurement invariance for categorical outcomes. See also the Topic 2 course handout. 
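As a sketch of that general rule (with hypothetical factor and indicator names, not the poster's model): in the group-specific MODEL command, mention only the loadings you want freed, and leave the first indicator out so its loading stays fixed at one for identification.

```
MODEL:
  f1 BY y1-y5;     ! y1's loading is fixed at 1 by default
MODEL g2:
  f1 BY y2-y5;     ! frees these loadings in group g2;
                   ! y1 is not mentioned, so its loading stays fixed at 1
```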

Joe posted on Thursday, March 18, 2010  3:00 pm



Thank you, Dr. Muthen. I thought the factor loading for the first indicator in the group-specific MODEL command was fixed with @1 (e.g., numop BY fp11@1). How do I then appropriately fix this factor loading in the syntax? Could you please give an example? Thank you in advance for your time and assistance. 


I misread your input. Then I don't know what the problem is. Please send the full output and your license number to support@statmodel.com. 


hi, i'm testing configural invariance for a 4-group, 2-latent-factor, 15-item (4-level likert scale) model. i'm getting the error "THE MODEL MAY NOT BE IDENTIFIED." could you help me fix my input file? (the model statements for the last 2 groups are the same as for month4.) thanks!

MODEL:
dsclCncn BY S5Q1@1 S5Q2*0.5 S5Q3*0.5 S5Q5*0.5 S5Q7*0.5 S5Q9*0.5 S5Q14*0.5;
persStig BY S5Q4@1 S5Q6*0.5 S5Q8*0.5 S5Q10*0.5 S5Q13*0.5 S5Q15*0.5;
dsclCncn WITH persStig*0.5;
[S5Q2$2* S5Q3$2* S5Q5$2* S5Q7$2* S5Q9$2* S5Q14$2* S5Q1$3* S5Q2$3* S5Q3$3* S5Q5$3* S5Q7$3* S5Q9$3* S5Q14$3*];
[S5Q6$2* S5Q8$2* S5Q10$2* S5Q13$2* S5Q15$2* S5Q4$3* S5Q6$3* S5Q8$3* S5Q10$3* S5Q13$3* S5Q15$3*];
MODEL month4:
dsclCncn BY S5Q2* S5Q3* S5Q5* S5Q7* S5Q9* S5Q14*;
persStig BY S5Q6* S5Q8* S5Q10* S5Q13* S5Q15*;
dsclCncn WITH persStig*0.5;
[S5Q2$2* S5Q3$2* S5Q5$2* S5Q7$2* S5Q9$2* S5Q14$2* S5Q1$3* S5Q2$3* S5Q3$3* S5Q5$3* S5Q7$3* S5Q9$3* S5Q14$3*];
[S5Q6$2* S5Q8$2* S5Q10$2* S5Q13$2* S5Q15$2* S5Q4$3* S5Q6$3* S5Q8$3* S5Q10$3* S5Q13$3* S5Q15$3*]; 


When you free the thresholds, factor means must be fixed to zero in all groups and, if you are using the default estimator WLSMV, scale factors must be fixed to one in all groups. See the models for testing for measurement invariance in Chapter 14 of the Version 6 user's guide after the multiple group discussion. 


thanks for your quick response! i added the following model statement for all the groups: [dsclCncn@1 persStig@1]; and am getting a message about no convergence. here's the analysis block:

TYPE = MGROUP;
PARAMETERIZATION = THETA;
ITERATIONS = 9999;
ESTIMATOR = WLS;

when i test for invariant loadings and/or thresholds, i am not having this problem. it's only occurring with the configural invariance test. do you have any suggestions? any help would be appreciated! andrea 


The factor means should be fixed to zero not one. With the Theta parameterization, residual variances of the factor indicators should be fixed to one in all groups. 
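A minimal sketch of that configural setup for two groups (hypothetical factor and item names, not the poster's; categorical indicators under the Theta parameterization): loadings and thresholds free in the second group, factor means fixed at zero and residual variances fixed at one in all groups.

```
ANALYSIS:
  PARAMETERIZATION = THETA;
MODEL:
  f BY u1-u5;
  [f@0];           ! factor mean fixed at zero
  u1-u5@1;         ! residual variances fixed at one
MODEL g2:
  f BY u2-u5;      ! loadings free in group 2 (u1 stays fixed at 1)
  [u1$1-u5$1];     ! thresholds free in group 2
  [f@0];
  u1-u5@1;
```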


yea! that did the trick. i wanted to make sure i was doing the battery of tests correctly:
(1) baseline model, free thresholds (outlined above): factor means set to zero in all groups, residual variances of factor indicators fixed to one in all groups
(2) invariant loadings, free thresholds: factor means set to zero in all groups, residual variances of factor indicators fixed to one in the first group and free in other groups
(3) invariant loadings and thresholds: factor means set to zero in the first group and free in other groups, residual variances of factor indicators fixed to one in the first group and free in other groups
(4) invariant loadings, thresholds, and uniquenesses: factor means set to zero in the first group and free in other groups, residual variances of factor indicators fixed to one in all groups
thanks again for all your help! andrea 


We recommend models 1, 3, and 4 for categorical outcomes. 


hi, i'm working with a different set of variables now; however, everything is set up as above. there are 4 groups: month0, month4, month8, month12. i am getting the error message: THE WEIGHT MATRIX FOR GROUP MONTH12 IS NOT POSITIVE DEFINITE. can you explain what this error message means and how i might go about fixing it? also, why does it happen for one group and not all groups? i don't know if this matters, but the data are very skewed for all groups. i rescaled an 11-point likert scale into a 3-point likert scale where 0-3 -> 1, 4-6 -> 2, 7-10 -> 3. should i set the scale differently so that the data are not so skewed? thanks again for your guidance! andrea 


Please send the full output and your license number to support@statmodel.com. 

Hans Leto posted on Tuesday, April 10, 2012  10:33 am



Hello, I am performing a single group analysis with the same syntax as slide 209 of Mplus handout no. 1:

USEOBSERVATIONS ARE (gender EQ 1); !change 1 to 0 for females
MODEL:
f1 BY y1-y5;
f2 BY y6-y10;

But it gives me an error: "Variable is uncorrelated with all other variables: gender. At least one variable is uncorrelated with all other variables in the model. Check that this is what is intended." (The variable is gender, variance 0.) Could you help me? Thank you in advance 


You must have gender on your USEVARIABLES list. That list is only for variables used in the analysis, so remove gender from it. 


I am testing a measurement model at two waves (i.e., wave 3 and wave 4). The model is the same at each wave. Since both measurement models will eventually be added to a two-time-point longitudinal SEM, I was advised to test for measurement invariance between the measurement models of the two waves. My estimator is WLSMV, and I have categorical and continuous variables. Can I simply use the DIFFTEST option to do a multigroup comparison, or is there another test that is more appropriate? Thanks!! 


No, the groups would not be independent. You would test the measurement invariance in a single-group analysis. The multiple indicator growth model example in the Topic 4 course handout shows how to do this for continuous variables. You would take the same approach for categorical variables but use the steps shown in the Topic 2 course handout under multiple group analysis. 
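For the continuous case, the longitudinal analogue of the multiple-group steps holds measurement parameters equal across waves with equality labels in a single-group model. A hedged sketch with hypothetical names (two waves, three indicators per wave):

```
MODEL:
  fw3 BY y1w3
         y2w3 (L2)
         y3w3 (L3);
  fw4 BY y1w4
         y2w4 (L2)      ! same labels hold loadings equal across waves
         y3w4 (L3);
  [y1w3 y1w4] (I1);     ! intercepts equal across waves
  [y2w3 y2w4] (I2);
  [y3w3 y3w4] (I3);
  [fw3@0 fw4*];         ! wave-3 factor mean fixed at zero, wave-4 mean free
  y1w3 WITH y1w4;       ! residual covariances of the same item over time
  y2w3 WITH y2w4;       ! (commonly included in longitudinal models)
  y3w3 WITH y3w4;
```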


Okay. Thanks for the reply. I will try this out. 


Do you know what heading or page that I would need to refer to in Topic 2 and Topic 4 courses? 


See the table of contents. For Topic 4, look for multiple indicator growth. For Topic 2, look for multiple group analysis. 


I see. Thanks! 


Hi, I am doing a MGCFA with 3 groups. When testing each group separately, everything is fine and the models are identified. When modeling the groups together, however, I get this message: THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING PARAMETER 61. Parameter 61 is the alpha (I guess something related to the intercept) of item 14 in group 2. As I said, when doing the analysis separately in group 2, the model converges. It is noteworthy that the MGCFA converges when using Amos on the same data, so this sounds like something specific to the Mplus way of calculating estimates. In the MGCFA, the unstandardized coefficient for item 14 in group 2 is 1.292, while the other items' coefficients are smaller. This is the only aspect of this item different from the others. Interestingly, when I omit this item from the model, it still does not converge and says something is wrong with item 13. Can you please help me resolve this? Many thanks, ebi 


Please send the output and your license number to support@statmodel.com. 


Hi, thanks for your message. I think I found the problem. While in Amos specifying factor means to be 0 leads to nonidentification in a multigroup analysis, in Mplus things are different. I had not specified the factor mean to be zero in my multigroup analysis in Mplus, which resulted in an intercept issue in one of the groups. Setting it to zero resolved the issue. 


Hi, I have a question and I would appreciate it if you could answer. In testing for scalar invariance, when we find that an intercept is not invariant, it is attractive to follow up this finding by comparing the intercepts across groups. In doing so, should we compare the unstandardized intercepts or the standardized (Est./S.E.) ones? The results are sometimes contradictory. For example, based on the unstandardized estimates group 1 has the highest score, but based on the standardized estimates group 4 has the highest score. What do you suggest? The same question can be asked about comparing the intercepts across a number of groups when estimating separate models for each (not multigroup CFA): which should be used for comparing scale origins across groups, the unstandardized or the standardized estimates? A marginal question is whether or not we could simply examine observed item means in separate samples to follow up a noninvariant intercept. Thank you very much in advance, Ebi 


I am not sure I understand the situation, but it seems that you should use the unstandardized intercepts since you are doing an unstandardized analysis when you compare groups. 


Thank you very much. Because this question is so critical for me at this point, I decided to give an example. I have three groups. One item intercept is noninvariant. Below, the unstandardized intercepts, standardized intercepts, and simple observed item means for each group are reported, respectively:

g1: 4.354  72.197  4.35
g2: 4.738  73.280  4.74
g3: 4.492  61.631  4.01

The unstandardized and standardized intercepts are from the multigroup analysis testing scalar invariance. Now, I would like to report the actual intercept differences among these three groups for that item to understand the results better. As you can see, it is tricky. If I use the unstandardized intercepts, I should conclude that group 1 had the lowest scale origin (4.354) and that the main difference is that group 2 scored remarkably differently from the other two groups. Alternatively, if I use the standardized intercepts, I should conclude that group 3 had the lowest scale origin (61.631) and that the noninvariance arises because this group scored remarkably differently from the other two groups. So the question is: in this multigroup analysis, should I use standardized or unstandardized intercepts for comparison? Thank you very much, Ebi 


You should not use standardized values such as intercepts when you compare groups. This is because standardization confounds the parameters of interest with group-varying variances. Note also that the item mean is a function of the item intercept, the factor loading, and the factor mean. 
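In symbols (standard CFA notation, added here for clarity; not from the original post), the model-implied mean of item $i$ in group $g$ is

```latex
\mathbb{E}(y_{ig}) = \nu_{ig} + \lambda_{ig}\,\kappa_{g}
```

so an observed mean difference between groups can come from the intercept $\nu$, the loading $\lambda$, or the factor mean $\kappa$, which is why following up a noninvariant intercept with observed item means alone can be misleading.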


Thank you very much professor Muthen. really helpful. 

H Steen posted on Tuesday, June 04, 2013  5:12 am



Hi, I have a question about interpreting differences in model fit between groups. Before doing a multigroup second-order CFA, I compared the models in the groups separately, and the results are so different that I have no reason to investigate any type of invariance any further. However, the results are a bit surprising and I would like your comments. Comparing low, medium, and high educated groups results in a mediocre model fit for the low educated (RMSEA 0.075) and a reasonable fit in the high educated group (0.041). The groups are about the same size; N varies between 302 and 330. The thing is, the MIs are very similar; in all groups 10 is the highest, and almost all are (much) lower. Furthermore, the factor loadings are much higher in the low educated group, which seems contradictory, as the model fit is worse. The low educated group has much smaller variance in the items in the CFA, though, and I would like to know whether this can explain the combination of worse fit and higher loadings. With small variance, there is less to model anyway. Other types of analysis (e.g., Mokken), which are not to be preferred for my research, show the best structure fit for the low educated. In short, can it be the case that the fit is indeed best in the low educated group, as the factor loadings are highest there? Thank you very much for any comments. 


Fit assessments are affected by the size of the correlations among the observed variables: higher correlations give higher power to reject, ceteris paribus. Those correlation sizes may vary across your groups. 

H Steen posted on Thursday, June 06, 2013  2:10 am



Thank you for your response. You are right; in the lower educated group (which shows the worst fit) the correlations are much higher. But what does this imply? That the fit indices are not meaningful? Is there a way to correct fit indices for the level of interrelatedness of the items? 


You should take a look at for instance Saris, Satorra, & Veld (2009) in the SEM journal about the weaknesses of fit indices. 

H Steen posted on Thursday, June 06, 2013  8:42 am



Thank you very much! 


Hi, I managed to fit the model in each group, but it doesn't seem to fit in a multigroup CFA. I am performing a multigroup CFA with a four-factor solution across two groups. I followed the handout to the letter, but I get this message:

MAXIMUM LOGLIKELIHOOD VALUE FOR THE UNRESTRICTED (H1) MODEL IS 7520.177 THE MODEL ESTIMATION TERMINATED NORMALLY THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING PARAMETER 110. THE CONDITION NUMBER IS 0.318D-06.

The syntax is as follows:

USEVARIABLES ARE A1 A2 A3 A5 B1 B2 B3 B4 C5 C6 C7 C9 D3 D4 D5 D7;
GROUPING IS Group (1=GROUP1 2=GROUP2);
MISSING ARE ALL (666);
MODEL:
IA BY A1-A5;
OE BY B1-B4;
SRM BY C5-C9;
WRM BY D3-D7;
[IA@0 OE@0 SRM@0 WRM@0];
MODEL GROUP2:
IA BY A1-A5;
OE BY B1-B4;
SRM BY C5-C9;
WRM BY D3-D7;
[A1-A5 B1-B4 C5-C9 D3-D7];
OUTPUT: STANDARDIZED MODINDICES SAMPSTAT RESIDUAL; 


You should not mention the first factor indicators in the group-specific MODEL command. When you do this, their loadings are no longer fixed at one, and the model is therefore not identified. 


That has just made my evening! Thanks, Linda. 

Liting Cai posted on Tuesday, January 14, 2014  1:56 am



Hi, in an earlier post, as well as in the handout for Topic 2, it was mentioned that the steps for conducting multiple group analysis on categorical variables are slightly different from those for continuous variables. The difference is that for categorical variables, we do not conduct the following steps: (1) fix the factor loadings across groups and free the thresholds, and (2) fix the thresholds across groups and free the factor loadings. May I find out what I should do if I have both continuous and categorical variables in my multiple group factor analysis? Do I go with the steps for categorical variables? 


You can go with the steps for the categorical variables for all the variables, but you can also do all steps for the continuous variables. 

Liting Cai posted on Thursday, January 16, 2014  6:57 pm



Dear Dr Muthen, Thank you for taking time to address my query! May I understand the rationale for the different steps (for multiple group analysis) for continuous and categorical variables? Is there a paper that provides this rationale, that you could direct me to? 


See the Millsap 2011 book. 

Liting Cai posted on Friday, January 31, 2014  1:48 am



Thanks! Will check out the book. 


Hello, I am trying to conduct multiple group invariance testing using categorical indicators for continuous latent factors. When I try to release the factor loadings and the thresholds for my second group, I get the following error: THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING PARAMETER 42. The TECH output tells me parameter 42 is the second diagonal entry in the THETA matrix for my second group. I'm not sure what I'm doing wrong:

VARIABLE:
NAMES ARE DUMMY PTSD6-PTSD25;
USEVARIABLES PTSD6-PTSD25;
CATEGORICAL ARE PTSD6-PTSD25;
GROUPING IS DUMMY (0 = NOTRAUMA 1 = TRAUMA);
ANALYSIS:
ESTIMATOR = WLSMV;
PARAMETERIZATION = THETA;
MODEL:
F1 BY PTSD6-PTSD13@1;
F2 BY PTSD14-PTSD25@1;
F1; F2;
F1 WITH F2;
PTSD6-PTSD25;
[PTSD6$1-PTSD25$1@1];
MODEL TRAUMA:
F1 BY PTSD7-PTSD13;
F2 BY PTSD15-PTSD25;
[PTSD6$1-PTSD25$1];

Any ideas what I am doing wrong? Help is very much appreciated! 


See the Version 7.1 Mplus Language Addendum on the website. You will find described in this document a way to do invariance testing automatically and a full description of the models for testing for measurement invariance in various situations. When factor loadings and thresholds are free across groups, residual variances should be one in all groups and factor means should be zero in all groups. 
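With Mplus version 7.1 and later, the automatic approach described in the Addendum can be requested in the ANALYSIS command. A sketch (the exact set of models fitted and compared depends on the estimator and variable type):

```
ANALYSIS:
  ESTIMATOR = WLSMV;
  MODEL = CONFIGURAL SCALAR;   ! fits these invariance models and
                               ! reports the comparison automatically
```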

JW posted on Wednesday, September 10, 2014  4:53 am



Hi Linda, based on your post from December 23, 2005, when conducting multiple group CFA it is best to report the unstandardised loadings, especially as the standardised ones won't be equal across groups even after specifying loading invariance. My goal is to present the loadings of the CFA and, after showing invariance, to have them equal across groups; could I specify the code as

memory BY x1-x10;
memory@1;
[memory@0];

so that I can also obtain an estimate for the loading of the first indicator (x1), or would it be wrong to impose mean = 0 and variance = 1 for both groups? Thanks 


If you want to set the metric by fixing the factor variance to one, you must free the first factor loading:

memory BY x1-x10*;
memory@1;
[memory@0];

Fixing the metric this way is an alternative to fixing a factor loading to one. 

JW posted on Wednesday, September 10, 2014  6:59 am



thank you! 

JW posted on Thursday, September 11, 2014  2:15 am



One more question: in a standard (not multigroup) CFA is it still advisable to present the unstandardised coefficients? Thanks! 


You can present whichever coefficients you want. 

JW posted on Thursday, September 11, 2014  8:09 am



Thanks! 


Hi there, kberon made a comment earlier above, and I wasn't sure if the answer had changed now that Mplus has developed. Specifically, kberon stated: "I've also been interested in standardized coefficients across multiple groups. Lisrel has a feature that allows you to weight each group covariance matrix so that you end up having a common scale for all groups. This allows reporting a single 'beta.' I was wondering if Mplus had this facility?" The answer at the time was no. Has this changed? If so, are you able to direct me to the relevant part? Simon 


No, this has not been implemented. But you can do it using Model Constraint. 


Thanks! How would one do it by Model Constraint? Do you have an example? 


No example. Just follow whatever formula you have in mind. 
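One hedged sketch of the idea (hypothetical one-factor, three-indicator model with two groups g1 and g2; the equal-weight averaging of the group variances below is my assumption, not necessarily LISREL's exact common-metric weighting, which weights by group size): label the relevant parameters and compute a common-metric standardized loading in MODEL CONSTRAINT.

```
MODEL:
  f BY y1
       y2 (l2)      ! label in the overall MODEL holds l2 equal across groups
       y3;
MODEL g1:
  f (fv1);          ! factor variance, group 1
  y2 (rv1);         ! residual variance of y2, group 1
MODEL g2:
  f (fv2);          ! factor variance, group 2
  y2 (rv2);         ! residual variance of y2, group 2
MODEL CONSTRAINT:
  NEW(cstd2);       ! common-metric standardized loading for y2
  cstd2 = l2*SQRT((fv1 + fv2)/2) /
          SQRT((l2**2)*((fv1 + fv2)/2) + (rv1 + rv2)/2);
```

The NEW parameter is estimated with a standard error via the delta method, so the single "beta" comes with its own significance test.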


Can we do a MGCFA with a single-factor model that is tested across 4 conditions? I only have one latent variable with 4 observed variables. 


Yes. 

Uzay Dural posted on Monday, November 09, 2015  12:55 pm



Dear Dr. Muthen, I conducted a multigroup CFA on 4 continuous indicators but received the following error: THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING PARAMETER 6 (the lambda of one item, g13, for the female group). Is this related to any error in the syntax? The syntax is as follows:

USEVARIABLES ARE gender g11 g12 g13 g14;
GROUPING IS gender (0=f 1=m);
MISSING ARE ALL (999);
ANALYSIS: TYPE IS GENERAL; ESTIMATOR IS ML;
OUTPUT: STANDARDIZED SAMPSTAT MODINDICES (4) RESIDUAL TECH1 TECH4;
MODEL:
giat1 BY g11* (L1) g12* (L2) g13* (L3) g14@1 (L4);
[g11*] (I1); [g12*] (I2); [g13*] (I3); [g14@0] (I4);
g11* (E1); g12* (E2); g13* (E3); g14* (E4);
giat1*;
[giat1@0];
g11 WITH g12* (ecov12);
MODEL M:
giat1 BY g11* g12* g13* g14@1;
[g11-g14*];
g11-g14*;
giat1*;
[giat1@0];
g11 WITH g12*;

Thanks, Uzai 


The line giat1 BY g11* (L1) g12* (L2) g13* (L3) g14@1 (L4); doesn't assign the labels properly because there are no semicolons in between (unlike on the next line). If this doesn't help, send the output to Support along with your license number. 
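For instance (a sketch of the fix using the poster's names), giving each labeled parameter its own statement so that each label attaches to a single parameter:

```
MODEL:
  giat1 BY g11* (L1);
  giat1 BY g12* (L2);
  giat1 BY g13* (L3);
  giat1 BY g14@1 (L4);
```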


Do intercepts need to be fixed to 0 when testing measurement invariance, or can they be fixed to another constant? I am testing longitudinal measurement invariance across 5 waves. In the less restrictive model the thresholds are much closer to 6. When I fix the intercepts to be equal over time, beginning with fixing the first indicator intercept at 0, not only does the model show a substantial and significant decrease in fit, but freeing the different intercepts does not result in a substantial improvement in model fit. 


You can fix them to any value. But different values should not give different fit. 


Thank you for the quick response. They do give very different values. Perhaps there is something I missed? I posted the syntax below.

sport3 by twkw3 hwkw3 numw3 (2-3);
sport5 by twkw5 hwkw5 numw5 (2-3);
sport6 by twkw6 hwkw6 numw6 (2-3);
sport7 by twkw7 hwkw7 numw7 (2-3);
sport8 by twkw8 hwkw8 numw8 (2-3);
[twkw3@0 twkw5@0 twkw6@0 twkw7@0 twkw8@0];
[hwkw3 hwkw5 hwkw6 hwkw7] (4);
[numw3 numw5 numw7 numw8] (5);


It looks like you would get all factor means fixed at zero. Instead, you want to have free factor means for sport5-sport8. 
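In the poster's syntax, that change might look like this (a sketch; the sport5-sport8 list assumes the factor names expand in this numeric order):

```
[sport3@0];          ! reference wave: factor mean stays fixed at zero
[sport5-sport8*];    ! later waves: factor means freed
```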


Dear, We are testing the reliability and validity of four stigma scales (internal/external and towards TB/HIV). We also tested for measurement invariance across two groups (patient staff vs. support staff). We showed that two scales are fully scalar invariant and two are only fully metric invariant. In the next step we are (a) examining the correlations between the four stigma scales and (b) estimating SEMs that assess the correlations between the stigma scales and other related concepts (e.g., confidentiality) to test external construct validity. Now my questions: (1) If we test the correlations between the stigma constructs, should we do this separately for the two groups (patient staff and support staff), OR should we do them all at once (all metric invariant), OR separately for the metric invariant scales and at once for the scalar invariant scales? (2) If we want to test the correlations with other concepts, do we need a multiple group model for the two groups separately? OR does metric invariance allow us to test the models for the two groups together? In other words, is metric invariance enough to employ the construct in future analyses without dividing the dataset into the groups? Many thanks in advance, Edwin 


You may want to ask these analysis strategy questions on SEMNET. 


Dear Linda/Bengt, We performed a MGCFA with six groups. According to the changes in RMSEA and CFI between more and less stringent models, the 12-item questionnaire is measurement invariant, but according to the DIFFTEST, it is not. We want to additionally calculate an effect size measure, based on Meade (2010, a taxonomy of effect size measures). We need all factor loadings and thresholds for that additional calculation in R, but what would be the correct model to select the factor loadings and thresholds from? Would it be the configural model with equal variances and free factor loadings across groups? Or would you go for a model in which one factor loading per factor is set at one and variances are held free across groups? I ask because the effect size results depend on which model I choose. For example, all factor loadings change when I set the variances at 1, or when I fix two different loadings at 1, even though the total model fit does not change. A related question is whether it is possible to check if the model is not identified in one group. In only one of the groups, one item has a negative loading, and one of the factor means is unexpectedly far below that of the reference group. So we expect that something goes wrong there, although this is not observed in the model fit statistics. Thank you in advance, Henrike Galenkamp 


I think you should post this question on SEMNET. 
