Yes, free the intercepts if you want to test only the invariance of the factor loadings.
Tim Jackson posted on Monday, November 17, 2008 - 6:08 am
Hi there, I am trying to test for configural invariance in a 5 factor model, across two time points. In other words, I want to test whether the 5 factor structure is tenable across the two time points.
In order to test configural invariance I have done the following when writing the syntax:
1) specified the factor structure at each time point (i.e., indicators loading on their intended latent factors, and letting the latent factors within a given time point covary with each other); note that I did not specify that latent factors should correlate with latent factors at the other time point;
2) fixed the means of each of the latent factors to zero;
3) freed the intercepts of each of the items at each time point;
4) allowed like items to correlate with each other across time points (e.g., item1_t1 with item1_t2); this was done to deal with correlated residuals across time.
My question is simply 'have I done this correctly?' I have in my possession some Mplus syntax examples in which configural invariance of a SINGLE construct is tested across groups, or across time points... but I do not have examples of tests of configural invariance in Mplus using a MULTIDIMENSIONAL structure across time points. I just want to take care that I am conducting this test correctly before trying to interpret any output.
Thank you very much for any input you might be able to provide, Tim
I would not take the approach you describe. See instead the Topic 4 course handout for multiple indicator growth. The first steps in this analysis test for measurement invariance across time. With more than one factor, take the same approach. Allow the factors to correlate.
See pages 398-401 in the user's guide. See also the Topic 1 course handout on measurement invariance for general principles related to continuous outcomes and the Topic 2 course handout for an application to categorical outcomes.
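For the longitudinal case described in the question, a configural model along the lines of the handout might look like the following sketch (two factors and two time points shown for brevity; all factor and item names are illustrative). Exogenous factors, within and across time, are correlated by default in Mplus:

```
MODEL:
  ! Time 1 factors
  f1t1 BY x1-x4;
  f2t1 BY x5-x8;
  ! Time 2 factors (the same items readministered)
  f1t2 BY y1-y4;
  f2t2 BY y5-y8;
  ! residual covariances for like items across time
  x1 WITH y1;  x2 WITH y2;  x3 WITH y3;  x4 WITH y4;
  x5 WITH y5;  x6 WITH y6;  x7 WITH y7;  x8 WITH y8;
```

The invariance steps in the handout then add equality constraints on loadings and intercepts across time on top of this baseline.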
I have a battery of six items/statements measuring ethnocentrism, each with four response categories. I administered it to the same population of adolescents in 2006 and 2008. Because the indicators are categorical, I am not 100% sure the syntax below is correct for concluding longitudinal invariance (it gives good goodness-of-fit indicators), including for the factor loadings. I am especially doubtful about whether I can correlate the error terms this way...
Variable:
  Names are et1-et6 ethno1-ethno6;
  Usevariables are et1-et6 ethno1-ethno6;
  Categorical are ALL;
  Missing are ALL (99);
Analysis:
  estimator = WLSMV;
  type = meanstructure;
Model:
  RACE1 by et1
        et2 (2)
        et3 (3)
        et4 (4)
        et5 (5)
        et6 (6);
  RACE2 by ethno1
        ethno2 (2)
        ethno3 (3)
        ethno4 (4)
        ethno5 (5)
        ethno6 (6);
  et1 with ethno1;
  et2 with ethno2;
  et3 with ethno3;
  et4 with ethno4;
  et5 with ethno5;
  et6 with ethno6;
See the discussion of testing for measurement invariance with categorical outcomes at the end of the multiple-group discussion in Chapter 14 of the Version 6 user's guide (Chapter 13 of earlier user's guides). You would use the same models across time rather than across groups. See also the multiple indicator growth example in the Topic 4 course handout. You can correlate the error terms if you use the weighted least squares estimator, but it is more difficult with maximum likelihood estimation because each residual covariance adds one dimension of integration.
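As a hedged sketch of that setup for the model in the question (with the WLSMV default Delta parameterization), the across-time invariance model holds loadings and thresholds equal in tandem, fixes the scale factors at one and the factor mean at zero at time 1, and frees them at time 2; this follows the general approach in the user's guide, not a verbatim example from it:

```
MODEL:
  RACE1 BY et1
        et2 (2)
        et3 (3)
        et4 (4)
        et5 (5)
        et6 (6);
  RACE2 BY ethno1
        ethno2 (2)
        ethno3 (3)
        ethno4 (4)
        ethno5 (5)
        ethno6 (6);
  ! thresholds equal across time; shown for item 1,
  ! repeat with new labels (t4-t18) for items 2-6
  [et1$1] (t1);
  [et1$2] (t2);
  [et1$3] (t3);
  [ethno1$1] (t1);
  [ethno1$2] (t2);
  [ethno1$3] (t3);
  ! scale factors fixed at one at time 1, free at time 2
  {et1-et6@1 ethno1-ethno6};
  ! factor mean zero at time 1, free at time 2
  [RACE1@0 RACE2];
  et1 WITH ethno1;
  et2 WITH ethno2;
  et3 WITH ethno3;
  et4 WITH ethno4;
  et5 WITH ethno5;
  et6 WITH ethno6;
```

Note that in Mplus an equality label applies to the parameters on its line, so each labeled loading or threshold goes on its own line.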
Xiaoying Xu posted on Thursday, December 02, 2010 - 11:09 am
Hi, I am having a problem running factorial invariance for a second-order model. First, I ran a multiple-group CFA with ordered categorical data using Mplus 5.2 to test a model. The code I am using is:

TITLE: multiple-group CFA for a 4-factor model
DATA: file is "C:\r_4mp.txt";
  format is 22f1.0;
VARIABLE: names are s1-s22;
  usevariables are s1-s20;
  grouping is s21 (0=female 1=male);
  categorical are all;
MODEL:
  r1 by s1-s5;
  r2 by s6-s10;
  r3 by s11-s15;
  r4 by s16-s20;

The model fit is acceptable (CFI = 0.961, TLI = 0.974, RMSEA = 0.026, WRMR = 1.692). When I try to do this for the second-order model by adding to the last code:

  Genfact by r1-r4;

it did not give an estimate of CFI, and the message shows: THE MODEL ESTIMATION TERMINATED NORMALLY
THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING PARAMETER 49.
You need to fix the intercepts of the first-order factors to zero in all groups for the model to be identified.
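In Mplus terms, that amounts to bracket statements fixing the first-order factor intercepts at zero in the overall MODEL command and in the group-specific MODEL command for the second group, where they would otherwise be free by default. A sketch using the factor names from the post:

```
MODEL:
  r1 BY s1-s5;
  r2 BY s6-s10;
  r3 BY s11-s15;
  r4 BY s16-s20;
  genfact BY r1-r4;
  [r1-r4@0];
MODEL male:
  [r1-r4@0];
```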
Xiaoying Xu posted on Saturday, December 04, 2010 - 9:09 pm
Hi Linda, you mentioned "fix the intercepts of the first-order factors to zero" (in the user's guide this is called the factor mean, not the intercept), but they are the same thing, right? Could you let me know if I have understood this wrong?
I tried fixing the intercepts of the first-order factors to zero, and it works well now. Thank you!
The first-order factors have intercepts estimated, not means, because they are dependent variables regressed on the second-order factor. The Mplus language is the same for means and intercepts.
Xiaoying Xu posted on Sunday, December 05, 2010 - 9:09 pm
In the user's guide, pp. 399-400, on measurement invariance for continuous outcomes, steps 2-4 add one more constraint each time. So the procedure is: in step 1 we run the baseline model without any constraints; in step 2 we add constraints for equal loadings; in step 3 we add equal intercepts. For categorical variables there are only two models: first a baseline model without any constraints, and second a model with the thresholds and loadings constrained to be equal.
My question is: is the threshold a concept for categorical data that is the counterpart of the intercept/mean for continuous data? I am wondering if I can add one constraint at a time, for example, constraining the factor loadings in step 2 and the thresholds in step 3. What do you think?
If the threshold and loading constraints cannot be treated separately, should I fix the first item's loading but free the first item's threshold? For model identification purposes, I need to fix the first item's loading at 1. Should I also fix the first item's threshold at some value accordingly?
Please correct me if I have misunderstood anything. I really appreciate your help with this, and any suggested reading on the factorial invariance procedure for categorical data.
I want to test Measurement Invariance in a 1 Factor Model with multiple-groups comparisons for ordered categorical data. I use the WLSMV estimator.
Before starting nested comparisons, in order to rule out model misspecification, I fit the model separately for each group. However, when I do this, I get a different chi-square df for each group (67 vs. 61). Is this normal because WLSMV adjusts for deviations from normality, which may differ between groups, or do I need to be worried?
It sounds like you are using a version of Mplus before Version 6. The degrees of freedom and chi-square in these earlier versions were adjusted to obtain a correct p-value. Only the p-value should be interpreted. You will obtain the degrees of freedom you expect using WLS or WLSM.
I fit a multi-group configural invariance model, with one latent construct with three manifest indicators (and #groups=2). My question is about the Chi-sq statistic. When I fit the same model within one sample only, it is just identified and has a chi-sq of zero (number of free parameters=9 and df=0). When I fit it to both groups via a configural invariance model, the chi-sq is non-zero. Isn't the configural model also just identified and shouldn't I thus be getting a zero chi-sq value? In the configural invariance model, Mplus says that the number of free parameters is 16 and df=2.
The default in Mplus is to hold intercepts and factor loadings equal across groups. You need to relax these constraints. See the Topic 1 course handout under multiple group analysis for an example input.
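A minimal sketch of relaxing those defaults for a configural model (factor and variable names are illustrative): mention all loadings except the first and all intercepts in the group-specific MODEL command, and keep the factor mean fixed at zero in both groups:

```
MODEL:
  f BY y1-y3;
MODEL g2:
  f BY y2 y3;   ! loadings freed; y1's loading stays fixed at 1
  [y1-y3];      ! intercepts freed
  [f@0];        ! factor mean fixed at zero in this group too
```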
steve posted on Saturday, November 05, 2011 - 10:45 am
Dear Linda and Bengt,
I'm trying to fit a multigroup invariance model over five groups, starting with configural invariance. The model includes two latent factors measured by two items each. Unfortunately I have gotten something wrong and the model is not identified (I don't rightly know why). Is there any workaround to fix this?
You should not mention the first factor indicators in the group-specific MODEL commands. When you do, their loadings are no longer fixed at one but are free, causing the non-identification message.
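Concretely (names illustrative), the group-specific part should omit the first indicator of each factor:

```
MODEL:
  f1 BY y1 y2;
  f2 BY y3 y4;
MODEL g2:
  f1 BY y2;   ! y1 omitted so its loading stays fixed at 1
  f2 BY y4;   ! y3 omitted for the same reason
```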
steve posted on Monday, November 07, 2011 - 3:03 am
Thank you! This solved the identification issue. However, the model won't converge. I guess the problem is that the indicators are sum scores of the respective rounds of cognitive tests, resulting in different metrics, i.e., a range of 2-16 for Item 1 in AG1 and 10-29 in AG5. Could this be the problem?
Try freeing the first factor loadings and fixing the factor variances to one. It may be that the loading of the first factor indicator is not close to one, causing problems when it is fixed at one. If this does not help, please send your output and license number to email@example.com.
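A sketch of that alternative identification (names illustrative):

```
MODEL:
  f1 BY y1* y2;   ! the asterisk frees the first loading
  f2 BY y3* y4;
  f1@1;           ! factor variances fixed at one instead
  f2@1;
```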
steve posted on Monday, November 07, 2011 - 9:08 am
Hi, I am running a multigroup model with categorical data. Below are my syntax and the error message I am receiving in the output. Is there a different way I should be writing the final line under MODEL males for categorical data? When I remove it, it runs without error. Thanks! ~Bethany
Model: Legal by Pol, OthCJ, GovtVic; Health by Med, Emo, Phys, PrivVic;
I am running a multigroup (gender) CFA model (2 latent factors, 10 items) to test for configural invariance. Below is my syntax and error message:
GROUPING is sex (0 = female 1 = male);
ANALYSIS: ESTIMATOR = MLR;
MODEL:
  direct by apf4 apf5 apf2 apf1 apf6 apf7;
  indirect by apf11 apf12 apf16 apf14;
MODEL female:
  direct by apf5 apf2 apf1 apf6 apf7;
  indirect by apf12 apf16 apf14;
  [direct@0 indirect@0];
  [apf1 apf2 apf4 apf5 apf6 apf7 apf11 apf12 apf14 apf16];
OUTPUT: sampstat modindices (10.00) tech1 stand residual;
“Model terminated normally. The standard errors of the model parameter estimates could not be computed. Model may not be identified. Check your model. Problem involving parameter 60.”
Parameter 60 is in the alpha matrix (a factor mean). When I remove items apf4 and apf11 (the ones fixed at 1.0) from the brackets, it runs, but I need to test a model with all model parameters free except the first factor loadings (set to 1.0) and the factor means (set to 0). Is there a different way I should be running this?
A factor with one indicator and a residual variance of zero is identical to the factor, so you should simply use the observed variable. If you want to correct for unreliability, see the Topic 1 course handout under measurement error. I personally don't think this is a good idea, because it is highly likely that the estimate of reliability is not accurate, bringing more problems to that variable.
I think you are probably forgetting to fix the factor means of the first-order factors to zero in all groups. Without this, the model is not identified. If this is not the problem, please send the output and your license number to firstname.lastname@example.org.
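Using the names from the input above, a hedged sketch of that fix is to state the zero factor means in both the overall and the group-specific MODEL commands, so that no group is left with free factor means:

```
MODEL:
  direct BY apf4 apf5 apf2 apf1 apf6 apf7;
  indirect BY apf11 apf12 apf16 apf14;
  [direct@0 indirect@0];
MODEL female:
  [direct@0 indirect@0];
```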
Maria posted on Thursday, January 31, 2013 - 4:04 am
I would like to test for measurement invariance across gender. After doing some reading it appears I should
1. test for configural invariance by running a CFA on the measurement model for males and females separately
2. test for metric invariance (I have ordinal/categorical data) using multi-group CFA.
Some articles suggest that the chi-square obtained in step 2 should be the sum of the chi-squares obtained for males and females separately.
Dear Linda/Bengt, reading your slides, a question about multiple-group factor analysis came up. On slide 82/196 from the UCONN conference, in the model fit information (invariance testing) for the Holzinger-Swineford example, both the configural and metric models have p-value < 0.00001, yet the test of metric against configural gives p-value = 0.2755. My question: 1) why is the configural model rejected on its own, while the metric model is not rejected when tested against it? Is this partial invariance involving the configural model? Thanks in advance
The test says that it is the Metric model that is not rejected as compared to the Configural model. In this case where the Configural model doesn't fit, it is unclear how useful this information is. One can perhaps say that the Metric model doesn't fit much worse than the Configural. But the test itself is called into question when the more relaxed model, the Configural model, does not fit. The test may not have a chi-square distribution in that case. The intended use of chi-square difference testing is that the more relaxed model fits and you are interested in seeing if a more restricted model doesn't fit significantly worse.
I am trying to examine 2nd order factor invariance across two groups. Fairly early in the process, at the configural invariance stage, I run into a problem.
I cannot get the intercepts of the observed variables to be freely estimated; both groups share the same indicator intercepts in the output I get. Also, I believe all factor means (the first-order latent factor means and the second-order latent factor mean) should equal zero in both groups, but the output I obtain provides estimates for the first-order means of the second group (while all other latent means are zero).
This is the syntax I am using:
Model:
  ICE by BYT13 BYT14 BYT15 BYT16;
  CA by BYTE21CR BYTE21AR BYTE21BR BYTE21DR;
  AP by BYS89V BYS89J BYS89O BYS89S;
  CAC by BYTXMIRR BYTXRIRR;
  CR by ICE CA AP CAC;
  [BYT13-BYTXRIRR];
  [ICE CA AP CAC CR@0];

Model Hispanic:
  ICE by BYT14 BYT15 BYT16;
  CA by BYTE21AR BYTE21BR BYTE21DR;
  AP by BYS89J BYS89O BYS89S;
  CAC by BYTXRIRR;
The first-order means in the second group must be fixed to zero. This is not the default in Mplus.
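In the syntax above, that means adding a bracket statement to the group-specific MODEL command, e.g.:

```
Model Hispanic:
  ICE by BYT14 BYT15 BYT16;
  CA by BYTE21AR BYTE21BR BYTE21DR;
  AP by BYS89J BYS89O BYS89S;
  CAC by BYTXRIRR;
  [ICE@0 CA@0 AP@0 CAC@0];   ! first-order means fixed at zero in the second group
```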
RuoShui posted on Monday, October 28, 2013 - 3:06 pm
I am testing measurement invariance over four time points. The model has poor CFI and TLI (around .68), but after I followed the modification indices and correlated error variances among items, the model fit improved to adequate. I am wondering whether modification indices can be adopted here. Does it defeat the purpose of testing measurement invariance?
You should fit the model at each time point separately as a first step. If you do not have the same well-fitting model at each time point, you should not test for measurement invariance. When you combine them, it may be that correlating residuals across time is necessary.
Anna Koch posted on Tuesday, July 29, 2014 - 2:19 am
Dear Linda or Bengt,
I am testing measurement invariance for a second-order factor model. Everything works perfectly well until I constrain the intercepts of the first-order latent factors to be equal (testing for strong factorial invariance). According to the output, Mplus neither constrains the intercepts to be equal nor gives an error message. I just keep getting exactly the same results as when testing for strong factorial invariance without constraining the intercepts of the first-order latent factors. It seems like Mplus ignores the additional syntax lines... I double-checked the syntax. Do you have an idea of what might have gone wrong?
I'm having some difficulty manually replicating the results provided by Mplus for measurement invariance across groups using the MODEL = CONFIGURAL METRIC SCALAR option.
Initially I achieved measurement invariance over time for my scales of interest. I now want to make appropriate constraints across gender.
The results suggest that the scales have configural and metric invariance across groups, but not scalar invariance. To work out where the noninvariance is, I have tried to program Mplus manually to run the same models. However, when I try to replicate the results, I cannot get the model to converge. The error message suggests the model may not be identified, but if I am just copying the model directly, only adding MODEL FEMALE and MODEL MALE commands, I don't understand why the model will not converge. I've tried several things to fix the problem but cannot seem to replicate the findings. Can you provide any insight into why this might be?
You are most likely mentioning the first factor indicator in the group-specific MODEL commands. When you do this, the first factor loading is no longer fixed at one. The inputs for testing measurement invariance with continuous indicators are shown in the Topic 1 course handout on the website under multiple-group analysis. The inputs for categorical outcomes are shown in the Topic 2 course handout.
I've just seen a response from Bengt in response to someone else's question that I think is relevant. I am trying to do a multi-group comparison for boys and girls, using a measurement model that already has a number of across-time constraints. Bengt's reply was:
"Model = configural metric scalar;
will ignore any parameter equality settings.
We don't yet have that kind of convenience feature for longitudinal or combined multiple-group/longitudinal analysis, so all of it has to be done 'by hand' with explicit equalities."
So the multigroup fit indices I was getting did not also include the across-time constraints (which my manually coded model had), which presumably explains why they did not match up?
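For reference, the convenience option quoted above goes in the ANALYSIS command, e.g.:

```
ANALYSIS:
  ESTIMATOR = MLR;
  MODEL = CONFIGURAL METRIC SCALAR;
  ! note: this option ignores explicit equality statements in MODEL,
  ! so across-time constraints must instead be written out by hand
```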
On the same note as my message above: I would like to constrain some of my parameters across time, and I also want to test for invariance across groups simultaneously. Once parameters are constrained to be equal over time, is it possible to also get different estimates for those parameters across groups while maintaining the across-time constraints?
I've been struggling with a configural invariance model (2 time points).
My 5 indicators are binary (so I'm using WLSMV), and my no. of obs. is 284 at Time 1 and 238 at Time 2.
The one-factor CFA models look good at both time points (chi-square p > .05 at both times, CFI = .95 and .99, and WRMR = .79 and .72).
However, when I attempt to assess configural invariance, I get an error message ("...a correlation greater or equal to one between 2 latent variables,..."), and my factor correlation is indeed greater than 1.
What might be the cause of that? I'm at a loss. Everything else looks fine (as far as I can tell). Any help/advice would be appreciated!
I checked the models for the two groups separately (male and female, with male as the reference group), and the two models fit similarly well: CFI, TLI, and RMSEA are excellent, and the item loadings are also good. If the model fits in both groups, I do not understand why the configural model's chi-square is significant! Do you have any suggestions for improving the configural invariance between the two groups?