Dear Dr. Muthén, We are appealing to your expertise to solve a small but very interesting problem concerning the interpretation of our data. Specifically, we wonder whether statistically significant differences found when comparing the baseline model with models in which we have held intercepts and/or latent factor means invariant reflect quantitative or qualitative differences between our latent factors (memory and language competence). I would like to note that we have two groups, a group of children with language difficulties and one of typically developing children (each with 50 participants). The two groups have been tested on 4 measures of memory and 5 measures of language ability (the two factors). I would also like to ask how we can compare the latent factor means of the two groups, given that the means in one group are fixed to zero. Many thanks in advance.
You are dealing with intercepts, which are measurement parameters, and factor means, which are structural parameters. Differences in measurement parameters indicate that the factors do not mean the same thing in each group. Differences in structural parameters, after measurement invariance of the factors has been established, indicate that the groups differ on those parameters.
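For concreteness, a scalar-invariance setup for the two-factor case described above might look like the following Mplus input sketch. The file, variable, and group names here are hypothetical; in multiple-group analysis Mplus holds loadings and intercepts equal across groups by default, while factor means are fixed at zero in the first group and free in the others.

```
TITLE:    Scalar invariance sketch (hypothetical names);
DATA:     FILE = data.dat;
VARIABLE: NAMES = m1-m4 l1-l5 grp;
          GROUPING = grp (1 = LD  2 = TD);
MODEL:
  memory   BY m1-m4;   ! loadings held equal across groups by default
  language BY l1-l5;
  [m1-m4 l1-l5];       ! intercepts held equal across groups by default
! Factor means: fixed at 0 in group LD and free in group TD by default,
! so the estimated TD means are differences relative to the LD group.
```

Under this parameterization, a latent mean comparison reads the free means in the second group directly as mean differences from the reference group.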
When comparing models, one usually compares a set of nested models.
Because factor means must be zero in one group, to hold them equal across groups you would fix them to zero in all groups.
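A minimal sketch of that constraint, assuming a single factor f measured by y1-y5 (hypothetical names): an explicit statement in the overall MODEL command applies to all groups, overriding the default of free factor means in the later groups.

```
MODEL:
  f BY y1-y5;
  [f@0];     ! factor mean fixed to zero in every group,
             ! i.e., the means are held equal across groups
```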
I'm testing measurement invariance. Since my variables are not normal, I'm using MLMV as the estimator. With this method the intercepts of the items are constrained to be equal across groups by default. How can I test scalar invariance?
I am conducting invariance testing of a measurement model with a second-order factor represented by 3 first-order latent factors. The first-order factor loadings are invariant across the 2 groups of interest. However, the second-order factor loadings (with the invariant first-order factor loadings in place) appear not to be invariant (using a chi-square difference test with a change of 2 degrees of freedom). When I conducted a Wald chi-square test (using the MODEL TEST: command) on the 3 parameters of the second-order factor loadings, the Wald test indicated no significant difference in the loadings across groups.
Does this sound odd, or am I potentially doing something incorrect? It seems logical to me that the Wald test should indicate which of the 3 factor loadings on the second-order factor differ across groups, as the invariance test indicates.
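For context, my MODEL TEST setup follows this general pattern (variable and group names are simplified placeholders here; the second-order factor variance is fixed at 1 so that all three second-order loadings are free and can be labeled in each group):

```
MODEL:
  f1 BY y1-y3;
  f2 BY y4-y6;
  f3 BY y7-y9;
  g  BY f1* (a1)
        f2  (a2)
        f3  (a3);
  g@1;                 ! identify the metric via the factor variance
MODEL grp2:            ! second group (placeholder label)
  g  BY f1* (b1)
        f2  (b2)
        f3  (b3);
MODEL TEST:
  0 = a1 - b1;
  0 = a2 - b2;
  0 = a3 - b3;
```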
I would need to see exactly what you are doing to answer this. Please send the two outputs and your license number to email@example.com.
Hervé CACI posted on Friday, January 20, 2012 - 3:58 am
Hello Drs Muthén,
I'm testing invariance across gender of a bifactor model: 1 general factor (18 items) and 2 specific factors (9 items each, named I1 to I18). I'm using the MLM estimator.
The model fits very well in both genders separately, but I need some advice regarding the next steps.
M1. I fitted the model in the entire sample, freely estimating the intercepts in both groups ([I1-I18]) and fixing all three latent means to zero ([g@0 F1@0 F2@0]). The loadings are constrained to equality.
M2. I fitted the model in the entire sample, freely estimating the intercepts in both groups, fixing all three latent means to zero, AND freely estimating all loadings. This is configural invariance.
3. A robust chi-square difference test indicates that model M2 fits better than M1, thus rejecting the weak invariance hypothesis.
Am I correct? Is it recommended to fix the latent means to zero or not at this early stage of invariance testing?
You can see the inputs for testing measurement invariance under multiple group analysis in the Topic 1 course handout on the website.
When you free intercepts, factor means must be fixed to zero for model identification.
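In Mplus terms, that identification choice looks like the following sketch (hypothetical factor, indicator, and group names; an explicit [f@0] in the overall MODEL applies to all groups):

```
MODEL:
  f BY y1-y5;
  [f@0];       ! factor mean fixed to zero in all groups
MODEL g2:
  [y1-y5];     ! intercepts freed in the second group (placeholder label g2)
```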
Skylar Son posted on Monday, February 11, 2019 - 10:34 am
I have a question in your 2018 paper (Recent methods for the study of measurement invariance with many groups: Alignment and random effects).
Looking at p. 652, three models were fitted. M1 lets factor loadings differ across the two levels and lets the residual variances on the between level be free. M2 holds factor loadings equal across levels while still letting the between-level residual variances be free. M3 holds factor loadings equal across levels and fixes the residual variances on the between level to 0.
Can I understand M1 as configural model, M2 as weak invariance model and M3 as strong invariance model?
I think of this as a different situation. It's true that M1 is configural and M2 is weak (metric) but M3 is not strong because there are no means/intercepts involved.
Skylar Son posted on Monday, February 18, 2019 - 11:22 pm
Thank you for your reply.
You mentioned that M3 is not strong invariance model because there are no means/intercepts involved.
However, according to the Kim et al. (2017) paper, "if intercepts are the same for all groups, that is, intercepts are not random across groups, the between-group variability of intercepts equal zero. This scalar invariance can be tested by constraining the between-level residual variance at zero." If that's correct, isn't M3 a strong invariance model?
Kim, E. S., Cao, C., Wang, Y., & Nguyen, D. T. (2017). Measurement invariance testing with many groups: A comparison of five approaches. Structural Equation Modeling: A Multidisciplinary Journal, 24(4), 524-544.
I'm trying to run a Bayesian multiple group model with approximate measurement invariance using zero-mean and small-variance priors (ex5.33 in the Mplus User's Guide) across 34 groups for 11 indicators of a single factor. My indicators are polytomous items. I tried to test the invariance of factor loadings and thresholds using the DIFF option.
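Adapting the ex5.33 pattern to 34 groups and 11 indicators might look roughly like the sketch below. This is an assumption-laden illustration, not a verified input: y1-y11, the latent class variable c, and the grouping variable g are placeholders, and the labels use the # convention for group-varying parameters. Thresholds can be labeled and given DIFF priors in the same way.

```
VARIABLE:  CLASSES = c(34); KNOWNCLASS = c(g = 1-34);
ANALYSIS:  TYPE = MIXTURE; ESTIMATOR = BAYES; MODEL = ALLFREE;
MODEL:
  %OVERALL%
  f BY y1-y11* (lam#_1-lam#_11);   ! group-specific loadings, labeled per group
  f@1; [f@0];
MODEL PRIORS:
  DO(1,11) DIFF(lam1_#-lam34_#) ~ N(0, 0.01);   ! small-variance priors on
                                                ! cross-group loading differences
```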
The analysis ran well after I modified my code based on your answer. I dichotomized the polytomous items. After checking the output for my analysis, I have one more question.
I understand that the DIC (deviance information criterion) is computed for Bayesian analysis. However, DIC was not computed in my analysis. What is the reason? The PPP value appears in my output, but DIC does not. Also, I confirmed that my analysis was specified correctly.
I'm trying to run a Bayesian multiple group model with approximate measurement invariance using zero-mean and small-variance priors across 34 groups for 11 indicators of a single factor. After performing the analysis, I got a PPP value for the whole analysis and PPP values for the 34 groups. The PPP value for the whole analysis is 0.132, and the PPP values for the 34 groups range from 0.295 to 0.515. I wonder how to interpret the PPP values for the 34 groups. Should I simply refer to the whole-analysis PPP value for measurement invariance testing?