Bayesian measurement invariance?
Mplus Discussion > Confirmatory Factor Analysis >
 Fredrik Falkenström posted on Thursday, November 21, 2013 - 11:40 pm
Dear Drs. Muthén,

Is it possible to estimate a Bayesian multiple-group invariance model, as described in Mplus Web Note No. 17, that also estimates correlated residuals with zero-mean and small-variance priors, as described in the article "Bayesian Structural Equation Modeling: A More Flexible Representation of Substantive Theory", separately for each group?

So far I have only been able to specify a model that constrains the residual variances and residual correlations to be equal across groups, but that model has very poor fit.


Fredrik Falkenström
Linköping University
 Linda K. Muthen posted on Friday, November 22, 2013 - 9:52 am
Are you using TYPE = MIXTURE with KNOWNCLASS? Have you mentioned the residual variances and residual covariances in the class-specific parts of the model? For example:

%c#1%
y1 y2;
y1 WITH y2;
%c#2%
y1 y2;
y1 WITH y2;

The default is to hold these values equal across classes.
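A minimal sketch of the full setup this describes, shown for two known groups (variable and group names hypothetical):

```
VARIABLE:
  CLASSES = c(2);
  KNOWNCLASS = c(g = 1 2);
ANALYSIS:
  TYPE = MIXTURE;
  ESTIMATOR = BAYES;
MODEL:
  %OVERALL%
  f BY y1-y4;
  %c#1%
  y1 y2;          ! residual variances free in group 1
  y1 WITH y2;     ! residual covariance free in group 1
  %c#2%
  y1 y2;          ! residual variances free in group 2
  y1 WITH y2;     ! residual covariance free in group 2
```

Mentioning these parameters in each class-specific part frees them from the default across-class equality.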
 Fredrik Falkenström posted on Monday, November 25, 2013 - 10:33 am
Thank you! I must have done something wrong the first time I tried, because now it seems to work.
 Yoonjeong Kang posted on Sunday, December 08, 2013 - 9:41 am
Dear Drs. Muthén,

I have some questions about the Bayesian approach to measurement invariance (approximate measurement invariance) implemented in Mplus.

1. In the Muthén and Asparouhov (2013) paper, it says that if a measurement parameter differs significantly from its average across groups, it is considered noninvariant. What test statistic does Mplus use to test this?

2. Based on User's Guide example 5.33, it seems that all factor loadings are freely estimated (i.e., there is no equality constraint on a reference indicator across groups) and that each parameter's difference from its average across groups is then tested. My question is how the factor scale in each group is determined when all factor loadings are freely estimated. Is the factor variance standardized?

3. In the frequentist approach, when we correctly choose an invariant factor loading as the reference indicator and fix it to 1 across groups, we can say the factors are on the same metric across groups. If approximate measurement invariance allows all factor loadings to vary across groups, are the estimated factor loadings still on the same metric? Could you let me know about this?

Thanks a lot in advance!!
 Bengt O. Muthen posted on Sunday, December 08, 2013 - 4:46 pm
1. Essentially a z-test.

2. The factor metric is set by fixing the factor variances to 1 in one group. UG ex 5.33 fixes them in the 10th group.

3. They are on the same metric to the approximation given by the small-variance prior.
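A sketch of how the metric is set in this kind of approximate-invariance run, in the style of UG ex 5.33 (variable names and label scheme hypothetical; 10 known groups):

```
MODEL:
  %OVERALL%
  f BY y1-y6* (lam#_1-lam#_6);   ! all loadings free, labeled per class
  [y1-y6] (nu#_1-nu#_6);         ! intercepts labeled per class
  %c#10%
  f@1;                           ! factor variance fixed to 1 in the 10th group
MODEL PRIORS:
  DO(1,6) DIFF(lam1_#-lam10_#) ~ N(0, 0.01);
  DO(1,6) DIFF(nu1_#-nu10_#) ~ N(0, 0.01);
```

The zero-mean, small-variance DIFF priors hold the group-to-group parameter differences approximately, rather than exactly, at zero.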

Note also the alignment possibility presented in

Asparouhov, T. & Muthén, B. (2013). Multiple-group factor analysis alignment. Forthcoming in Structural Equation Modeling.

and also discussed in

van de Schoot, R., Tummers, L., Lugtig, P., Kluytmans, A., Hox, J. & Muthén, B. (2013). Choosing between Scylla and Charybdis? A comparison of scalar, partial and the novel possibility of approximate measurement invariance. Frontiers in Psychology, 4, 1-15. doi: 10.3389/fpsyg.2013.00770.
 S.Arunachalam posted on Friday, January 17, 2014 - 4:41 pm
Respected Prof. Muthen,
I am trying to use BSEM to establish approximate MI for four latent variables (LVs). Two LVs have 4 items each, and the other two have 3 items each. The problem I am facing is finding a systematic approach to establishing MI for multiple LVs. I am giving difference priors for the factor loadings and item intercepts with MODEL = ALLFREE.

While it is easy to manage BSEM for one LV, the moment I introduce the second LV, the PPP value, which was above .20, drops to zero. How can I figure out where the problem is after introducing new LVs? Is it the prior variance for DIFF? It becomes an even greater challenge when I introduce the third LV, at which point the model doesn't converge. So may I ask: what is a step-by-step approach for establishing approximate MI with multiple LVs? Please advise. My sincere gratitude in advance.
 Bengt O. Muthen posted on Saturday, January 18, 2014 - 3:17 pm
Before you investigate measurement invariance, you want to (1) investigate each group separately with respect to all the latent variable constructs, (2) then study invariance for each construct, and (3) then invariance for the set of constructs. If steps 1 and 2 show good fit, step 3 is likely to fit well also.
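Step 1 above can be sketched as a separate single-group run per group, selecting each group's cases in turn (file, variable, and group names hypothetical):

```
DATA:     FILE = data.dat;
VARIABLE: NAMES = y1-y14 g;
          USEVARIABLES = y1-y14;
          USEOBSERVATIONS = (g EQ 1);   ! rerun with g EQ 2, 3, ...
ANALYSIS: ESTIMATOR = BAYES;
MODEL:
  f1 BY y1-y4;    f2 BY y5-y8;
  f3 BY y9-y11;   f4 BY y12-y14;
```

Once each of these fits, step 2 adds the KNOWNCLASS structure for one construct at a time.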
 S.Arunachalam posted on Saturday, January 18, 2014 - 4:40 pm
Dear Prof. Muthen. Thanks a lot for the quick response.

1.) Should steps 1 and 2 be done with ML or BAYES? With ML, when I check the constructs for each group, I can use MODINDICES to see whether any residual covariances would improve fit. With BAYES, however, I don't have MODINDICES, so it becomes a challenge to identify the reason for poor fit. How can I improve fit with BAYES?
2.) Moreover, if I use BAYES for steps 1 and 2, I ought to use informative priors for improved fit. Assuming I have good fit for the individual groups, when I come to the invariance checking, should I keep these informative priors or use the DO DIFF priors alone?
 S.Arunachalam posted on Saturday, January 18, 2014 - 5:54 pm
First, I established step 1 in ML, and the model fitted very well.

Second, I established step 2 in BAYES using MIXTURE/KNOWNCLASS individually for each construct. For the first construct I got a PPP value of .485 (no * in the difference output), and for the second construct a PPP value of .500 (no * in the difference output).

Third: when I try to bring these two constructs into the same CFA model, the PPP drops to .041. So I tried introducing a difference prior for the covariance between the two constructs, but Mplus reports a fatal error: "set difference for only slopes, intercepts, and factor loadings."

Prof., kindly advise how I can find out where the problem is and improve the PPP value before proceeding to add the other constructs. My sincere gratitude in advance for your time and guidance!
 S.Arunachalam posted on Saturday, January 18, 2014 - 5:56 pm
(contd. from the above comment) For the CFA with both constructs, too, there was no * in the difference output.
 Bengt O. Muthen posted on Monday, January 20, 2014 - 5:18 pm
First, there are no fixed rules for how to go about these types of analyses. You just have to take an approach that is sensible from a statistical point of view. We cannot teach how to do this in short posts on Mplus Discussion.

Second, if ML fits well for both constructs, then Bayes should fit well also. Changing the prior for the factor covariance differences is not the right approach to take.
 S.Arunachalam posted on Tuesday, January 21, 2014 - 5:23 am
Thank you, Prof. Muthen. I am going back to doing this in ML, as I don't know why the PPP value drops just from bringing two constructs that individually fit well into a single CFA. (And my sincere apologies for the multiple posts; I realized the issue later.)
 Youngshin Ju posted on Thursday, May 16, 2019 - 9:23 am
Dear Mplus team,

I'm trying to run a Bayesian multiple-group model with approximate measurement invariance using zero-mean and small-variance priors (ex. 5.33 in the Mplus User's Guide) across 34 groups, for 11 indicators of a single factor. My indicators are categorical variables with different numbers of response categories (e.g., one indicator is binary and another is polytomous).
However, not all response categories of my indicators are observed in every group. Is it possible to do Bayesian measurement invariance with this kind of missing data? I used automatic recoding in my code (details in the next post). I would also like to check whether my analysis code is correct, because an error occurs. I'd like to test both factor loading and threshold invariance with this model.

Due to posting length issue, my full code will remain in the next post.

Thanks a lot in advance!
 Youngshin Ju posted on Thursday, May 16, 2019 - 9:30 am
(some of the commands are omitted.)

CATEGORICAL = REGIS(*) | RECAL(*) | REPEAT(*) | COMMAND(*) | READ(*) | WRITE(*) | TIME(*) | PLACE(*) | ATTEN(*) | NAME(*);
CLASSES = c(34);
KNOWNCLASS = c(AGE3 = 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34);


[REGIS$1 REGIS$2 REGIS$3 RECAL$1 RECAL$2 RECAL$3 REPEAT$1 (tau#_1-tau#_7)
COMMAND$1 COMMAND$2 COMMAND$3 READ$1 WRITE$1 DRAW$1 (tau#_8-tau#_13)
TIME$1 TIME$2 TIME$3 TIME$4 TIME$5 (tau#_14-tau#_18)
PLACE$1 PLACE$2 PLACE$3 PLACE$4 PLACE$5 (tau#_19-tau#_23)
ATTEN$1 ATTEN$2 ATTEN$3 ATTEN$4 ATTEN$5 NAME$1 NAME$2] (tau#_24-tau#_30);

DO(1,11) DIFF(lam1_#-lam11_#) ~ N(0, 0.01);
DO(1,30) DIFF(tau1_#-tau30_#) ~ N(0, 0.01);

The errors are:

Missing matching right bracket/brace.
No Matching ']' for REGIS$1 REGIS$2 REGIS$3 RECAL$1 RECAL$2 RECAL$3 REPEAT$1
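The error messages point at the bracket statements: Mplus expects parameter labels after the closing ] of a statement, so one likely (unverified) fix is to close each threshold statement before its label list:

```
[REGIS$1 REGIS$2 REGIS$3 RECAL$1 RECAL$2 RECAL$3 REPEAT$1] (tau#_1-tau#_7);
[COMMAND$1 COMMAND$2 COMMAND$3 READ$1 WRITE$1 DRAW$1] (tau#_8-tau#_13);
[TIME$1 TIME$2 TIME$3 TIME$4 TIME$5] (tau#_14-tau#_18);
[PLACE$1 PLACE$2 PLACE$3 PLACE$4 PLACE$5] (tau#_19-tau#_23);
[ATTEN$1 ATTEN$2 ATTEN$3 ATTEN$4 ATTEN$5 NAME$1 NAME$2] (tau#_24-tau#_30);
```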
 Youngshin Ju posted on Thursday, May 16, 2019 - 9:42 am
Oh, the factor metric is set by fixing the factor variances to 1 in the last group. I missed that in my code. Sorry about that.