

Dear Drs. Muthén, Is it possible to estimate a Bayesian multiple-group invariance model, such as described in Mplus Web Note No. 17, that also estimates correlated residuals with zero-mean, small-variance priors as described in the article "Bayesian structural equation modeling: A more flexible representation of substantive theory", separately for each group? So far I have only been able to specify a model that constrains the residuals and residual correlations to be equal across groups, but this model has very poor fit. Best, Fredrik Falkenström, Linköping University, Sweden


Are you using TYPE = MIXTURE with KNOWNCLASS? Have you mentioned the residuals and residual covariances in the class-specific parts of the model? For example:

%c#1%
y1 y2;
y1 WITH y2;
%c#2%
y1 y2;
y1 WITH y2;

The default is to hold these parameters equal across classes.
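A fuller sketch of this setup, combining KNOWNCLASS with small-variance residual-covariance priors in the spirit of the BSEM article (variable names y1–y4, the grouping variable g, and the inverse-Wishart degrees of freedom are illustrative assumptions, not taken from the posts above):

```
VARIABLE:     NAMES = y1-y4 g;
              CLASSES = c(2);
              KNOWNCLASS = c(g = 1 2);
ANALYSIS:     TYPE = MIXTURE;
              ESTIMATOR = BAYES;
MODEL:        %OVERALL%
              f BY y1-y4;
              %c#1%
              y1-y4;                      ! class-specific residual variances
              y1-y4 WITH y1-y4 (p1-p6);   ! class-specific residual covariances
              %c#2%
              y1-y4;
              y1-y4 WITH y1-y4 (q1-q6);
MODEL PRIORS: p1-p6 ~ IW(0, 10);          ! zero-mean, small-variance priors,
              q1-q6 ~ IW(0, 10);          ! one set per class
```

Because the residual variances and covariances are mentioned in each class-specific part of MODEL, they are estimated separately per group rather than held equal across classes. The inverse-Wishart degrees of freedom govern how tightly the residual covariances are held near zero; larger values imply a smaller prior variance.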


Thank you. I must have done something wrong the first time I tried, because now it seems to work.


Dear Drs. Muthén, I have some questions about the Bayesian approach to measurement invariance (approximate measurement invariance) implemented in Mplus.
1. The Muthén and Asparouhov (2013) paper says that if a measurement parameter differs significantly from its average across groups, it is considered noninvariant. What test statistic does Mplus use to test this?
2. Based on User's Guide example 5.33, it seems that all factor loadings are freely estimated (i.e., there is no equality constraint to a reference indicator across groups) and then each parameter's difference from its across-group average is tested. My question is: how is the factor scale in each group determined when all loadings are freely estimated? Is the factor variance standardized?
3. In the frequentist approach, when we correctly choose an invariant factor loading as the reference indicator and fix it to 1 across groups, we can say that the factors are on the same metric across groups. If approximate measurement invariance allows all loadings to vary across groups, I wonder whether the estimated loadings are still on the same metric.
Could you let me know about this? Thanks a lot in advance!


1. Essentially a z-test.
2. The factor metric is set by fixing the factor variances to 1 in one group. UG ex 5.33 fixes them in the 10th group.
3. They are on the same metric to the approximation given by the small-variance prior.
Note also the alignment possibility presented in Asparouhov, T. & Muthén, B. (2013). Multiple-group factor analysis alignment. Forthcoming in Structural Equation Modeling, and also discussed in van de Schoot, R., Tummers, L., Lugtig, P., Kluytmans, A., Hox, J. & Muthén, B. (2013). Choosing between Scylla and Charybdis? A comparison of scalar, partial and the novel possibility of approximate measurement invariance. Frontiers in Psychology, 4, 115. doi: 10.3389/fpsyg.2013.00770.
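For point 2, a minimal sketch of how the metric is set, in the spirit of UG ex 5.33 (item count, class labels, and prior variance are illustrative; see the User's Guide for the exact input): all loadings and intercepts are free and labeled per class, the factor variance is fixed to 1 in one class, and small-variance priors are placed on the across-group differences:

```
ANALYSIS:     TYPE = MIXTURE;
              ESTIMATOR = BAYES;
              MODEL = ALLFREE;
MODEL:        %OVERALL%
              f BY y1-y6* (lam#_1-lam#_6);   ! free loadings, labeled per class
              [y1-y6] (nu#_1-nu#_6);         ! free intercepts, labeled per class
              %c#10%
              f@1;                           ! factor variance fixed to 1 in one group
MODEL PRIORS: DO(1,6) DIFF(lam1_#-lam10_#) ~ N(0, 0.01);
              DO(1,6) DIFF(nu1_#-nu10_#) ~ N(0, 0.01);
```

The N(0, 0.01) difference priors hold each loading and intercept close to its counterparts in the other groups without forcing exact equality, which is what puts the factors on approximately the same metric across groups.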


Respected Prof. Muthén, I am trying to use BSEM to establish approximate MI for four latent variables (LVs). Two LVs have 4 items each and the other two have 3 items each. The problem I am facing is finding a systematic approach to establishing MI for multiple LVs. I am giving difference priors for the factor loadings and the item intercepts with MODEL = ALLFREE. While it is easy to manage BSEM for one LV, the moment I introduce the second LV the PPP value, which was above .20, goes to zero. How do I figure out where the problem lies after introducing new LVs? Is it in the prior variance for DIFF? It becomes an even greater challenge when I introduce the third LV, at which point the model doesn't converge. So may I kindly ask: what is the step-by-step approach for establishing approximate MI when handling multiple LVs? Please advise. My sincere gratitude in advance.


Before you investigate measurement invariance, you want to (1) investigate each group separately with respect to all the latent variable constructs, (2) then study invariance for each construct, and (3) then invariance for the set of constructs. If steps 1 and 2 have good fit, step 3 is likely to have good fit as well.


Dear Prof. Muthén, thanks a lot for the quick response.
1.) Should steps 1 and 2 be established in ML or Bayes? In ML, when I am checking the constructs for each group, I can use MODINDICES to note whether there are residual covariances that could improve fit. However, if I use Bayes I don't have MODINDICES, so it becomes a challenge to identify the reason for poor fit. How do I improve fit in Bayes?
2.) Moreover, if I use Bayes for steps 1 and 2, I ought to use informative priors for improved fit. Assuming that I have fit for the individual groups: when I come to invariance checking, should I still keep these informative priors or just use the DO DIFF priors alone?


First, I established step 1 in ML and the model fitted very well. Second, I established step 2 in Bayes using mixture/KNOWNCLASS individually for each construct. For the first construct I had a PPP value of .485 (no * in the difference output), and for the second construct a PPP value of .500 (no * in the difference output). Third: when I try to bring these two constructs into the same CFA model, the PPP goes to .041. So I tried introducing a difference prior for the covariance between the two constructs, but Mplus reports the fatal error "set difference for only slopes, intercepts, and factor loadings." Prof., kindly please advise how I can find out where the problem is and improve the PPP value before proceeding to add other constructs. My sincere gratitude in advance for your time and guidance!


(Continued from the comment above.) For the CFA with both constructs, too, there was no * in the difference output.


First, there are no fixed rules for how to go about these types of analyses. You just have to take an approach that is sensible from a statistical point of view. We cannot teach how to do this in short posts on Mplus Discussion. Second, if ML for both constructs fits well then Bayes should fit well also. Changing the prior for factor covariance differences is not the right approach to take. 


Thank you, Prof. Muthén. I am going back to doing this in ML, as I don't know why the PPP value goes down just by bringing two constructs, which individually fit well, into a single CFA. (And my sincere apologies for the multiple posts; I realized it later.)
