

Respected Prof. Muthen, I am trying to use BSEM to test multigroup (3 groups) invariance, both measurement and structural.

1.) I am using Muthén & Asparouhov (2013), BSEM measurement invariance analysis, as the reference for conducting measurement invariance. However, I am not sure how to proceed with structural invariance within BSEM. Kindly guide me to an article on conducting structural invariance in the BSEM framework.

2.) I found the following two articles while trying to conduct structural invariance. These articles elaborate on "testing informative hypotheses"; is that the same as testing structural invariance? Also, these articles refer to an R script. Is it necessary to always use the script for conducting structural invariance?

van de Schoot, R., Verhoeven, M., & Hoijtink, H. (2012). Bayesian evaluation of informative hypotheses in SEM using Mplus: A black bear story. European Journal of Developmental Psychology.

van de Schoot, R., Hoijtink, H., Hallquist, M. N., & Boelen, P. A. (2012). Bayesian evaluation of inequality-constrained hypotheses in SEM models using Mplus. Structural Equation Modeling, 19, 593-609.

My apologies for an elaborate question. Basically, I am slightly confused about the steps to be taken to conduct structural invariance in a BSEM setup. Please guide me on the steps, and on Mplus scripts with examples. Thanking you so very much in advance. Sincerely, Arun 


Please see the analysis of the Science model at the end of our rejoinder to the comments on Muthén, B. & Asparouhov, T. (2012). Bayesian SEM: A more flexible representation of substantive theory. Psychological Methods, 17, 313-335. If you send me an email I can send you the scripts for the Science model analyses in the rejoinder. This is not the same as the van de Schoot et al. articles. 


Dear Prof. Muthen Thank you very much. Sincerely Arun 


Regarding testing invariance hypotheses across groups using multiple-group BSEM in line with Muthén & Asparouhov (2013), BSEM measurement invariance analysis, Web Note 17, you would proceed just as with approximate invariance hypotheses for measurement parameters. So for a particular structural parameter you impose equality with zero-mean, small-variance priors and look to see where you find significant differences across groups. You can use Model Constraint to create a new parameter which is the difference between a structural parameter and its average across groups. This is done automatically and printed in the output for measurement parameters, but you can do it also for structural parameters. I don't have any scripts for doing this. 
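As an illustration only (this is not an official script; the variable names, factor structure, and prior variance are all assumptions), the setup described above might be sketched like this for three known groups, with a structural slope labeled b1-b3 across groups, a small-variance prior on the group differences, and Model Constraint computing the deviations from the average:

```text
VARIABLE:   NAMES = y1-y6 g;
            USEVARIABLES = y1-y6;
            CLASSES = c(3);
            KNOWNCLASS = c(g = 1 2 3);   ! Bayesian multiple-group analysis via known classes
ANALYSIS:   TYPE = MIXTURE;
            ESTIMATOR = BAYES;
MODEL:      %OVERALL%
            f1 BY y1-y3;
            f2 BY y4-y6;
            f2 ON f1;
            %c#1%  f2 ON f1 (b1);
            %c#2%  f2 ON f1 (b2);
            %c#3%  f2 ON f1 (b3);
MODEL PRIORS:
            DIFF(b1-b3) ~ N(0, 0.01);    ! zero-mean, small-variance priors on the group differences
MODEL CONSTRAINT:
            NEW(bavg d1 d2 d3);
            bavg = (b1 + b2 + b3) / 3;   ! average of the structural slope across groups
            d1 = b1 - bavg;              ! group-specific deviations from the average
            d2 = b2 - bavg;
            d3 = b3 - bavg;
```

A deviation whose 95% credibility interval excludes zero would then point to a significant group difference in the structural parameter.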


Thank you Prof. Muthen. I will try as per your suggestions. 

Tait Medina posted on Thursday, January 23, 2014  11:44 am



I am trying to think through the difference between the following two approaches to detecting measurement noninvariance when indicator variables are continuous: (1) using an ML approach to multiple-group analysis where all measurement parameters are constrained to be invariant across groups and modification indices are used to determine which parameters should be freely estimated; (2) using a BSEM approach to detecting invariant and noninvariant items as described in Web Note 17. I am wondering if you know of any work that compares these two approaches, and whether the same parameters are found to be invariant/noninvariant? Thank you. 


There is hardly any work on this to date. One related article, which is on our website, is van de Schoot, R., Tummers, L., Lugtig, P., Kluytmans, A., Hox, J., & Muthén, B. (2013). Choosing between Scylla and Charybdis? A comparison of scalar, partial and the novel possibility of approximate measurement invariance. Frontiers in Psychology, 4, 1-15. doi: 10.3389/fpsyg.2013.00770. Two other approaches suitable for working with many groups are discussed in this paper on our website: Muthén and Asparouhov (2013). New methods for the study of measurement invariance with many groups. Mplus scripts are available here. 

Tait Medina posted on Thursday, January 23, 2014  2:24 pm



Thank you. 

Tait Medina posted on Thursday, April 03, 2014  12:31 pm



I am conducting a Bayesian multiple-group model with approximate measurement invariance using 2 groups (for now). I am having difficulty understanding how the second column (headed Std. Dev.) is obtained. The first column appears to represent the average of the estimates across the groups, and the last columns the group-specific deviations from the average, which are starred if they are more than 2 times the std. dev. (using 0.01 as the prior variance). But how is the standard deviation obtained?

DIFFERENCE OUTPUT

                        Average   Std. Dev.   Deviations from the Mean
LAM1_1  LAM2_1    1      0.935      0.027        0.006    0.006
LAM1_2  LAM2_2    2      0.938      0.026        0.013    0.013
LAM1_3  LAM2_3    3      1.000      0.031        0.030    0.030


The standard deviation reported in this output is the standard deviation for the average parameter. After the model is estimated and the posterior distribution for every parameter is obtained, we compute the posterior distribution for the average parameter; from there we get the standard deviation. The significance is also evaluated in Bayes terms: for each group-specific parameter we compute the posterior distribution of the difference between the average parameter and the group-specific parameter, and if 0 is not between the 2.5% and 97.5% quantiles of that posterior distribution we conclude that the difference is significant. 


I have a question about using a BSEM approach to detect noninvariance (as described in Web Note 17) of thresholds and loadings when outcome variables are dichotomous. Are the scale factors fixed at one in all groups? If yes, how might this assumption of equivalent scale factors (or residual variances) impact noninvariance detection/testing of thresholds and loadings? In "traditional" multiple-group testing with categorical outcomes (Web Note 4, for example), one approach is to constrain thresholds and loadings to be invariant across groups, fix scale factors (or residual variances) at one in one group and estimate them in the remaining groups, and then use modification indices to relax thresholds and loadings in tandem until a model that is as well-fitting as the configural model is obtained. Both the BSEM and Alignment approaches to noninvariance detection/testing with categorical outcomes seem to assume that the scale factors are invariant. Is this correct? If yes, is there a way to evaluate the validity of this assumption? Also, if BSEM is used for the purposes of noninvariance detection, do you recommend relaxing the invariance constraints on thresholds and loadings in tandem? Thank you. 


The steps in testing for measurement invariance are the same for most estimators. The Version 7.1 Language Addendum, which is on the website with the user's guide, describes the models to use in various situations. 
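As a sketch (variable and group names are placeholders), the Version 7.1 convenience options can fit the configural, metric, and scalar models in a single run:

```text
VARIABLE:  NAMES = y1-y4 g;
           GROUPING = g (1 = grp1  2 = grp2);
ANALYSIS:  ESTIMATOR = MLR;
           MODEL = CONFIGURAL METRIC SCALAR;  ! fits all three invariance models
                                              ! and prints the difference tests
MODEL:     f BY y1-y4;
```

The output then reports fit for each model together with tests comparing the nested models, which is one way to carry out the steps described in the Language Addendum.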


I am conducting a Bayesian multiple-group model. When I looked at the TECH8 output, I found "Improper Prior" messages associated with many parameters under "Simulated prior distributions." Could you please tell me what these messages mean?

Simulated prior distributions
Parameter        Prior Mean    Prior Variance    Prior Std. Dev.
Parameter 1      Improper Prior
Parameter 2      Improper Prior
Parameter 3      Improper Prior
Parameter 4      Improper Prior
Parameter 5      Improper Prior
Parameter 6      Improper Prior
Parameter 7      Improper Prior
Parameter 8      Improper Prior
Parameter 9      Improper Prior


Please send the output and your license number to support@statmodel.com. 

deana desa posted on Friday, August 08, 2014  2:02 am



In Table 7 of the Multiple-Group Factor Analysis Alignment paper, there are two indices called Fit Function Contribution and R-square. Is there any way to see these two indices and their values in the Mplus output? 


That should be available in Mplus Version 7.2, using the Align option in the Output command. 
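For reference, a sketch of an alignment run requesting that output (variable and group names are placeholders; FREE alignment is shown, but the choice of alignment method depends on the application):

```text
VARIABLE:  NAMES = y1-y4 g;
           CLASSES = c(2);
           KNOWNCLASS = c(g = 1 2);
ANALYSIS:  TYPE = MIXTURE;
           ESTIMATOR = ML;
           ALIGNMENT = FREE;
MODEL:     %OVERALL%
           f BY y1-y4;
OUTPUT:    ALIGN;        ! requests the alignment details, including
                         ! fit function contributions and R-square values
```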

deana desa posted on Tuesday, August 12, 2014  4:25 am



If I have an output like the following, I can see where to find the values for the Fit Function Contribution index, but it is unclear to me how to calculate R-square: var(v0 - v - alpha_g * lambda) / var(v0). So, is v0 an estimated value from the configural model for each group (am I wrong here?), and are v and lambda computed from the following output or from another part of the output? It is said in the paper that these are averages. But are they the averages from the following output or from the set of "invariant groups"? Could you explain more on this? I think I am missing something here.

Item Parameters In The Alignment Optimization Metric

Loadings: Variables (Rows) by Groups (Columns)
  0.681   1.312   ...

Fit Function Contribution By Variable
  158.121   134.042   113.305   33.941

Intercepts: Variables (Rows) by Groups (Columns)
  0.415   0.139   ...

Fit Function Contribution By Variable
  229.849   199.831   213.806   32.836

Factor Means
  0.113   0.284   ...

Factor Variances
  0.879   1.032   ...


These R-square values are also given in the Version 7.2 output when requesting the Align option in the Output command. 
