Respected Prof. Muthen, I am trying to use BSEM to test multigroup (3 groups) invariance - measurement & structural.
1.) I am using Muthén & Asparouhov (2013), BSEM measurement invariance analysis, as the reference for conducting measurement invariance. However, I am not sure how to proceed with structural invariance within BSEM. Kindly guide me to an article on conducting structural invariance in the BSEM framework.
2.) I found the following two articles while trying to conduct structural invariance. These articles elaborate on "testing informative hypotheses"; is this the same as testing structural invariance? Also, these articles refer to an R script. Is it necessary to use the script for conducting structural invariance?
Rens van de Schoot, Marjolein Verhoeven & Herbert Hoijtink (2012). Bayesian evaluation of informative hypotheses in SEM using Mplus: A black bear story. European Journal of Developmental Psychology.
Van de Schoot, R., Hoijtink, H., Hallquist, M. N., & Boelen, P. A. (2012). Bayesian evaluation of inequality-constrained hypotheses in SEM models using Mplus. Structural Equation Modeling, 19, 593-609.
My apologies for the elaborate question. Basically, I am slightly confused about the steps to be taken to conduct structural invariance in a BSEM setup. Please guide me on the steps and Mplus scripts with examples. Thanking you so very much in advance. Sincerely, Arun
You would proceed just like with approximate invariance hypotheses for measurement parameters. So for a particular structural parameter you impose equality with zero-mean, small-variance priors and look to see where you find significant differences across groups. You can use MODEL CONSTRAINT to create a new parameter which is the difference between a structural parameter and its average across groups. This is done automatically and printed in the output for measurement parameters, but you can do it also for structural parameters. I don't have any scripts for doing this.
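The MODEL CONSTRAINT approach described above can be sketched as follows, assuming a hypothetical structural slope f2 ON f1 in three groups, with group labels g2-g3 and parameter labels b1-b3 (all names are illustrative placeholders, not from an actual analysis):

```
MODEL:     f2 ON f1 (b1);            ! group 1 slope
MODEL g2:  f2 ON f1 (b2);            ! group 2 slope
MODEL g3:  f2 ON f1 (b3);            ! group 3 slope

MODEL CONSTRAINT:
  NEW(bave d1 d2 d3);
  bave = (b1 + b2 + b3) / 3;         ! average slope across groups
  d1 = b1 - bave;                    ! group-specific deviations
  d2 = b2 - bave;
  d3 = b3 - bave;
```

With ESTIMATOR = BAYES, the output reports posterior credibility intervals for d1-d3; a deviation whose 95% interval excludes zero points to a group whose structural parameter differs from the average.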
Thank you Prof. Muthen. I will try as per your suggestions.
Tait Medina posted on Thursday, January 23, 2014 - 11:44 am
I am trying to think through the difference between the following two approaches to detecting measurement non-invariance when indicator variables are continuous:
(1) using an ML approach to multiple-group analysis where all measurement parameters are constrained to be invariant across groups and modification indices are used to determine which parameters should be freely estimated.
(2) using a BSEM approach to detecting invariant and non-invariant items as described in Web Note 17.
I am wondering if you know of any work that compares these two approaches and if the same parameters are found to be invariant/non-invariant?
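For reference, approach (1) might be set up along the following lines - a sketch with hypothetical variable names, relying on the fact that Mplus holds loadings and intercepts equal across groups by default in a multiple-group analysis:

```
VARIABLE:  NAMES = y1-y6 g;
           GROUPING = g (1 2 3);
ANALYSIS:  ESTIMATOR = ML;
MODEL:     f BY y1-y6;          ! measurement parameters invariant by default
OUTPUT:    MODINDICES(ALL);     ! large MIs flag candidate non-invariant
                                ! parameters to free, one at a time
```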
There is hardly any work on this to date. One related article is
van de Schoot, R., Tummers, L., Lugtig, P., Kluytmans, A., Hox, J. & Muthén, B. (2013). Choosing between Scylla and Charybdis? A comparison of scalar, partial and the novel possibility of approximate measurement invariance. Frontiers in Psychology, 4, 1-15. doi: 10.3389/fpsyg.2013.00770.
which is on our website.
Two other approaches suitable for working with many groups are discussed in this paper on our website:
Muthén and Asparouhov (2013). New methods for the study of measurement invariance with many groups. Mplus scripts are available here.
Tait Medina posted on Thursday, January 23, 2014 - 2:24 pm
Tait Medina posted on Thursday, April 03, 2014 - 12:31 pm
I am conducting a Bayesian multiple group model with approximate measurement invariance using 2 groups (for now). I am having difficulty understanding how the second column (headed Std. Dev.) is obtained. The first column appears to represent the average of the estimates across the groups, and the remaining columns the group-specific deviations from the average, which are starred if they are more than 2 times the std. dev. (using .01 as the prior variance). But how is the standard deviation obtained?
The standard deviation reported in this output is the standard deviation for the average parameter. After the model is estimated and the posterior distribution for every parameter is estimated we compute the posterior distribution for the average parameter. From there we get the standard deviation. The significance is also evaluated in Bayes terms. For each group specific parameter we compute the posterior distribution for the difference between the average parameter and the group specific parameter and if 0 is not between the 2.5% and the 97.5% quantiles of that posterior distribution we conclude that the difference is significant.
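The kind of input that produces this output can be sketched as below, assuming six hypothetical indicators and two known groups; the DO/DIFF statements place the zero-mean, small-variance (.01) priors on the group differences. This is only a sketch - see Web Note 17 and the Version 7.1 Language Addendum for complete inputs, including how the factor metric is set:

```
ANALYSIS:      TYPE = MIXTURE;      ! groups entered via KNOWNCLASS
               ESTIMATOR = BAYES;
               MODEL = ALLFREE;     ! group-specific parameters
MODEL:         %OVERALL%
               f BY y1-y6* (lam1-lam6);
               [y1-y6] (nu1-nu6);
MODEL PRIORS:  DO(1,6) DIFF(lam#_1-lam#_2) ~ N(0, .01);
               DO(1,6) DIFF(nu#_1-nu#_2) ~ N(0, .01);
```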
I have a question about using a BSEM approach to detect non-invariance (as described in Web Note 17) of thresholds and loadings when outcome variables are dichotomous. Are the scale factors fixed at one in all groups? If yes, how might this assumption of equivalent scale factors (or residual variances) impact non-invariance detection/testing of thresholds and loadings?
In "traditional" multiple group testing with categorical outcomes (Web Note 4, for example), one approach is to constrain thresholds and loadings to be invariant across groups, fix scale factors (or residual variances) at one in one group and estimate in the remaining groups, and then use modification indices to relax thresholds and loadings in tandem until a model that is as well-fitting as the configural model is obtained.
Both the BSEM and Alignment approaches to non-invariance detection/testing with categorical outcomes seem to assume that the scale factors are invariant. Is this correct? If yes, is there a way to evaluate the validity of this assumption? Also, if BSEM is used for the purposes of non-invariance detection, do you recommend relaxing the invariance constraints on thresholds and loadings in tandem? Thank you.
The steps in testing for measurement invariance are the same for most estimators. The Version 7.1 Language Addendum which is on the website with the user's guide describes the models to use in various situations.
deana desa posted on Friday, August 08, 2014 - 2:02 am
In Table 7 of the Multiple-Group Factor Analysis Alignment paper, there are two indices called Fit Function Contribution and R-square. Is there any way to see these two indices and their values in the Mplus output?
That should be available in Mplus Version 7.2, using the Align option in the Output command.
deana desa posted on Tuesday, August 12, 2014 - 4:25 am
If I have an output like the following, I can see where to find the values for the Fit Function Contribution index, but it is unclear to me how to calculate R-square:
So, is v0 an estimated value from the configural model for each group (am I wrong here?), and are v and Lambda computed from the following output or from another part of the output? It is said in the paper that these are averages, but are they averages over the following output or over the set of "invariant groups"? Could you explain more on this; I think I am missing something here.
Item Parameters In The Alignment Optimization Metric

Loadings: Variables (Rows) by Groups (Columns)
  0.681  1.312  ...
Fit Function Contribution By Variable
  -158.121  -134.042  -113.305  -33.941

Intercepts: Variables (Rows) by Groups (Columns)
  -0.415  0.139  ...
Fit Function Contribution By Variable
  -229.849  -199.831  -213.806  -32.836

Factor Means
  0.113  -0.284  ...
Factor Variances
  0.879  1.032  ...
I am trying to estimate a BSEM model with approximate invariance on structural parameters. When I estimate my model with approximate invariance on the structural parameters, with a zero mean and small variance prior (.005 or .001), the structural parameters become more equal across the two groups compared to the same model when these structural parameters are estimated without approximate invariance (freely estimated in the two groups).
For example, in the model without approximate invariance, the two parameters are .274 and .276 in group 1 and .373 and .483 in group 2. When I impose approximate invariance on these parameters (prior variance = .005), the two parameters are .313 and .396 in group 1 and .327 and .418 in group 2. Hence, they become more similar when imposing approximate invariance. I thought the priors in this case referred to the differences between the parameters; how can that affect the size of the parameters themselves?
1. Can you recommend any literature that describes what type of analysis is being conducted when the MODEL CONSTRAINT command is used in BSEM to examine differences in structural parameters?
2. When I try to estimate differences in structural parameters between two groups in a BSEM using the MODEL CONSTRAINT command, the parameters tested change considerably (they decrease) compared to when I estimate the same model without the MODEL CONSTRAINT command and the parameter labels. Do you have any idea why that might be?
My model is a multigroup BSEM with zero mean, small-variance priors for cross-loadings and residual correlations in the measurement part within each group. Below is the structural part of my model and the MODEL CONSTRAINT setup. Is there something wrong with the setup?
This is not possible. The diff parameters are computed after the model estimation is completed. The diff parameter distribution is obtained from the estimated model parameter distribution. Try to perform this for a very simple model in an attempt to figure out where the coding problem is. If that doesn't work, send your input and data to email@example.com
deana desa posted on Sunday, May 31, 2015 - 4:05 pm
I'm running an alignment analysis with Mplus 7.3. In the output command I wrote OUTPUT: TECH1 TECH8 ALIGN;
May I know where in the align output section to find/get R-square measure for intercepts and loadings?
deana desa posted on Sunday, May 31, 2015 - 4:31 pm
Is the R-square/explained variance/invariance index in the align output section the R-square derived in Eqs. 13-14 of Web Note 18?
deana desa posted on Thursday, June 11, 2015 - 7:02 am
I am running BSEM. This is what I found with MODEL=ALLFREE:
95% CI for the difference between the observed and the replicated chi-square values (1259.638, 1558.687), PPP = 0, PSR close to 1.
Then, in follow-up runs from the above model, residual correlations are added and the associated priors are specified as IW distributed (the priors for the differences in lambdas and taus are the same as before). This is what I found:
1. When d=1500: 95% CI for the difference between observed and replicated chi-square values (61.244, 339.394), PPP = .004, PSR at the 10,000th iteration 1.147.
2. When d=1000: 95% CI (-24.500, 249.989), PPP = .057, PSR at the 10,000th iteration 1.187.
3. When d=750: 95% CI (-66.752, 206.599), PPP = .142, PSR at the 10,000th iteration 1.08.
4. When d=550: 95% CI (-89.878, 177.410), PPP = .257, PSR at the 10,000th iteration 1.08.
My questions are: 1. Models 2-4 showed PPP > .05, but the speed of convergence is best for model 3. Am I wrong here? Also, can I use model 3 as my final model, on the grounds that it fits the data best?
2. From models 2-4, can I select as my final model the one with the smallest difference in the 95% CI of the chi-square values?
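For readers following along, the setup being varied in these runs might look like the sketch below (hypothetical factor structure and labels): the residual covariances are labeled and given an inverse-Wishart prior whose degrees-of-freedom parameter d is what changes across the four runs:

```
MODEL:         f1 BY y1-y5*;  f1@1;
               f2 BY y6-y10*; f2@1;
               y1-y10 WITH y1-y10 (c1-c45);  ! all residual covariances
MODEL PRIORS:  c1-c45 ~ IW(0, 1000);         ! d = 1000 here; rerun with
                                             ! 1500, 750, 550 as above
```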
I assume that you are following the recommended steps 5 and 6 in the paper on our web site:
Asparouhov, T., Muthén, B. & Morin, A. J. S. (2015). Bayesian structural equation modeling with cross-loadings and residual covariances: Comments on Stromeyer et al. Accepted for publication in Journal of Management.
Also, don't use a fixed number of iterations like your 10,000. Instead use the BITERATIONS option with a minimum in parentheses, for example BITERATIONS = (10000); in the ANALYSIS command, which gives the minimum number of iterations - this way you are certain that convergence has occurred in the last iteration.
If you have followed steps 5 and 6 and changed to BITERATIONS, an outcome like your model 3 is where you can stop. You can then decide if you want to use this BSEM model or have it be the basis for a CFA model as in our paper.
deana desa posted on Saturday, June 13, 2015 - 3:01 pm
Thanks, Dr. Muthen!
Another general question: why, when using BSEM, is the DIFF function not allowed for taus (thresholds)? Is there any way to leave the differences in taus unconstrained, or to give them some priors?
1. Not by default - the metric is whatever you set it to be in the Model command.
2. Not unless the mean and variance of the factor is 0, 1 in the Model.
deana desa posted on Friday, August 14, 2015 - 2:44 am
I set the metric to 0 and 1 for both the BSEM approximate invariance model and the scalar model, then requested the factor scores for further analysis. I computed the correlation between the two sets of scores across individual observations; it is very high, at .9. When summarized by group means, however, the correlation is small (.45). The mean scores from the scalar invariance model range from -4 to 6, but those from BSEM are always close to 0.
Do you have any idea why the group means computed from BSEM are always close to 0?
We need to see the relevant output and explanations to say. Send to support along with your license number.
Lois Downey posted on Wednesday, November 04, 2015 - 10:49 am
I'm running my first BSEM models, using the procedure outlined for the Holzinger and Swineford example in the Muthen-Asparouhov 2012 article, and modifying the script used for run7.out (the model without cross-loadings) to accommodate my data.
I'm testing a 4-factor, 12-indicator model. Eleven of the indicators are ordered categorical variables, the twelfth is a dichotomy, and I'm using WLSMV estimation. The dataset includes 2,478 cases.
My first two attempts (with fbiter = 10000 and 40000, respectively) failed the Kolmogorov-Smirnov distribution test, but an increase to 100,000 iterations appears to have solved that problem.
However, in the section of the Tech 8 output that lists "simulated prior distributions," the thresholds for all of the ordered categorical indicators have an indication "improper prior." The Tech 1 output indicates the priors for each of these thresholds as ~N(0.000, infinity).
What do I need to do to correct the problem of improper priors? (I assume that this problem renders the rest of the results questionable.)
You can ignore both the K-S test and the improper prior statement. We have found the K-S test to be too strict, and improper priors can still lead to proper posteriors, which is all that matters.
But I wonder why you say you use WLSMV estimation - you must mean Bayes.
Lois Downey posted on Thursday, November 05, 2015 - 7:24 am
Thank you. Yes, of course, I am using Bayes estimation -- comparing the Bayes results to those from the same model estimated with WLSMV, which had yielded a statistically significant chi-square test of fit. Sorry for misstating!
I now have another question. The 2012 article indicates that "the Bayes estimates can be used as fixed parameters in an ML analysis to get the likelihood-ratio test value for the Bayes solution." When I do that using the ML estimator without montecarlo integration, I get a message that there are 50,625 integration points, and I should reduce the number of integration points or use montecarlo integration. When I do the latter, I get a message that the frequency table for the latent class indicator model part is too large to compute the chi-square test; then, the loglikelihood is given for only H0. Does this imply that for a model as large as mine, with ordered categorical indicators, I cannot obtain the LRT?
The frequency table test is a separate thing from the H0 logLikelihood so you are fine.
Lois Downey posted on Friday, November 06, 2015 - 6:53 am
But don't I need the likelihood values for both H0 and H1 to compute the LRT? Or do I get the LRT from something else on the output from the run with the ML estimator and the fixed parameter estimates?
My output shows only the likelihood for H0, not the likelihood for H1.