BSEM Measurement Invariance
 S.Arunachalam posted on Wednesday, June 26, 2013 - 8:08 am
Sample size is 300. I have a total of 5 latent variables with multiple indicators (continuous): 3 independent and 2 dependent.
Both the IVs and DVs are answered by the same respondent, but with regard to three different firms; i.e., all five variables are scales asking the respondent to answer about Firm A, Firm B, and Firm C. The survey looks like:
Firm A | Firm B | Firm C
Factor 1
1. Rate satisfaction with service (1 to 5)
2. …
3. …
Factor 2
1. Rate your ability … (1 to 5)
2. …
3. …
Factor 5
1. … , 2. … , 3. …

For the CFA, to account for the correlated residuals I used Bayesian SEM (BSEM) and got robust fit indices and factor loadings; the scales show very good fit.
Can I check measurement invariance using BSEM as in Web Note 17? However, I don't have a grouping variable, so I can't use TYPE=MIXTURE and KNOWNCLASS. Please advise how to use BSEM measurement invariance for a single group.
 Bengt O. Muthen posted on Wednesday, June 26, 2013 - 2:29 pm
BSEM measurement invariance for a single group is like the longitudinal example in Web Note 17.
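For a single group, this amounts to treating the Firm A/B/C versions of each scale like repeated measures. A minimal sketch for one factor with three indicators per firm (all variable names, labels, and the prior variance are placeholders, not taken from the poster's data):

MODEL:
fa BY a1-a3* (lam1_1-lam1_3);   ! Firm A version of the factor
fb BY b1-b3* (lam2_1-lam2_3);   ! Firm B version
fc BY c1-c3* (lam3_1-lam3_3);   ! Firm C version
[a1-a3] (nu1_1-nu1_3);
[b1-b3] (nu2_1-nu2_3);
[c1-c3] (nu3_1-nu3_3);
fa@1; [fa@0];                   ! reference firm sets the metric
[fb]; [fc];                     ! factor means free for the other firms

MODEL PRIORS:
DO(1,3) DIFF(lam1_#-lam3_#) ~ N(0, 0.01);
DO(1,3) DIFF(nu1_#-nu3_#) ~ N(0, 0.01);

The DIFF priors hold each item's loading and intercept approximately equal across the three firm versions, just as across time points in the web note.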
 S.Arunachalam posted on Thursday, June 27, 2013 - 6:09 am
Dear Prof. Muthen. Thank you very much. I am using the same setup as in the longitudinal example; however, the model is not converging. These are the steps I am following:
1) I am using the correlated CFA model as per the BSEM 2012 article. The model worked very well.
2) I am testing BSEM measurement invariance using approximate invariance. The model is not converging for 50,000 or 100,000 iterations.
Please advise.
 S.Arunachalam posted on Thursday, June 27, 2013 - 8:14 am
Dear Prof. Muthen: I am in touch with you about this through email. Kindly ignore the question above.
 S.Arunachalam posted on Thursday, June 27, 2013 - 9:25 am
For my analysis (point 2 in my post above) I am still getting a posterior predictive p-value (PPP) of 0. I first tried a DIFF variance of 0.1, then 0.01, and then 0.001. The PPP is still 0.
I do have three indicators flagged with * in the difference output, though.
 Bengt O. Muthen posted on Thursday, June 27, 2013 - 9:52 am
PPP of zero can have many causes. We need to see more details to give advice - please send input, output, data and license number to support.
 Daniel Seddig posted on Sunday, February 02, 2014 - 8:05 am
Hello. Aside from considering the benefit and meaning: I am wondering whether it is possible to specify approximate measurement invariance across time and groups simultaneously in an Mplus Bayesian CFA? I haven't found a way to expand the label-assigning feature together with TYPE=MIXTURE, KNOWNCLASS, and the DO/DIFF options to include both types of invariance. Is there one?
 Bengt O. Muthen posted on Monday, February 03, 2014 - 9:45 am
I'll email an example to you.
 Yoonjeong Kang posted on Thursday, February 06, 2014 - 12:37 pm
Dear Drs. Muthen,

I have a question about prior distributions in testing approximate measurement invariance. In the Muthén & Asparouhov (2013) article, prior distributions for DIFFERENCES between parameters were used to test approximate measurement invariance.

Loading1-Loading2 ~ N(0,0.01)

Because a variance of 0.01 may represent a different magnitude of variability in the differences depending on the scales of the factors' indicators, I think that priors for RATIOS of two parameters could be used instead of priors for differences between parameters.
So we might assign priors such as

Loading1/Loading2 ~N(1, small variance)

I tried to use this in Mplus, but Mplus gives an error message:
“Unknown parameter label: Loading1/Loading2”

Q1. What do you think about using priors for ratios, rather than differences, in tests of approximate measurement invariance?
Q2. I don't know why Mplus fails to recognize "Loading1/Loading2" as a label although it recognizes "Loading1-Loading2". I also created a new parameter for the ratio (L1 = Loading1/Loading2) under MODEL CONSTRAINT and then assigned a prior to the new parameter (L1), but Mplus still gives the error message (unknown parameter label). Is there any way to use priors for ratios in this case?

Thanks a lot in advance!!!

Yoonjeong
 Bengt O. Muthen posted on Thursday, February 06, 2014 - 2:16 pm
The scale of the variables does influence the prior variance choices as you say. The size of a loading is related to the SD of the variable. Also, different variables may have very different SDs. But if you are concerned about this, I think you could transform your variables to be on a more similar scale and then check the sensitivity to prior variances.

If you want to work with ratios, you have to declare those parameters as "NEW" parameters in Model Constraint before applying priors to them.
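For the rescaling route, a one-line sketch (x1-x4 are placeholder variable names):

DEFINE: STANDARDIZE x1-x4;

This puts the indicators on a z-score metric before estimation, so a given prior variance for loading differences has a comparable meaning across items.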
 Yoonjeong Kang posted on Friday, February 07, 2014 - 12:10 pm
Dear Dr. Muthen,
Thank you so much for your clarification!
Yes, it would be one option to transform the variables to be on a similar scale. I got it! To work with ratios, I followed your advice: I declared the parameters as NEW parameters in MODEL CONSTRAINT and applied priors to those new parameters, but it didn't work. Could you let me know if there is anything wrong in my code? (For simplicity, I didn't include priors for the intercept terms.)

MODEL:
%OVERALL%
F1 BY X1-X4;
X1-X4*;
F1*;[F1*];
%CG#1%
F1 BY X1-X4(CG1X1-CG1X4);
[X1-X4*](CG1IX1-CG1IX4);
X1-X4*;
F1@1;[F1@0];
%CG#2%
F1 BY X1-X4(CG2X1-CG2X4);
[X1-X4*](CG2IX1-CG2IX4);
X1-X4*;
F1*;[F1*];

MODEL CONSTRAINT:
NEW(LX1 LX2 LX3 LX4);
LX1=CG1X1/CG2X1;
LX2=CG1X2/CG2X2;
LX3=CG1X3/CG2X3;
LX4=CG1X4/CG2X4;
MODEL PRIORS:
LX1 ~ N(1,0.001);
LX2 ~ N(1,0.001);
LX3 ~ N(1,0.001);
LX4 ~ N(1,0.001);

*** ERROR in MODEL PRIORS command
Unknown parameter label: LX1
 Bengt O. Muthen posted on Friday, February 07, 2014 - 2:18 pm
I forgot that we don't yet have the option of giving priors for NEW parameters. One approach is to fix one lambda to 1 and let others have mean 1 and small-variance priors.
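One way to read this in input terms, sketched for a single item with placeholder labels (not the full model above): if the group-2 loading is fixed at 1, the group-1 loading plays the role of the ratio, and the prior can go on it directly since it is an ordinary labeled parameter:

%CG#1%
F1 BY X2* (CG1X2);
%CG#2%
F1 BY X2@1;            ! denominator loading fixed at 1

MODEL PRIORS:
CG1X2 ~ N(1, 0.001);   ! direct prior on a labeled loading; no NEW needed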
 Yoonjeong Kang posted on Friday, May 16, 2014 - 9:23 am
Dear Dr. Muthen,

When I took a look at the code in the technical report regarding approximate measurement invariance, I realized that only the variance and mean of the reference group are fixed at 1 and 0. Variances and means for the other groups were freely estimated, and all measurement parameters were freely estimated across groups.
With maximum likelihood, this is not sufficient for model identification. In multiple-group analysis, we additionally need to constrain at least one factor loading and intercept to be equal across groups.

Q1. I wonder how the model is identified with Bayesian, particularly approximate measurement invariance case. My guess is that assigning strong prior distributions to the differences between factor loadings and intercepts with Bayesian works similarly to ML identification method (constraining at least one lambda and intercept). Could you let me know whether this is correct reasoning?

Q2. If my guess is correct in Q1,
with a two-group CFA model with Bayesian estimation, I think that the model would be identified when (1) the variance and mean of the reference group are fixed at 1 and 0, and (2) strong prior distributions are assigned to the difference of only one factor loading and intercept (preferably of the reference indicator). What do you think?
 Bengt O. Muthen posted on Saturday, May 17, 2014 - 11:23 am
Q1. Yes. If the prior is strong enough, you essentially have the case of exact invariance which we know leads to identification.

Q2. Right. Partial invariance like this is possible in BSEM as well.
 Dmitriy Poznyak posted on Tuesday, June 02, 2015 - 12:16 pm
Hi, Mplus team,

Could you please point me to an example input file for the approximate measurement invariance test with categorical indicators? I would greatly appreciate your input.

Thank you,
Dmitriy
 Bengt O. Muthen posted on Tuesday, June 02, 2015 - 4:46 pm
Mplus Web Note 17 has that input.
 Christopher Bratt posted on Friday, June 12, 2015 - 5:27 am
Hi

I recently asked Linda M. for advice on how to define "approximate" in approximate measurement invariance. I use Bayesian estimation for the alignment.

Linda forwarded Tihomir's response:

- I would define approximate invariance as: non-invariance which is not statistically significant or is too small to be of practical importance, i.e., the differences between the parameters are either not statistically significant or too small in magnitude to be of practical significance.

Based on my experience with models using approximate measurement invariance, I still find it difficult to have a clear understanding of what is estimated to be "too small in magnitude to be of practical significance" (as opposed to loadings that are estimated not to be approximately equal). Also, I don't think the reviewer of my paper would be satisfied with this explanation.

Do Tihomir or Bengt have an explanation that is short/clear enough to be used in a paper?

Kind regards, Christopher
 Tihomir Asparouhov posted on Friday, June 12, 2015 - 9:51 am
Christopher

The full technical definition is given in the "INVARIANCE ANALYSIS" section, page 5, in

http://statmodel.com/download/webnotes/webnote18.pdf

Due to multiple testing and uncertainty about what constitutes practical significance (which of course is subjective and can vary from person to person and application to application), we have used a complicated definition based on multiple tests and low p-values.

Note, however, that this is all a by-product of the alignment estimation, and the estimation does not depend on that definition. You can choose to define it and use it in a different way.

Our intent however was to define it as exactly that "non-invariance which is not statistically significant or is too small to be of practical importance".

This discussion is very similar in nature to the discussion of "approximate fit" in SEM.

On the other hand, it is conceivable that we need some precise cutoff values in standardized metric (something similar to how EFA cross loadings of less than 0.3 are often considered of lesser practical importance). That would be a good research paper. I don't have a value I can recommend at this point.

Tihomir
 Christopher Bratt posted on Friday, June 12, 2015 - 10:25 am
Tihomir,

thanks a lot for the detailed answer!

Keep up the good work.

Best,
Chris
 Lois Downey posted on Sunday, November 08, 2015 - 5:52 pm
I have some very elementary questions about testing for measurement invariance with BSEM. I'm testing a model with one factor, four ordered categorical indicators, and two groups.

1) Would you please explain the relationship between the "MODEL=xxx" in the ANALYSIS command and the specification of MODEL PRIORS with a DIFF instruction? It seems to be possible to indicate "MODEL=SCALAR" in the ANALYSIS command, either with or without using a MODEL PRIORS statement with a DIFF instruction (with different results, depending upon whether the MODEL PRIORS statement is used). Similarly, I can indicate "MODEL=ALLFREE" in the ANALYSIS command, either with or without a MODEL PRIORS statement with a DIFF instruction (again giving different results, depending upon the presence or absence of the MODEL PRIORS statement). Can you explain the differences between these four combinations of commands?

2) I expected MODEL=SCALAR in the ANALYSIS command to produce unstandardized loadings and thresholds that were equal for my two groups, but it did not. What does MODEL=SCALAR mean in BSEM?

Thanks!
 Bengt O. Muthen posted on Monday, November 09, 2015 - 4:36 pm
1) BSEM taken together with DIFF is an advanced technique, and in the early stages of learning about it I would recommend using only the input style of UG ex 5.33. This uses MODEL=ALLFREE and MODEL PRIORS with DIFF. In the early learning stages I would not use MODEL=SCALAR together with DIFF.

2) We would have to see your output to say, but in the early learning stages I don't recommend getting into MODEL=SCALAR together with BSEM in the sense of using DIFF.
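For orientation, the skeleton of that input style looks roughly like this (a sketch with placeholder variable names and prior variances, patterned on UG ex 5.33):

VARIABLE:
CLASSES = cg(2);
KNOWNCLASS = cg(group = 1 group = 2);
ANALYSIS:
TYPE = MIXTURE;
ESTIMATOR = BAYES;
MODEL = ALLFREE;
MODEL:
%OVERALL%
f BY y1-y4* (lam#_1-lam#_4);
[y1-y4] (nu#_1-nu#_4);
MODEL PRIORS:
DO(1,4) DIFF(lam1_#-lam2_#) ~ N(0, 0.01);
DO(1,4) DIFF(nu1_#-nu2_#) ~ N(0, 0.01);

MODEL=ALLFREE frees the measurement parameters in all groups and fixes the factor mean and variance at 0 and 1 in the first group only, so it is the small-variance DIFF priors that hold the groups approximately together. With binary indicators the bracket statement refers to thresholds instead, e.g. [y1$1-y4$1] (tau#_1-tau#_4).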
 Lois Downey posted on Monday, November 09, 2015 - 5:44 pm
OK. Thanks very much.

The example shows how to set the loading differences to near-zero. But I don't see that it shows how to set the threshold differences to near-zero. Is there a way to do that with ordered categorical variables? I made a feeble attempt, but got an error message indicating that the DIFF option is not available for polytomous items. I'm not sure whether to interpret that as an indication of a syntax error on my part, or an indication that it's impossible to manipulate the differences in thresholds between the groups.
 Bengt O. Muthen posted on Wednesday, November 11, 2015 - 11:17 am
Bayes with DIFF for thresholds of polytomous items has not yet been implemented in Mplus.
 Lois Downey posted on Friday, November 13, 2015 - 11:59 am
OK. Got it! Disappointing, but I know that enhancements such as this take time.

I've now used the Bayes estimator to test a two-group model (four factors, 13 ordered categorical indicators) that allows a small amount of variance around zero for the cross-loadings [~ N(0, 0.005)] and a small amount of variance around zero for the between-group differences of the primary loadings [DO(1,13) DIFF(lam1_#-lam2_#) ~ N(0, 0.0001)].

The PP p-value for this model = 0.146 (95% CI = -27.815, 93.120). Would it be reasonable to interpret these results as suggesting that the model exhibits METRIC invariance -- even if scalar invariance can't be tested with indicators of this type, using Bayesian modeling?

Thank you.
 Bengt O. Muthen posted on Friday, November 13, 2015 - 5:39 pm
Yes.
 Lois Downey posted on Thursday, February 11, 2016 - 5:13 pm
In your 11/11/15 response to my 11/9/15 post (above), you indicated that it isn't currently possible to use the DIFF command to constrain between-group differences in thresholds to small non-zero values when the indicators are polytomous. I've now recoded my ordinal indicators into dichotomies and tried the model again.

I have two groups, four factors, and 13 indicators and want to test the fit of a model in which approximate scalar invariance is invoked. I used the following command to limit the size of between-group differences in thresholds:
DO(1,13) DIFF(tau1_#-tau2_#) ~ N(0, 0.005);

However, the resulting model produced what I would consider to be LARGE between-group differences in some of the thresholds. For example, for one indicator, one of the groups had a threshold that was 41% higher than the threshold for the other group. Would you expect a difference that great when the second parameter in the DIFF statement is 0.005? Or does this suggest that I've made an error somewhere else in the specification?

(I'm also allowing small non-zero values for cross-loadings and residual covariances, and small non-zero between-group differences in primary loadings. The estimated coefficients seem to reflect those constraints appropriately.)

Thanks for your help!
 Bengt O. Muthen posted on Friday, February 12, 2016 - 3:00 pm
The large difference can be due to the fact that you have a large sample and the data says that those thresholds are indeed different.
 Lois Downey posted on Tuesday, February 16, 2016 - 7:08 am
I would have expected a case such as the one you've mentioned (thresholds that are actually different between the groups) to have produced a model in which the thresholds were similar between groups (thus conforming to the priors I specified), but the fit was poor.

Instead, the model that was produced had good fit (PP p-value = 0.329), but large between-group differences for some of the thresholds.

Is it my large sample size (3,944 cases) that is responsible for producing this pattern, rather than the one I expected, when the thresholds are empirically different? (As is probably apparent, I don't really understand the meaning of the second parameter in the DIFF statement).

Thanks!
 Lois Downey posted on Tuesday, February 16, 2016 - 7:31 am
p.s. I can now see that looking at the percentage difference in thresholds between groups is not really appropriate -- given that a large percentage difference can arise from a very small absolute difference when the thresholds are near zero. So what is the best way to evaluate whether differences between thresholds are "close enough" to qualify as "approximately equal"?
 Bengt O. Muthen posted on Tuesday, February 16, 2016 - 6:42 pm
For any reasonably large sample, the data speak up when the prior is wrong (too close to zero), so I would expect what you are seeing.

If you don't know the meaning of DIFF, you need to study the writings on it.

Deciding what qualifies as approximately equal is hard. One aspect is: how different is the ordering of factor means from what you get with full scalar invariance.
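For a concrete reading of the prior (a worked example, not taken from the outputs in this thread): DIFF(tau1_1-tau2_1) ~ N(0, 0.005) places a normal prior with mean 0 and variance 0.005 on the difference tau1_1 - tau2_1, so about 95% of the prior mass lies within ±1.96 × √0.005 ≈ ±0.14 in the unstandardized threshold metric. A large sample can pull an estimated difference well outside that band when the thresholds truly differ.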
 Lois Downey posted on Tuesday, February 16, 2016 - 10:00 pm
Since I have difficulty understanding the writings on DIFF, let me ask another question. I reran the test for approximate scalar invariance, but this time, specified a much tighter variance for the differences between thresholds:

DO(1,13) DIFF(tau1_#-tau2_#) ~ N(0, 0.00001);

Priors for cross-loadings were ~ N(0, 0.005).
Priors for differences in lambdas were ~ N(0, 0.005).
Priors for residual covariances were ~ IW(0, 1000).

With these specifications, the unstandardized values for the thresholds were exactly equal (to 3 decimal places) for the two countries.

Other parameters appear to be within a reasonable range: cross-loadings small and non-significant (largest value= 0.099); residual covariances small and non-significant (largest value = 0.026); between-country differences in primary loadings small (largest difference = 0.070).

For this model, the PP p-value was 0.291. Are the parameter values summarized above, in combination with this p-value, sufficient to conclude that the two groups show approximate scalar invariance? Or is this still not enough information to arrive at such a conclusion?
 Bengt O. Muthen posted on Wednesday, February 17, 2016 - 4:38 pm
You need to read about the analysis strategies with respect to prior variances and the use of PP p-values in Appendix A of the paper on our website (except skip the residual covariance steps):

Asparouhov, T., Muthén, B. & Morin, A. J. S. (2015). Bayesian structural equation modeling with cross-loadings and residual covariances: Comments on Stromeyer et al. Journal of Management, 41, 1561-1577.
 Diep Nguyen posted on Saturday, April 30, 2016 - 9:00 am
Dear Dr. Muthen and colleagues,
I'm running a Bayesian approximate measurement invariance model with 65 groups (N = 17,000; please see the code below), but the model didn't converge.

analysis: type = mixture;
estimator = bayes;
!point = mean;
Bconvergence=0.01;
Biterations=500000 (20000);
processors = 8;
chains is 8;
bseed = 100;
model = allfree;

Could you please advise me how to resolve this issue?
Thank you so much!
Diep
 Linda K. Muthen posted on Saturday, April 30, 2016 - 9:03 am
Please send the output and your license number to support@statmodel.com.
 Peter Hilpert posted on Monday, June 20, 2016 - 4:41 pm
Dear Drs. Muthen

I have a question about the two-step BSEM procedure (4 items, 35 nations). The first model is clear (all loadings and intercepts freely estimated):

ANALYSIS:
MODEL IS allfree;...

MODEL:
%overall%
DC by dc01-dc04* (lam#_1 - lam#_4);
[dc01 - dc04] (nu#_1 - nu#_4);
DC@1;
[DC@0];

MODEL PRIORS:
DO(1,4) DIFF(lam1_# - lam35_#) ~ N(0, 0.01);
DO(1,4) DIFF(nu1_#-nu35_#) ~ N(0, 0.01);

The output indicates that some loadings and a few intercepts are significantly different from the average loading and intercept. This indicates a partial measurement invariance model, doesn't it? Thus, the second model should fix all loadings and intercepts to be approximately equal by default. How can I do that (as removing 'allfree' is not the solution)?

Second, I can see how I can set the loadings free for an item:

MODEL:
%overall%
DC by dc01-dc04* (lam#_1 - lam#_4);
[dc01 - dc04] (nu#_1 - nu#_4);
DC@1;
[DC@0];

%c#12%
DC by dc01 (lam12_1);

But I am not sure how to set a specific intercept free, as the following syntax does not work:

%c#15%
[nu15_1];

Thank you!
Best, Peter
 Bengt O. Muthen posted on Tuesday, June 21, 2016 - 10:48 am
I don't understand what you mean by saying:

"Thus, the second model should put all loadings and intercepts are fixed to be approximately equal by default."

Your first input already has all loadings and intercepts held approximately equal.

I would go to the second input you show where you let particular loadings and intercepts be free. You say you get it to work for the loading but not for the intercept. We need to see your full output to tell what's wrong - send to Support along with your license number.
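That said, one syntax detail worth checking (a guess, since the problem may lie elsewhere): inside a class-specific statement the brackets must contain the variable name, with the label in parentheses after it, e.g.

%c#15%
[dc01] (nu15_1);

Writing the label itself inside the brackets, as in [nu15_1];, makes Mplus look for a variable called nu15_1, which fails.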
 Freya Glendinning posted on Friday, November 18, 2016 - 3:00 am
Dear Dr Muthen,

I am thinking about running a Bayes measurement invariance analysis on a 3-factor scale measured at 3 time points.

I have an input example from Mplus Web Note 17 which looks at 9 items measured at 8 time points.

Are you aware of any examples of multiple factors across time?

Thank you very much!
 Bengt O. Muthen posted on Friday, November 18, 2016 - 2:06 pm
No, but it is the same principle.
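The same labeling pattern simply repeats per factor, and one DO/DIFF statement can cover all loadings if the item index runs across factors. A compressed sketch for two factors at two time points (placeholder names; for three time points, extend the label range, e.g. DIFF(lam1_#-lam3_#)):

MODEL:
f1t1 BY x1-x3* (lam1_1-lam1_3);
f2t1 BY x4-x6* (lam1_4-lam1_6);
f1t2 BY y1-y3* (lam2_1-lam2_3);
f2t2 BY y4-y6* (lam2_4-lam2_6);
f1t1@1; [f1t1@0];   ! time-1 factors fixed for identification
f2t1@1; [f2t1@0];
[f1t2]; [f2t2];     ! factor means free at later time points

MODEL PRIORS:
DO(1,6) DIFF(lam1_#-lam2_#) ~ N(0, 0.01);

Intercepts get their own labels and a parallel DIFF statement.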
 Tom Bailey posted on Monday, April 16, 2018 - 6:25 am
Dear Dr Muthen

I am trying to run an 'approximate measurement invariance' model for some longitudinal data I have (currently in wide format). I've pasted the model below (some bits abbreviated due to word count) and was hoping you might be able to advise on what I'm doing wrong to get the message...
THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY.
THE PSI MATRIX IS NOT POSITIVE DEFINITE.

Cheers

Tom

ANALYSIS:
ESTIMATOR IS BAYES;
Bconvergence = 0.01;
Biterations = 500000 (20000);
processors = 8;
chains = 8;
bseed = 100;

MODEL:
SDQT1 BY BDEMOTA0* (L1)
   BDCONDA0 (L2)
   BDHYPEA0 (L3)
   BDPEERA0 (L4)
   BDPROSA0 (L5);
SDQT2 BY CDEMOTA0* (L1)
   CDCONDA0 (L2)
   CDHYPEA0 (L3)
   CDPEERA0 (L4)
   CDPROSA0 (L5);
SDQT3 & SDQT4 etc. etc.

EDEMOT00 WITH BDEMOTA0 CDEMOTA0 DDEMOTA0 ; !allow correlated residuals across time
etc. etc.

Model priors:
DIFF(L1-L5) ~ N(0, 0.01);
 Bengt O. Muthen posted on Tuesday, April 17, 2018 - 4:01 pm
Check the V8 UG description of DIFF on pages 779-780.
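A frequent stumbling point with this kind of setup, offered as a sketch rather than a diagnosis of this particular run: using the same labels (L1)-(L5) at every time point holds the loadings exactly equal across time, and DIFF(L1-L5) then puts the small-variance prior on differences among the five different items rather than across time. The pattern the UG describes gives each time point its own label set (placeholder-style labels; extend to l3_#, l4_# for the later waves):

SDQT1 BY BDEMOTA0* (l1_1)
   BDCONDA0 (l1_2)
   BDHYPEA0 (l1_3)
   BDPEERA0 (l1_4)
   BDPROSA0 (l1_5);
SDQT2 BY CDEMOTA0* (l2_1)
   CDCONDA0 (l2_2)
   CDHYPEA0 (l2_3)
   CDPEERA0 (l2_4)
   CDPROSA0 (l2_5);

Model priors:
DO(1,5) DIFF(l1_#-l2_#) ~ N(0, 0.01);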
 Dana McCoy posted on Wednesday, August 22, 2018 - 8:07 am
Hello,

I'm wondering if there is an option to save the MCMC chains when using ESTIMATOR = BAYES. We would like to use the draws to calculate posterior prediction intervals. Moreover, many of the newer Bayesian fit indices (WAIC and approximate LOO-CV) would require access to this information. Unfortunately, I do not see an option for this in the user's manual.

Thank you very much for your help.
 Tihomir Asparouhov posted on Thursday, August 23, 2018 - 9:13 am
You can save the MCMC-generated model parameters using:
SAVEDATA: BPARAMETERS = 1.dat;
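Each record in the saved file is one retained MCMC draw of all model parameters, identified by chain and iteration, so the draws can be read into other software for computations like the ones you mention.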