Message/Author
 Justin Jager posted on Thursday, December 18, 2008 - 9:13 am
I have a parallel process, piece-wise growth model of self-esteem and risky behavior, which includes an intercept and two growth pieces.

I wish to examine differences in concurrent correlations across growth pieces/time. The concurrent covariances between the 3 growth factors are -.027, -.016, and -.015, while the concurrent correlations are -.048, -.236, and -.252. When I constrain the covariances to be equal across all three growth pieces, the combined covariance is .018, but (due to dramatic std dev differences across the growth factors) the STDYX estimates/correlations still vary dramatically: -.028, -.232, -.256.

Because these std dev differences make comparing covariances like comparing apples to oranges, I would like to compare the standardized estimates via model comparison, since they account for the differences in std dev. Is there a way to do this in Mplus?
 Bengt O. Muthen posted on Thursday, December 18, 2008 - 1:58 pm
Yes, by using MODEL CONSTRAINT. In the MODEL command you give labels to the parameters for the covariances and variances of the growth factors. In MODEL CONSTRAINT you define the "new" parameters (see the User's Guide):

corr1 = cov/(sqrt(v11)*sqrt(v12));

etc

and then say

corr2 = corr1;
corr3 = corr1;
 Justin Jager posted on Thursday, December 18, 2008 - 3:34 pm
Fantastic! Thank you. I have one follow-up question though. Using the following syntax:

Irisk with Islfest(p1);
L1risk with L1slfest(p2);
L2risk with L2slfest(p3);

Irisk(p4); Islfest(p5);
L1risk(p6); L1slfest(p7);
L2risk(p8); L2slfest(p9);

Model Constraint:
NEW (corr1*-.048);
NEW (corr2*-.240);
corr1 = (p1/(sqrt(p4)*sqrt(p5)));
corr2 = (p2/(sqrt(p6)*sqrt(p7)));

corr2 = corr1;

I got the following output:
           Estimate     S.E.  Est./S.E.  Two-Tailed P-Value
CORR1        -0.058    0.021     -2.794               0.005
CORR2        -0.058    0.021     -2.794               0.005

So, I have gotten to the point of calculating the corrs and constraining them to be the same, but how do I determine if constraining them leads to a poorer fit? I tried to run the model with the corrs unconstrained, but I got fatal errors (I assume by design or necessity). I then tried some other things, but nothing worked. Thanks!
 Bengt O. Muthen posted on Friday, December 19, 2008 - 6:40 am
What you did looks right. The unconstrained model is your original one - no need to use Model Constraint. But leaving out corr2=corr1 should also work. Perhaps your variances weren't positive. If you are interested in knowing why your attempt failed, please send your input, output, data and license number to support@statmodel.com.
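For reference, the unconstrained comparison run can keep the NEW correlation definitions and simply drop the equality line; since corr1 and corr2 are then just functions of free parameters, fit is identical to the original model. A sketch using the labels from the syntax above:

```
MODEL CONSTRAINT:
NEW (corr1 corr2);
corr1 = p1/(sqrt(p4)*sqrt(p5));
corr2 = p2/(sqrt(p6)*sqrt(p7));
! no "corr2 = corr1;" statement, so the correlations are freely estimated
```

The chi-square or loglikelihood from this run can then be compared against the constrained run.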
 Michael Spaeth posted on Monday, August 03, 2009 - 4:05 am
I have a parallel growth process model and I want to compare covariances between growth factors across two groups via a multiple-group model. However, the growth factor variances differ across the two groups, and I am therefore concerned about the validity of tests of covariance equality in the multiple-group setting.
Should I use the approach described here instead?

Regards Michael
 Linda K. Muthen posted on Monday, August 03, 2009 - 10:34 am
You can either hold both variances and covariances equal or use MODEL CONSTRAINT to compare correlations.
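For the first option, a minimal multiple-group sketch (group names g1/g2 and factor names i, s are illustrative) holds variances and covariances equal by repeating the same labels in each group-specific MODEL command:

```
MODEL g1:
i WITH s (c1);
i (v1);
s (v2);

MODEL g2:
i WITH s (c1);   ! same labels as in g1 impose cross-group equality
i (v1);
s (v2);
```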
 Michael Spaeth posted on Monday, August 03, 2009 - 11:01 am
Thank you!
For MODEL CONSTRAINT, can I use syntax like in the postings above? E.g.:

model low:
i with sd (p1);
i (p2);
sd (p3);

model high:
i with sd (p4);
i (p5);
sd (p6);

model constraint:
New (corr1);
New (corr2);
corr1 = (p1/(sqrt(p2)*sqrt(p3)));
corr2 = (p4/(sqrt(p5)*sqrt(p6)));

corr1 = corr2;

Second question: should one use the chi-square difference test or MODEL TEST (Wald test) when using MODEL CONSTRAINT? I get slightly different results for the two procedures.
 Linda K. Muthen posted on Monday, August 03, 2009 - 11:16 am
The syntax you show is good.

The tests are asymptotically equivalent. You can use either.
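For completeness, the Wald version can be set up by defining the two correlations in MODEL CONSTRAINT without equating them, and then testing their difference in MODEL TEST. A sketch using the labels from the syntax above (exact statement form may vary slightly by Mplus version):

```
MODEL CONSTRAINT:
NEW (corr1 corr2);
corr1 = p1/(sqrt(p2)*sqrt(p3));
corr2 = p4/(sqrt(p5)*sqrt(p6));

MODEL TEST:
0 = corr1 - corr2;
```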
 Michael Spaeth posted on Monday, August 03, 2009 - 11:57 am
ok, thank you very much for your help. By the way, I guess I can use this helpful MODEL CONSTRAINT option to compare STDYX betas across groups in a multiple-group model too?
 Linda K. Muthen posted on Tuesday, August 04, 2009 - 9:23 am
Yes.
 Thomas Rodebaugh posted on Wednesday, June 27, 2012 - 11:57 am
we are attempting to use the input quoted above (Michael Spaeth, Monday, August 03, 2009 - 11:01 am) in an analysis using WLSMV. although we have successfully constrained the correlations, when we attempt to test the effect of constraining them, we are met with:

*** ERROR in ANALYSIS command
DIFFTEST is not available in conjuction with nonlinear constraints through
the use of MODEL CONSTRAINT. Request for DIFFTEST is ignored.

so, my question is: is it not possible to test the constrained correlations in this manner using DIFFTEST? if that is correct, is there another way (other than, for example, saving factor scores and handling the tests in ML instead)?

thanks in advance for any help!

best,

tom
 Bengt O. Muthen posted on Wednesday, June 27, 2012 - 9:02 pm
Try ML estimation.
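A sketch of that switch, assuming the WLSMV run involved categorical outcomes (in which case ML requires numerical integration):

```
ANALYSIS:
ESTIMATOR = MLR;
ALGORITHM = INTEGRATION;   ! needed with categorical outcomes under ML
```

Nested models can then be compared with a loglikelihood difference test (using the scaling correction factor under MLR) instead of DIFFTEST.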