Multiple group LGM -- some questions
Message/Author
 Peggy Clements posted on Monday, May 01, 2006 - 2:04 pm
I have some questions about conducting multiple group analyses with growth modeling.

Ultimately, I want to examine how family income's intercept and slope are associated with parenting's intercept and slope over three waves of data, and whether these associations vary by a family's poverty status. In other words, are increases in family income more beneficial for parenting in families that are initially poor than in families that aren't poor? Right now, however, I'm just trying to figure out how to understand multiple group analysis of an unconditional LGM.

In light of the fact that I am grouping families based on their initial income status, I expect that the intercepts and slopes will vary by group. My questions are:

1.
I have conducted a multiple group LGM for three waves of family income data (there are 5 groups).

GROUPING IS group (1=g1 2=g2 3=g3 4=g4 5=g5);
...
MODEL: i s | income1@0 income2@1 income3@2;

Based on my reading of p. 303 in the user's guide ("The means and intercepts of continuous latent variables are fixed to zero in the first group and are free and not equal across the other groups as the default."), I had expected that the means of the intercept and slope (because they are continuous latent variables) would be fixed to zero for group 1, but this wasn't the case. In fact, each group has a non-zero mean for the intercept and slope. What am I missing?

2.
If the default is to allow the means of continuous latent variables (i and s) to be free and not equal across groups, and I want to demonstrate that they *are* different (compared to the null hypothesis that they are the same across groups), what would the syntax in the group-specific model command be? Or, is the more appropriate way to examine this question to focus on the path coefficients from income to parenting rather than on the fact that the means of the intercept and slope vary across groups (especially since there is every expectation that the intercepts are going to vary, since that's the basis of the group formation)?

3.
Finally, if I want to demonstrate that the mean intercept of group1 is significantly different from the mean intercept from group 3 (or any other group), is the appropriate way to do this to take the difference in the estimates and divide it by the standard error of the difference?

Thanks, in advance, for any help.
 Linda K. Muthen posted on Monday, May 01, 2006 - 2:21 pm
1. The text you cite does not apply to a multiple group growth model. With a multiple group growth model, the means of the growth factors are free in all groups.

2. You can use a chi-square difference test to compare a model where the means of the growth factor(s) are free across groups to a model where the means of the growth factors are constrained to be equal across groups.

3. You can also use the chi-square difference test to do this.
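
For question 3, the z-test alternative can also be written out explicitly. Since the groups are independent samples, the standard error of the difference follows from the two group-specific standard errors (a standard Wald/z argument, not Mplus-specific):

```latex
z = \frac{\hat{\mu}_1 - \hat{\mu}_3}{\sqrt{SE(\hat{\mu}_1)^2 + SE(\hat{\mu}_3)^2}}
```

where \hat{\mu}_g is the estimated intercept growth factor mean in group g. The chi-square difference test of the same hypothesis has 1 degree of freedom, and the two approaches are asymptotically equivalent.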
 Peggy Clements posted on Monday, May 01, 2006 - 2:58 pm
Just to confirm the correct syntax for constraining the slope growth factor to be equal across groups... would it be

model: i s | inc1@0 inc2@1 inc3@2;
s (1);
model g2: s (1);
model g3: s (1);

etc. for all groups?
 Linda K. Muthen posted on Monday, May 01, 2006 - 3:18 pm
You should only need to mention it in the overall MODEL command. You are referring to the variance of the slope growth factor. You would refer to the mean as:

[s] (1);
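
Putting the two posts together, a minimal sketch of the constrained model (untested syntax; variable and group names follow the first post in this thread). Because statements in the overall MODEL command apply to all groups, giving the slope mean one label there is enough to hold it equal across the five groups:

```
GROUPING IS group (1=g1 2=g2 3=g3 4=g4 5=g5);
...
MODEL:
  i s | income1@0 income2@1 income3@2;
  [s] (1);    ! slope growth factor mean held equal across groups
```

Comparing this model to the default model (slope means free in all groups) with a chi-square difference test uses 4 degrees of freedom, one per constrained group.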
 Peggy Clements posted on Monday, May 08, 2006 - 1:51 pm
In the past, when I estimate a model that includes a standardized coefficient that is >1.0, the output includes a message that the psi matrix is not positive definite. I just ran a relatively complex model (in which I have estimated growth factors for 6 variables with panel data and also estimated a structural model using the growth factors) and 4 of the standardized coefficients (StdYX) are >1.0; 3 of these are quite a bit larger than 1 (ranging from 1.4 to 1.6). Why didn't I get an error message? I'm assuming that this is not an acceptable solution--am I right?

Thanks.
 Linda K. Muthen posted on Monday, May 08, 2006 - 2:03 pm
Some standardized coefficients can be greater than one. There is a discussion of this in Karl's Corner at the LISREL website www.ssicentral.com. If you want us to look further into your particular problem, please send your input, data, output, and license number to support@statmodel.com.
 Suet Ling Chong posted on Friday, May 23, 2008 - 7:09 am
I am comparing the unconditional and conditional growth curves of 2 groups. My outcome variable is continuous.

1. I could not find the syntax for comparing factor loadings across the unconditional models, i.e., I wish to test for linear vs. non-linear growth trends.

2. I have a number of time invariant covariates and want to test for equality of regression coefficients. Would the chi sq difference test apply as well?

3. To conduct the chi sq difference test, do I simply subtract the chi sq and degrees of freedom across models?

Thank you.

Suet Ling
 Linda K. Muthen posted on Friday, May 23, 2008 - 10:46 am
1. I assume that you mean a model with fixed linear time scores versus a model with some free time scores. To test if the free time scores are significantly different from their linear counterparts, you can subtract them and divide by the standard error of the free time score. Or you can do a chi-square difference test of the model with fixed time scores versus the model with free time scores.

2. Yes.

3. For non-robust estimators, yes. For others see instructions on the website or use the DIFFTEST option if appropriate.
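
For the robust case mentioned in 3., the website instructions amount to the Satorra-Bentler scaled difference test. As a sketch: let T0, c0, d0 be the chi-square value, scaling correction factor, and degrees of freedom of the nested (more restrictive) model, and T1, c1, d1 those of the comparison model. Then

```latex
c_d = \frac{d_0 c_0 - d_1 c_1}{d_0 - d_1}, \qquad
TR_d = \frac{T_0 c_0 - T_1 c_1}{c_d}
```

and TRd is referred to a chi-square distribution with d0 - d1 degrees of freedom. With plain ML, c0 = c1 = 1 and this reduces to the ordinary difference in chi-square values.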
 Liz Shulman posted on Saturday, May 24, 2008 - 7:15 pm
Hi Dr. Muthen,
I am estimating a quadratic growth model with multiple groups. However, rather than using the "i s q | y@0 y@1 y@2 y@3" syntax, I have set it up as a latent difference score model (with constant change and autoproportional change). I need to set it up this way because my final model involves another variable predicting the change in y.

I want to be able to use a chi square difference test to examine whether the growth parameters can be constrained to equality across groups (high and low SES). However, the default settings do not seem to allow me to set the mean slopes and levels to be equal across groups. Even when I label [slope] the same way in both model statements, the group 1 value is set to zero and the group 2 value for [slope] is estimated.

Is there some way to override this default?

Thank you!
 Linda K. Muthen posted on Sunday, May 25, 2008 - 6:48 am
I need to see what you are doing. Please send your output and license number to support@statmodel.com.
 Christoph Weber posted on Wednesday, August 05, 2009 - 7:19 am
Dear Dr. Muthen,
I want to run a multiple group LGA with multiple indicators. If I do a multiple group LGA without multiple indicators, I get mean values for all growth factors in all groups (e.g., intercept means for boys and girls). But if I use a multiple indicator LGA, Mplus sets the intercept mean of group 1 equal to 0. I guess that's caused by the default settings. How can I override this default?

Thanks.

Christoph Weber
 Bengt O. Muthen posted on Wednesday, August 05, 2009 - 12:05 pm
You fix the intercepts/thresholds at zero and mention [i]. Be sure to check that you get the same number of parameters and log likelihood.
 Katharina Diener posted on Tuesday, February 23, 2010 - 12:26 pm
Hello,
I am also conducting a multiple indicator LGM (with multiple groups) and would like to estimate the mean of the intercept growth factor. Therefore, I fixed the intercepts of the factor indicators to 0 and freely estimated the mean of the intercept growth factor [i]. If I do a single group analysis, everything works fine. However, as soon as I use the GROUPING option, the intercept and slope means are set to 0 and the model doesn't work.
Am I missing something important?
Thank You very much for your help.
 Linda K. Muthen posted on Tuesday, February 23, 2010 - 4:18 pm
For multiple indicator growth, fixing the intercepts to zero is overly restrictive. You should use the model shown on page 546 of the user's guide and add:

MODEL g1:
[i s];
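
To make the user's guide suggestion concrete, here is a rough sketch of a multiple-indicator, multiple-group growth input with three waves and three indicators per wave (all variable and group names are hypothetical, and this is untested syntax). Loadings and indicator intercepts are held invariant across time via equality labels, and the growth factor means are then freed in the first group:

```
MODEL:
  f1 BY y11
        y21-y31 (1-2);      ! loadings invariant across time
  f2 BY y12
        y22-y32 (1-2);
  f3 BY y13
        y23-y33 (1-2);
  [y11 y12 y13] (3);        ! indicator intercepts invariant across time
  [y21 y22 y23] (4);
  [y31 y32 y33] (5);
  i s | f1@0 f2@1 f3@2;
MODEL g1:
  [i s];                    ! free the growth factor means in group 1
```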
 Sofie Wouters posted on Friday, March 26, 2010 - 4:32 am
I'm also doing a multiple group analysis with a growth model of self-concept (4 waves). In fact, I am interested in differences in means and slopes between six groups. However, I also want to add covariates (I want to control for achievement), which means that I can no longer constrain the means of the intercept or slope across groups because they are only available in the TECH4 output. Or is there a way to constrain them anyway?
If not, then I should find another way to control for achievement. Do you have any suggestions? I thought about parallel processes, but I do not know if this is what I want to do (maybe even adding achievement as a covariate is not what we want). We want to model the growth of academic self-concept after controlling for achievement, and we want to compare this growth across groups. Could I do this by first regressing self-concept on achievement at each time point and then taking these 4 residuals as indicators for my latent growth factors?

 Linda K. Muthen posted on Friday, March 26, 2010 - 9:36 am
When you have a conditional model, the intercepts rather than the means are of interest. You should look at the differences in the intercepts across groups. You might also want to compare the regression coefficients involving achievement across groups.
 Sofie Wouters posted on Sunday, March 28, 2010 - 12:44 am
Thank you, Linda, for your quick response!
If intercepts are of interest, how do I interpret them? They're not the same as the means, I think... And can you represent them in a graph in Mplus? I only get graphs with means.
When you refer to the regression coefficient of achievement, do you mean the effect of achievement on the intercept and the slope, and comparing this across groups?
Finally, might I infer from your answer that you do not think it necessary to work with residuals to control for achievement?
 Linda K. Muthen posted on Sunday, March 28, 2010 - 10:04 am
The intercepts of the growth factors are interpreted as in any linear regression. When you regress the intercept growth factor on achievement, you obtain an estimate of the intercept of the intercept growth factor and a regression coefficient. The same is true for the slope growth factor: when you regress it on achievement, you obtain an intercept for the slope growth factor and a regression coefficient. It is these regression coefficients that I refer to. The model estimated values are used to compute the means for the PLOT command. I don't see any need to work with residuals.
 Sofie Wouters posted on Sunday, March 28, 2010 - 11:26 am
 Sofie Wouters posted on Wednesday, March 31, 2010 - 1:00 am
Sorry for asking again, but I'm still not clear on how I can interpret my constraints. I do understand how to interpret the effects of my covariates on my slope and intercept, but then I want to add the constraints to my model, to see if there are differences between my groups in intercept and slope. However, because I have a conditional model I can only constrain the intercepts of my intercept and slope, but what would the fact that the intercepts of the intercept/slope are equal across groups mean? Could it not be possible that I find that the intercept of my intercept is equal across two groups when in fact the 'real' mean (total effect on the intercept) found in TECH 4 is not equal across these groups? Maybe I'm getting something wrong here...
 Bengt O. Muthen posted on Wednesday, March 31, 2010 - 11:28 am
Maybe it is helpful for you to think of the analogous situation in ANCOVA. In ANCOVA you have y, x1, and x2, where y is the posttest, x1 is the pretest, and x2 is the group (tx/ctrl). ANCOVA does not look at the y mean differences across groups as ANOVA does, but adjusts for pre-existing differences in x1 means, and considers the intercept as the tx effect. Think two parallel regression lines (assuming group-invariant slopes on x1) with y on the y axis and x1 on the x axis - the intercept is the difference.

In your case, i or s correspond to y, achievement corresponds to x1, and group corresponds to x2.
 Sofie Wouters posted on Tuesday, April 20, 2010 - 5:45 am
Thank you, this makes things clearer for me.
However, I'm doubting whether I should add achievement as a covariate, because I do not think this is enough to answer my research questions. I would like to compare the self-concept of equally able (or equally achieving) students across different groups and across time. Is it then okay to just add achievement as a covariate in each group-specific model? Or can you suggest other methods of analysis?
 Linda K. Muthen posted on Tuesday, April 20, 2010 - 8:58 am
Nothing else comes to mind.
 Gregory Kirkner posted on Monday, May 03, 2010 - 12:43 pm
I have some questions on multiple group LGMs. I want to fit LGMs for continuous observed variables measured at 3 equally spaced time points. Participants were randomly assigned to one of four intervention groups, and I indicate these groups using the GROUPING option. I am interested in comparing LGM attributes between these groups.

Since my time points are equally spaced, I began by using fixed time scores (0, 1, 2) in the overall model statements. However, plots and model fit output for some of the observed-variable LGMs indicated poor fit. I also received some PSI warnings. Upon closer inspection of the offending groups, it was clear that their growth was nonlinear. Therefore, I experimented with different time score approaches (e.g., free time scores, logarithmic time scores, etc.) for groups where the 0, 1, 2 scores resulted in poor fit. The resulting models fit much better.

Is this an acceptable approach when addressing differential or non-linear growth across multiple groups with only three measurement points? If so, can I specify differential time scores by using group-specific model statements? Also, I assume that comparing mean slopes and intercepts across different time score groups would not be advised, correct?

As an alternative approach, could I use added growth or quadratic models? Or would these methods not be advised given more than two groups and measures at only three time points?

Thanks!
 Linda K. Muthen posted on Tuesday, May 04, 2010 - 9:43 am
You should fit a growth model in each group separately. If the same model does not fit in each group, comparisons across groups should not be made.

With only three time points, your options are limited. You have only one degree of freedom, so if you free one time score, model fit cannot be assessed. You can fix logarithmic time scores as you suggest. You need four time points for a quadratic growth model.
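
As an illustration of the fixed logarithmic alternative with three time points, one could use ln(1), ln(2), ln(3) as time scores (outcome names hypothetical):

```
MODEL:
  i s | y1@0 y2@0.693 y3@1.099;   ! time scores ln(1), ln(2), ln(3)
```

This model has the same number of parameters as the linear 0, 1, 2 model, so the two fits can be compared descriptively, but the models are not nested, so a chi-square difference test does not apply.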
 Richard E. Zinbarg posted on Wednesday, April 20, 2011 - 8:37 pm
Hi Linda or Bengt,
I am going to be analyzing data from an intervention study of stroke victims with chronic language impairments. Our research team is very interested in changes over specific time intervals - specifically pre to post and post to follow-up - so my plan is to use a latent difference score approach. Given that the patients, by definition, have chronic impairments, I think it is reasonable to assume stationarity in the untreated control group, and thus I am planning to fit a proportional change model. I was also thinking I would do a multiple-group analysis comparing the treated patients to the untreated controls. What isn't clear to me is which parameters I would expect to differ between the treated and untreated groups - would it be the coefficient of the autoregressive effect of the pre-treatment score on the first latent difference score? If I weren't setting it up as a multiple group analysis but instead entered treatment as a dummy-coded time-invariant predictor, it seems clear that I would test the treatment effect by testing whether the path from the treatment variable to the first latent difference score is significantly different from zero, but I am not quite seeing what the test of the treatment effect would be in the multiple group approach.
Thanks in advance for any light you can shed on this for me!
 Bengt O. Muthen posted on Thursday, April 21, 2011 - 8:26 am
If you do it as a dummy-coded covariate you are saying that the first latent difference score has different means in the treatment and control groups. So that's what you want to mimic in the multiple-group analysis. The latter, of course, can handle many other group differences such as different slopes of the latent difference score regressed on the pre-treatment score.
 Richard E. Zinbarg posted on Thursday, April 21, 2011 - 8:47 am
Thanks Bengt. Perhaps my understanding of the latent difference score model is incorrect, but my understanding from the readings I have done on the topic was that one does not directly estimate the means of the latent differences in the LDS approach. Rather, it appeared to me that one had to compute those means by hand, by plugging the parameter estimates for (1) the regression of the first latent difference score on the pre-treatment score and (2) the loading of the first latent difference score on the constant change factor (which in the proportional change model I plan to fit would equal zero) into the equation for the latent difference score. For example, in the Mplus code provided in Appendix A of King, King, McArdle, Shalev and Doron-LaMarca (2009), they constrain the means and variances of the latent difference scores to equal zero. Are those constraints unnecessary? Could I instead run one model in which the difference score means are freely estimated and a second model in which the difference score means are constrained to be equal across the groups, and then test the difference between those two models?
 Bengt O. Muthen posted on Thursday, April 21, 2011 - 9:43 am
I am not up on latent difference score modeling, but the fact that King et al. restrict the means at zero (which I assume is for a single group) doesn't mean that you couldn't estimate a mean difference when having two groups. You fix it at zero in a reference group and let it be freely estimated in the other group (to represent the difference - as usual).
 Richard E. Zinbarg posted on Thursday, April 21, 2011 - 10:23 am
Many thanks for the speedy reply Bengt! That makes a great deal of sense to me, I will give it a try.
 Marissa Ericson posted on Thursday, April 21, 2011 - 11:12 am
I am rather new to Mplus and have been teaching it to myself over the last few months. I am trying to modify a latent variable cross-lagged script to include twin groups. Here is the base model:

EF1 BY secs1* (L1)
    err1* (L2)
    per1* (L3)
    cc1* (L4)
    nogo1* (L5);

EF2 BY secs2* (L1)
    err2* (L2)
    per2* (L3)
    cc2* (L4)
    nogo2* (L5);

ASB1 BY del1* (L6)
    agg1* (L7)
    caq1* (L8)
    cd1* (L9)
    cps1* (L10);

ASB2 BY del2* (L6)
    agg2* (L7)
    caq2* (L8)
    cd2* (L9)
    cps2* (L10);

ASB2 ON ASB1 EF1;
EF2 ON EF1 ASB1;
EF1 WITH ASB1;
EF2 WITH ASB2;
I need to add the biometric decomposition for each of the latent factors but even just a script/example for a univariate latent variable would be helpful! Thank you in advance!
 Bengt O. Muthen posted on Friday, April 22, 2011 - 7:46 am
You can start from UG ex 5.18 or 5.21, simply replacing the two y's with two factors. See also UG ex 7.29 where this is done for categorical factor indicators.

http://www.statmodel.com/geneticstopic.shtml

including the Mx-translated scripts at the
GenomEUtwin project.
 Jean  posted on Tuesday, May 17, 2011 - 9:44 pm
Hi,

I am doing multi group analysis. I have four groups, the sample sizes are 450, 200, 99, and 190. As you see, one group is small (n=99) compared to the other groups. Will it be problematic when conducting multi group analysis?

The other question is whether I can add exogenous variables predicting the intercept and slope in only two of the four groups. That is, when conducting multiple group analysis, must the model be the same across groups? Or can I add predictors in only some groups, or add different predictors across groups?

Thanks!
 Linda K. Muthen posted on Wednesday, May 18, 2011 - 9:47 am
The smaller group will have less power and influence, but other than that there is no problem.
 Jean  posted on Wednesday, May 18, 2011 - 8:36 pm
Thanks, Linda, for your quick response.

Do you have any idea on my second question?

Thanks!
 Bengt O. Muthen posted on Thursday, May 19, 2011 - 9:02 am
I would include all predictors in all groups and simply report that some are not significant in some groups.
 Sandra Kristina Gebauer posted on Monday, November 28, 2011 - 1:52 am
Dear Dr. Muthen,
I am doing a multiple group LGM of reading skills with two groups and two predictors (cognitive abilities and socio-economic status). A reviewer asked whether I can really say that I am controlling for the predictors across groups. He argues that because I did not constrain the regression coefficients to be equal across groups, I am only referring to group-specific values of the predictors.
To my knowledge, the LGM results are based on achievement values with the influence of my predictors partialed out (intercepts). It seems reasonable that the regression coefficients of my predictors vary across groups; I do not have any hypothesis about this influence. Therefore, I assumed that I am allowed to compare the achievement development across groups, saying that I control for cognitive abilities and SES.
Am I missing something?
 Linda K. Muthen posted on Monday, November 28, 2011 - 11:37 am
I think the reviewer is making the point that, like ANCOVA, if the slopes of the two groups are not the same, the interpretation of the intercept is not constant across the values of the covariate.
 Sandra Kristina Gebauer posted on Tuesday, November 29, 2011 - 6:16 am
Many thanks for your quick response! Do you suggest any other method of analysis in this case? Is there a better option to control for predictors in a multiple group LGM? Unfortunately, matching is not possible in this sample.
Thank you!
 Linda K. Muthen posted on Tuesday, November 29, 2011 - 11:47 am
I can't think of any alternative. You can test whether the regression coefficients are equal across the groups using difference testing or MODEL TEST. If they are, you can compare the intercepts. If they are not, then you should not do this.
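
A sketch of the MODEL TEST route, using hypothetical labels p1 and p2 for one covariate's coefficient in the two groups (variable and group names are made up; labels must be attached in the group-specific MODEL commands so the two coefficients remain distinct parameters, since a shared label in the overall MODEL command would constrain them equal):

```
MODEL:
  i s | read1@0 read2@1 read3@2;
  i s ON cogn ses;
MODEL g1:
  i ON cogn (p1);
MODEL g2:
  i ON cogn (p2);
MODEL TEST:
  p1 = p2;     ! Wald test of equal coefficients across groups
```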
 Sandra Kristina Gebauer posted on Wednesday, November 30, 2011 - 2:09 am
Thank you for your help! I will give it a try.
 Sarah Stoddard posted on Friday, February 24, 2012 - 4:11 pm
I am doing a multigroup LGM with two groups (low future orientation and high future orientation) and 4 distal outcomes. I am trying to run a fully constrained model and compare it to a model in which the structural paths are released. From the documentation, I see that the intercepts, thresholds, and factor loadings are held equal by default, but that I need to constrain the residual variances, factor means, variances, covariances, and regression coefficients myself. Below is what I have done so far. Is this correct? I think I am missing the covariances?

Variable:
Names are
id violob10 .... fut1di fut1di2;
Missing are all (-9999) ;

Usevariables are victmiz1 victmiz2 victmiz3 victmiz4 fut1di2 nvdel10 victim10 vioapr10 violb10 ;
grouping is fut1di2 (0=low 1=high);
Analysis: type=mgroup;

Model:
i1 s1 | victmiz1@0 victmiz2@1 victmiz3@2 victmiz4@3 ;
i1 (1); !variances held
s1 (2);
victmiz1-victmiz4 (3); !residuals fixed

nvdel10 on i1 s1 (4); !regression coeff
victim10 on i1 s1 (5);
vioapr10 on i1 s1 (6);
violb10 on i1 s1 (7);

model low: [i1 s1]; !mean fixed

 Linda K. Muthen posted on Saturday, February 25, 2012 - 9:20 am
The first thing you should do is estimate the growth model in each group separately to see if the same growth model fits well in both groups. If not, multiple group analysis would not make sense. If you proceed to multiple group analysis, you should first test the residual variances, which are measurement parameters, not structural parameters. Then test the structural parameters as shown, adding the covariance between i1 and s1.
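
Building on the syntax posted above, the covariance would get its own equality label in the overall MODEL command, for example:

```
MODEL:
  i1 s1 | victmiz1@0 victmiz2@1 victmiz3@2 victmiz4@3;
  i1 (1);            ! variances held equal
  s1 (2);
  i1 WITH s1 (8);    ! covariance of growth factors held equal
```

(The label 8 is arbitrary; any label not already used in the input will do.)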
 Nicholas Bishop posted on Monday, December 30, 2013 - 12:30 pm
Hello,
I am struggling to understand the results from a multiple-group growth model with individually varying time scores and a count outcome variable. Time was defined as age centered at an early point (50 years of age in a sample of older adults). The outcome measure was a count of physical limitations modeled with a Poisson distribution (COUNT ARE ...).

For the oldest group with a mean age of 80 at initial measurement, the intercept was -1.21 and the slope was .673. When exponentiated, this translates into an initial count of .30 limitations with a slope of 1.96. Age was included as a covariate and had a value of -.06 (exp = .94) on the intercept.

To calculate the estimated number of initial limitations for an 80 year old, I would assume this is correct: exp(-1.21 + (30*-.06)) = .049. With that said, it is not clear why the oldest group of adults would have such a low count of physical limitations.
 Bengt O. Muthen posted on Monday, December 30, 2013 - 1:22 pm
Need to see the full output and the sample counts at different ages to answer this - but, shouldn't you expect a positive slope on age instead of your -0.06?
 Nicholas Bishop posted on Monday, December 30, 2013 - 2:02 pm
The -0.06 was the estimate for the covariate when the intercept of physical limitations was regressed on age.

Previously I had centered the time scores for each cohort on the cohort-mean age, but in this round I centered the time scores for all cohorts on age 50. I am thinking I would take the mean intercept representing the predicted number of limitations for a given cohort at age 50, then add to that the product of the covariate estimate for age multiplied by the number of years I want to move out from age 50.

The model currently allows for the intercept and slope to be estimated freely for each cohort, meaning it is not an accelerated design. Thanks as always for your time.
 Bengt O. Muthen posted on Tuesday, December 31, 2013 - 8:14 am
I would look at the sample count distribution for the subjects of age 80 and compare that to the distribution based on the estimated count mean at age 80. If they don't match well, perhaps the growth model is off.
 Angela Nickerson posted on Wednesday, January 22, 2014 - 2:52 pm
Dear Dr(s) Muthen,
I am running a latent difference score analysis, modelling constant change, proportional change, and cross-lagged paths to look at the temporal relationship between alcohol use and psychological symptoms over time. This involves four separate models, each looking at the relationship between alcohol use and one group of symptoms. While I can get two of the models to converge using the raw data, I have found that I need to standardize the other two in order to achieve convergence. I have tried dividing one or both of the variables in each model by a constant; however, even though the variances of the variables become more similar (and less than 10), I still get the error message about the psi matrix being non-positive definite. I have also tried centering the variables, with the same outcome. Do you see any problems with using standardized variables in this kind of analysis?

Angela
 Linda K. Muthen posted on Thursday, January 23, 2014 - 10:26 am
I would not standardize to avoid a convergence problem. I would try to determine the cause of the problem. I would also not standardize with a growth model.
 Cindy Huang posted on Monday, March 17, 2014 - 11:37 am
Dear Drs. Muthen,

I am doing a multiple group LGM with several predictors, and am wondering if I need to be interpreting the standardized or unstandardized betas for the results. The output is showing different significant predictors depending on whether I'm looking at the unstandardized or standardized results (there are more significant effects when looking at the standardized results). Can you please provide some clarification on this issue?

Thank you,
Cindy
 Linda K. Muthen posted on Monday, March 17, 2014 - 2:54 pm
Raw and standardized coefficients have different sampling distributions, so their significance can vary. You need to decide which to use based on practice in your field.
 xiaoyu posted on Thursday, April 17, 2014 - 4:28 pm
Dear Dr. Muthen,
I was running a multiple group LGM with the robust MLR estimator. The chi-square value of the multiple group model is not equal to the sum of the two univariate LGMs, but the DF is equal to the sum of the two univariate LGMs. Is this normal for the MLR estimator?

I ran a multiple group LGM before with the ML estimator. Both the chi-square and DF of the multiple group model were equal to the sums from the two univariate LGMs.

Thank you so much for your help!
 Linda K. Muthen posted on Friday, April 18, 2014 - 9:01 am
 xiaoyu posted on Tuesday, April 22, 2014 - 4:22 pm
Dear Dr. Muthen,

Thanks for your help. I have one more question. For the multiple group LGM comparison with covariates, is there any way to plot the graph after controlling for the covariates? I have the plot command (see below), but the plot is the one without any covariates even though these covariates are in my multiple group LGM models.

Thank you so much for your time!

Plot:
series=argue1(0) argue2(1) argue3(2) argue4(3) argue5(4);
 Linda K. Muthen posted on Wednesday, April 23, 2014 - 10:47 am
 Simone L. posted on Friday, November 21, 2014 - 4:54 am
Dear Dr. Muthen,
I'm running a multiple group LGM and I get a negative residual variance, so I fixed the residual variance of that variable to zero. But when examining the output, math9 is still negative, while the residual variance of math7 is zero. Am I doing something wrong?

i s | math7@0 math8@1 math9@2;

model high:
math7@0;
model low:
math9@0;
Thanks and kind regards!
Simone
 Linda K. Muthen posted on Friday, November 21, 2014 - 5:21 am
 Stephanie Stepp posted on Tuesday, May 31, 2016 - 4:32 pm
I have a couple questions about an interaction I found using multigroup modeling. I am using the default estimator (ML) to examine differences in a mediational path among individuals with variation in a specific genotype (grouping variable). I have one dichotomous predictor (presence of maltreatment), continuous mediator (slope of emotional reactivity) and continuous outcome (personality pathology).

When comparing an unconstrained versus constrained model using a Chi-Square difference test, the path between the predictor and the mediator significantly differs by group, indicative of an interaction with genotype.

Questions:
1. How would you suggest plotting this interaction? Is it possible to do using multigroup modeling?
1a. I attempted to examine this model using an interaction term, allowing for the plot function. However, the interaction term is then only marginally significant and I am not sure why this would be. Syntax for the model is pasted below.
MODEL:
y ON m x z xz;
m ON z x xz;
MODEL INDIRECT:
y MOD m z (0, 1, 0.1) xz x;
PLOT: TYPE = PLOT2;
OUTPUT: Sampstat stdyx;

2. If utilizing the multigroup model, is there a way to test if the total indirect effect also differs by group?

Thank you!
 Bengt O. Muthen posted on Tuesday, May 31, 2016 - 6:01 pm
1. Model (1), with z and xz as covariates, is a bit different from model (2), the multiple-group model. Unless you have group-varying residual variances, you can specify (1) exactly the same as (2), with the same number of parameters. The results should then agree.

2. You can use Model Constraint to express the total indirect effect for each group and difference between groups in terms of model parameter labels. That difference is given a z-test.
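
A sketch of point 2, using hypothetical labels a1/b1 and a2/b2 for the two paths of the indirect effect in the two genotype groups (g2 stands in for the second group's label; one label per line, since a label after a list of coefficients would constrain them equal):

```
MODEL:
  m ON x (a1);
  y ON m (b1);
  y ON x;
MODEL g2:
  m ON x (a2);
  y ON m (b2);
MODEL CONSTRAINT:
  NEW(ind1 ind2 inddiff);
  ind1 = a1*b1;            ! indirect effect, group 1
  ind2 = a2*b2;            ! indirect effect, group 2
  inddiff = ind1 - ind2;   ! z-test appears in the output
```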
 RuoShui posted on Monday, November 14, 2016 - 3:34 pm
Hi Dr. Muthen,

I have a conditional latent growth curve model with a series of covariates (recoded and centered). I would like to compare across two groups whether the initial status and slopes are different. But I don't think I can use, for example, [i] (1), because from what I understand from other threads, the mean at the initial time point only equals the intercept of the intercept growth factor when all the covariates are zero. If I want to compare the initial status of the whole sample between the two groups, how should I do this?

Thanks very much.
 Bengt O. Muthen posted on Monday, November 14, 2016 - 5:53 pm
Try centering the covariates in each group so that the group-specific [i] refers to them being at the group's covariate means.
 RuoShui posted on Monday, November 14, 2016 - 6:37 pm
Thank you. My covariates are dichotomous variables coded as 0 and 1, and another covariate is SES, which is a standardized z-score. Do I still need to center the covariates? Could you please provide a hint of the syntax?
Thank you very much
 Bengt O. Muthen posted on Tuesday, November 15, 2016 - 5:28 pm
You want to ask this general analysis question on SEMNET.
 Daniel Lee posted on Wednesday, December 14, 2016 - 7:05 am
Hello,

I would like to construct a model where the intercept and slope of 3 LGMs predict one binary outcome.

Within the model command, I wrote 4 lines of code...

i1 s1 | x@0 x@1 x@2...(one for each LGM)

And...

i1 i2 i3 s1 s2 s3 on DV

For such analysis, should I include other statements within the model command? (e.g., model constraints? the intercepts and slopes needs to covary?)

I just want to make sure I'm not missing anything. Thank you!
 Bengt O. Muthen posted on Wednesday, December 14, 2016 - 12:09 pm
I think you mean

DV on i1 i2.....

The growth factors should covary.

Muthén, B., Khoo, S.T., Francis, D. & Kim Boscardin, C. (2003). Analysis of reading skills development from Kindergarten through first grade: An application of growth mixture modeling to sequential processes. Multilevel Modeling: Methodological Advances, Issues, and Applications. S.R. Reise & N. Duan (Eds). Mahwah, NJ: Lawrence Erlbaum Associates, pp. 71-89.
 Mark Wade posted on Friday, December 22, 2017 - 8:01 am
I'm performing a latent growth model with known classes (multiple group analysis) using the knownclass feature with type=mixture. I have 3 known classes/groups and 3 waves of data collection. I'm using a Bayesian estimator.

I'd like to compare the means and variances of i and s across groups, but I'm unsure whether there is an equivalent to constraining and freeing parameters and doing a chi-square difference test across models, as in multiple-group analysis using MLR. Is there a way of testing group differences in the means and variances of i and s using a Bayesian estimator with the knownclass option and type=mixture?
.....
CLASSES = cg(3);
KNOWNCLASS = cg (Group=0 Group=1 Group=2);
MISSING ARE ALL (999);
ANALYSIS:
TYPE = MIXTURE;
ESTIMATOR = BAYES;
MODEL:
%Overall%
i s | DMSper8@0 DMSper12@1 DMSper16@2;
i s ON Gen BW;
%cg#1%
i s | DMSper8@0 DMSper12@1 DMSper16@2;
%cg#2%
i s | DMSper8@0 DMSper12@1 DMSper16@2;
%cg#3%
i s | DMSper8@0 DMSper12@1 DMSper16@2;
 Bengt O. Muthen posted on Friday, December 22, 2017 - 1:50 pm
Give parameter labels in the Model command and then use Model Constraint to define a new parameter, like:

Model Constraint:
new(diff);
diff = a-b;
 Lydia Zhang posted on Monday, April 02, 2018 - 2:04 pm
Hi Dr. Muthen,

I am trying to do latent growth curve modeling to see how developmental trajectories differ between two groups. I used the Wald chi-square test to compare the slopes and intercepts of the two groups, controlling for a few covariates. I am wondering if the following commands are correct?

Model:
%Overall%
i s | y1@0 y2@1 y3@2 y4@3;
i s on covariates;

%N#1%
i s | y1@0 y2@1 y3@2 y4@3;
i(m1)
s(m2)
on covariates;

%N#2%
i s | y1@0 y2@1 y3@2 y4@3;
i(m3)
s(m4)
on covariates;

Model Test:
0 = m1-m3;
0= m2-m4;
 Bengt O. Muthen posted on Monday, April 02, 2018 - 3:33 pm
No, you should say

[i] (m1);

to refer to an intercept.
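For the means of the growth factors, the class-specific parts of the syntax above might then read (a sketch keeping the poster's labels):

%N#1%
[i] (m1);
[s] (m2);
%N#2%
[i] (m3);
[s] (m4);

Model Test:
0 = m1 - m3;
0 = m2 - m4;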
 Hillary Gorin posted on Wednesday, June 20, 2018 - 8:43 am
Hello,

Results of a multiple group analysis suggest that the variance of an intercept in a growth curve model is significantly different for men and women.

How can one control for sex differences in variance of the intercept? How can one control for sex differences in variance of the slope? How can one control for sex differences in the correlation of the slope and intercept?

Thanks!
Hillary
 Bengt O. Muthen posted on Wednesday, June 20, 2018 - 12:00 pm
The multiple-group analysis does control for that in that it allows these gender differences.
 Hillary Gorin posted on Wednesday, June 20, 2018 - 12:42 pm
Hello,

Thank you for your response. Let me clarify what I mean.

After I have completed the multiple group analysis (and found sex differences), I will be running a parallel process growth curve model in which I need to control for sex.

Thus, what is the syntax for controlling for the variance of the intercept, the variance of the slope, and the correlation between slope and intercept?

Thanks!
Hillary
 Bengt O. Muthen posted on Wednesday, June 20, 2018 - 12:57 pm
Run a multiple-group parallel process model. That's a simple way to handle group differences in variances and covariances.
 Hillary Gorin posted on Wednesday, June 20, 2018 - 1:06 pm
Is there any syntax I could use to control for them? I have a very complicated model that is not converging as it is, so I fear adding more complexity.
 Bengt O. Muthen posted on Wednesday, June 20, 2018 - 2:27 pm
You can simply add Grouping= to your process model. If you don't say anything in group-specific model statements, the default of measurement invariance is used.
 Hillary Gorin posted on Wednesday, June 20, 2018 - 2:48 pm
Ok, thank you.

So add: GROUPING = sex (0 = woman 1 = men);

Is that all?

One additional question. I was able to run my parallel process model (with count variables) in Version 8 yesterday.

Today, I am trying to run the analyses on another computer with version 7 and I keep getting this error:

MODEL INDIRECT is not available for analysis with ALGORITHM=INTEGRATION.

Do you know why I may get this in version 7, but not version 8?

Thanks!
Hillary
 Bengt O. Muthen posted on Wednesday, June 20, 2018 - 3:29 pm
Q1: Right

Q2: We added this feature in V8.
 Hillary Gorin posted on Wednesday, June 20, 2018 - 4:12 pm
Ok, thank you so much for your help!

Hillary
 Hillary Gorin posted on Thursday, June 21, 2018 - 9:59 am
I cannot use

GROUPING = asexera (0 = woman 1 = men);

because I have count variables.

When I ran my multiple group analysis, I tested for significant differences in means, variances, and covariances.

Can I just control for sex where the means are different? I can't imagine differences in the variances and covariances will create huge differences.
 Bengt O. Muthen posted on Thursday, June 21, 2018 - 3:45 pm
With count variables you can use Type=Mixture with Knownclass, which is really not any harder than Grouping.

If you expect only the means of DVs to differ by gender then of course you don't need a multiple-group approach.
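A minimal sketch of that setup, assuming a gender variable coded 0/1 and count outcomes u1-u4 (the variable names are illustrative):

VARIABLE:
COUNT = u1-u4;
CLASSES = cg(2);
KNOWNCLASS = cg (sex = 0 sex = 1);
ANALYSIS:
TYPE = MIXTURE;
MODEL:
%OVERALL%
i s | u1@0 u2@1 u3@2 u4@3;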
 Hillary Gorin posted on Thursday, June 21, 2018 - 4:02 pm
Ok, thank you.

But if the variances and covariances differ, I need to use the type=mixture approach?
 Nicole Tuitt posted on Tuesday, August 21, 2018 - 3:25 pm
I'm running a latent growth model with known classes. I have 2 known classes and 4 waves of data. I'm using a Bayesian estimator.

I'd like to compare the means and variances of i, s, and q across groups. I ran this syntax and got the error message below.

Model:
%overall%
i s q| srse1@0 srse2@1 srse3@2 srse5@3;
i s q on school3 school7;

%g#1%
i s q| srse1@0 srse2@1 srse3@2 srse5@3;
i(int1);
s(slope1);

%g#2%
i s q| srse1@0 srse2@1 srse3@2 srse5@3;
i(int2);
s(slope2);

Model Constraint:
new(diff1);
diff1=int1-int2;
...diff test for slope and quad

*** FATAL ERROR
VARIANCE COVARIANCE MATRIX IS NOT SUPPORTED WITH ESTIMATOR=BAYES.
PARTIAL EQUALITY BETWEEN TWO VARIANCE COVARIANCE BLOCKS. IF TWO PARAMETERS FROM TWO DIFFERENT VARIANCE COVARIANCE BLOCKS ARE HELD EQUAL THEN ALL THE PARAMETERS HAVE TO BE EQUAL IN THE TWO BLOCKS.
USE ALGORITHM=MH TO RESOLVE THIS PROBLEM.
 Tihomir Asparouhov posted on Thursday, August 23, 2018 - 8:59 am
By default, the variance covariance matrix for i s q is class invariant. In the above model you have made the diagonal entries class specific, but the off-diagonal entries are still held equal across classes. You have to add the following command in each class to make the covariances class specific as well:
i s q with i s q;
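Applied to the class-specific parts of the syntax above, the blocks would look like this (the quadratic labels quad1/quad2 are added for illustration):

%g#1%
i (int1);
s (slope1);
q (quad1);
i s q WITH i s q;
%g#2%
i (int2);
s (slope2);
q (quad2);
i s q WITH i s q;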
 Nicole Tuitt posted on Thursday, August 23, 2018 - 6:00 pm
Thank you very much!

Nicole
 Carlos Sierra posted on Thursday, May 30, 2019 - 9:47 pm
Hello,

I am planning on running a multiple group LGM. I realize that in order to eventually compare the slope and intercept means between groups, the models have to be identical. I am unsure what this means when there are free time scores in my model (i.e., y1@0 y2*.20 y3*.40 y4*.60 y5*.80 y6@1). Should these free time scores somehow be fixed before running the multiple group analysis? If so, how should this be done when the estimates of the free time scores may differ between groups (when the groups are modeled separately)?

 Bengt O. Muthen posted on Sunday, June 02, 2019 - 11:16 am
You should apply equality restrictions on the time scores and this can be done only when using the BY approach to growth modeling.
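A sketch of the BY approach for the six-wave model in the question, with the free time scores labeled so the cross-group equality is explicit (the labels L2-L5 are illustrative):

MODEL:
i BY y1-y6@1;
s BY y1@0
     y2*.20 (L2)
     y3*.40 (L3)
     y4*.60 (L4)
     y5*.80 (L5)
     y6@1;
[y1-y6@0];
[i s];

With GROUPING=, Mplus holds factor loadings equal across groups by default; the labels make that equality explicit and easy to relax in group-specific MODEL statements for a chi-square difference test.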
 Carlos Sierra posted on Sunday, June 02, 2019 - 1:10 pm
Thank you for your response, Dr. Muthen. Is there an example of this that you can point me to?
 Carlos Sierra posted on Sunday, June 02, 2019 - 4:16 pm
Hi again Dr. Muthen,

I apologize for my extremely ambiguous post. What I meant to ask is as follows: I know how to estimate my model using the BY approach to growth modeling. What I am unsure about is how to test differences between groups using equality restrictions when the time scores are freely estimated.

In the past, when testing differences between multiple groups with specified time scores (i.e., without freely estimated time scores), I would (1) model the growth model independently for each of my groups, then (2) run a multiple group analysis to obtain the unrestricted model and, finally, (3) constrain the intercept and/or slope means across groups to test for invariance using the chi-square difference test.

To restate my question, I am unsure how the process I outlined above differs when comparing intercept/slope means between groups whose trajectories have freely estimated time scores.

I hope my question is much more clear now.

 Bengt O. Muthen posted on Monday, June 03, 2019 - 2:51 pm
In line with first testing the measurement model before the structural model, you should first test if equality of time scores fits relative to non-equality. If you can't reject equality, you continue with testing growth factor parameter differences across groups.
 Carlos Sierra posted on Thursday, June 06, 2019 - 7:50 pm
Thank you for your response Dr. Muthen.

I was able to test and establish invariance in my multiple group measurement model (y1@0 y2*.20 y3*.40 y4*.60 y5*.80 y6@1) by constraining time scores/factor loadings and residual covariances. I have a couple of additional questions:

1. Is invariance of time scores/factor loadings and residual covariances enough to claim multigroup invariance?

2. My goal after establishing time score invariance across groups (male and female) is to run a parallel process model. I have a second set of measures which map in time to my first model (as specified above). The problem is that this second model/process is noninvariant across groups (i.e., the time scores, which are freely estimated, differ between males and females). Does this result for my second model render the parallel process analysis impossible/invalid?

I ask because it would be interesting to examine how these two processes are differentially related to each other for females and males. I imagine I could run the parallel process model but would need to do so separately for males and females (i.e., not through a multiple group analysis), but this would keep me from making group comparisons. Does this sound right to you?

Is there a way to do a multiple group parallel process analysis when one group has noninvariance in the time scores?
 Bengt O. Muthen posted on Friday, June 07, 2019 - 3:10 pm
1. Yes.

2. You can do the parallel analysis but the growth factors won't mean the same thing.
 Carlos Sierra posted on Wednesday, September 25, 2019 - 7:05 pm
Hi Dr. Muthen,

I continue to do some work with multiple group LGM (males vs. females). I was able to establish measurement invariance in my model (a continuous variable measured at 6 time points whose factor loadings were freely estimated). Now I am moving on to test invariance of the slope.

Upon reading on this topic, I found two ways of doing this. The first is to carry out a chi-square difference test of two models: one where the slope is estimated in both groups, and a second where the slope is set equal to 0 in one group and freely estimated in the second group.

The second approach is, again, to estimate the slope in both groups and then constrain the slopes to be equal, but in this method you do so through the use of a label (e.g., (1)). Then you carry out the chi-square difference test.

Which method is correct or preferable in your opinion, given the characteristics of my model?