Equality constraints in prediction
 Maria Llabre posted on Tuesday, September 09, 2003 - 11:21 am
I am comparing two LGCs obtained for the same sample under two experimental conditions. Initially I tested nested models, sequentially constraining the intercept and slope means, the variances, and the covariance to be equal across conditions. Using a chi-square difference test I found that the trajectories for the two conditions were comparable. I next added a predictor of the intercept and slope. Here is where I got confused. When I maintain all the previous equality constraints between experimental conditions (the mean constraint now becomes an intercept constraint), the results indicate that the predictor influences both intercepts in the same way and both slopes in the same way. However, if I relax the earlier constraint on the means (leaving the variances and covariance equal), the results indicate the predictor influences the slope and intercept for one condition but not the other. I don't quite see how fixing the means (the paths from the constant) to be equal would have any influence on the paths from the predictor. Can you explain?
 bmuthen posted on Tuesday, September 09, 2003 - 4:49 pm
If the intercepts in the regressions on the predictor are held equal when in reality they are not, then it seems possible for the significance of the paths from the predictor to change in this way when the intercept equality is relaxed. Imagine a scatterplot of y against x for each of two data sets, one with a higher intercept and only a small positive slope and the other with a lower intercept and a large positive slope. Holding the regression intercepts equal across the data sets could cause the first slope to be estimated as larger and perhaps significant.
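A quick numerical sketch of this point (all group labels, sample sizes, and parameter values below are hypothetical, not from the posters' models): two simple regressions are fit, once with separate intercepts and once with the intercepts forced to be equal, to show how the slope of the "high intercept, small slope" group is pulled upward under the equality constraint.

```python
# Hypothetical illustration of holding regression intercepts equal across
# two data sets; not the poster's actual model or data.
import numpy as np

rng = np.random.default_rng(0)
n = 100
x_a = rng.uniform(0, 10, n)
x_b = rng.uniform(0, 10, n)

# Group A: high intercept, small slope.  Group B: low intercept, large slope.
y_a = 5.0 + 0.1 * x_a + rng.normal(scale=1.0, size=n)
y_b = 1.0 + 0.8 * x_b + rng.normal(scale=1.0, size=n)

def ols(X, y):
    # Least-squares coefficients via numpy's lstsq.
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Relaxed model: each group gets its own intercept and slope.
b_a = ols(np.column_stack([np.ones(n), x_a]), y_a)
b_b = ols(np.column_stack([np.ones(n), x_b]), y_b)

# Constrained model: one common intercept, separate slopes.
# Columns: [common intercept, x in group A (0 in B), x in group B (0 in A)].
zeros = np.zeros(n)
X_eq = np.column_stack([
    np.ones(2 * n),
    np.concatenate([x_a, zeros]),
    np.concatenate([zeros, x_b]),
])
b_eq = ols(X_eq, np.concatenate([y_a, y_b]))

print(f"free intercepts : slope A = {b_a[1]:.2f}, slope B = {b_b[1]:.2f}")
print(f"equal intercepts: slope A = {b_eq[1]:.2f}, slope B = {b_eq[2]:.2f}")
```

With these hypothetical values, the constrained fit forces a compromise intercept, so group A's slope rises well above 0.1 and group B's falls below 0.8, which is the pattern described above.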
 thormod idsoe posted on Monday, November 10, 2003 - 6:09 am
It is said that "to establish complete measurement invariance of factors, the intercepts of continuous outcomes or the thresholds of categorical outcomes and the factor loadings should be held equal across time. Deviations from these equalities result in partial measurement invariance". I'm struggling to understand why it makes sense to require complete invariance over time. The loadings are fine, but why does it make sense to place equality constraints on the indicator intercepts? Doesn't this amount to expecting no difference in average level (for the whole group) over time? Or is it possible to have mean differences at the factor level while constraining the indicator intercepts? Since I feel there is something here I don't understand, do you know of any paper explaining this?
 Linda K. Muthen posted on Monday, November 10, 2003 - 6:40 am
Yes, it is possible to have factor mean differences together with measurement intercept invariance and still have observed variable means change across time. The intercepts/thresholds represent what in IRT are referred to as difficulty parameters. Non-invariance of these parameters represents differential item functioning (DIF). If an item or variable behaves differently across levels of a covariate or factor, I think it would be difficult to argue for measurement invariance.
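A compact way to see this, in standard measurement-model notation (an illustrative sketch, not from the original post): for a continuous indicator $y_{it}$ at time $t$,

$$
E(y_{it}) = \nu + \lambda\,\alpha_t ,
$$

where $\nu$ is the measurement intercept, $\lambda$ the loading, and $\alpha_t$ the factor mean at time $t$. Holding $\nu$ and $\lambda$ equal across time does not force the observed means to stay flat; it only requires that any change in $E(y_{it})$ be carried by the factor mean $\alpha_t$, which is exactly what allows growth to be interpreted at the factor level.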
 Anonymous posted on Sunday, November 28, 2004 - 7:27 pm
I am running some growth curves as well as some autoregressive models in Mplus 3.01. The data are Poisson distributed, so I am running the analyses using MLR and numerical integration (the type depending on the dimensions of integration). My question is threefold:

1) If I wish to test nested models, for instance fixing the autoregressive coefficients to be equal versus freely estimating them, how should I use the H0 chi-square statistic in the output? Ordinarily, for MLR estimation of non-count data, I would use the scaling correction factor to calculate the appropriate chi-square difference test statistic; however, I don't see this item in the output.

2) For comparison of non-nested models, do you have any recommendations (papers or otherwise) regarding the meaningfulness of the size of the differences between information criteria?

3) In evaluating the fit of the model-implied covariance matrix and mean vector to the observed ones, is there any way to extract standard fit measures such as the CFI or RMSEA from the Mplus output for Poisson data analyses?
 Linda K. Muthen posted on Monday, November 29, 2004 - 8:03 am
1. Use 2 times the loglikelihood difference. This is distributed as chi-square (a worked sketch appears below this post).
2. I think Nagin discusses BIC in a 1999 Psych Methods paper.
3. No.
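A small worked example of point 1, with made-up loglikelihood values (nothing here comes from the poster's output): the constrained model with equal autoregressive coefficients is compared against the model with the coefficients freely estimated.

```python
# Likelihood-ratio (loglikelihood difference) test with hypothetical numbers.
from scipy.stats import chi2

ll_constrained = -2451.72   # H0: autoregressive coefficients held equal (hypothetical)
ll_free = -2448.10          # H1: coefficients freely estimated (hypothetical)
df = 2                      # number of equality constraints released (hypothetical)

lr_stat = 2.0 * (ll_free - ll_constrained)   # 2 times the loglikelihood difference
p_value = chi2.sf(lr_stat, df)               # upper-tail chi-square p-value

print(f"LR statistic = {lr_stat:.2f}, df = {df}, p = {p_value:.3f}")
```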
 Gerald Lackey posted on Wednesday, August 16, 2006 - 12:09 pm
I have a conditional multivariate growth curve model with categorical ordinal outcomes for one of the growth curves and a measurement model with categorical indicators for the other growth curve. I believe that by default the model is identified by standardizing the y*'s of each of these categorical outcomes. I am trying to follow the suggestion in Bollen and Curran's (2006) LCM book (page 234) to free the mean/variance of these latent response variables and identify the model by fixing the first two thresholds to 0 and 1. I can easily set these threshold constraints, but I am having trouble freeing the mean/variance. When I simply put in [indicator_name]; I get a message telling me the command is being ignored. Am I confused about something, or is there a way to free the means of these y*'s? Thanks.
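For readers puzzled by why the two identification schemes are interchangeable, the usual underlying-variable logic runs as follows (an illustrative sketch, not a statement about the poster's model): for an ordinal indicator $u$ with latent response variable $y^*$ having mean $\mu^*$ and standard deviation $\sigma^*$,

$$
P(u > c) = F\!\left(\frac{\mu^* - \tau_c}{\sigma^*}\right),
$$

for link function $F$ (probit or logit). Only the standardized differences $(\tau_c - \mu^*)/\sigma^*$ are identified, so the location and scale of $y^*$ must be fixed somewhere: either standardize $y^*$ (the default described above) or, as in the Bollen and Curran parameterization, fix two thresholds (e.g., $\tau_1 = 0$, $\tau_2 = 1$) and free $\mu^*$ and $\sigma^*$.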
 Linda K. Muthen posted on Wednesday, August 16, 2006 - 12:36 pm
It is hard to understand what is happening without seeing your model. Please send your input, data, output, and license number to support@statmodel.com.