Anonymous posted on Sunday, November 23, 2003 - 1:27 pm
I have 5 repeated measures (rep1, rep2, rep3, rep4, and rep5). I would like to model compound symmetry and autoregressive error structures, respectively. I tried to set up a command file myself, but Mplus gave me error messages.
I looked for how to specify the error structure in the Growth Modeling section of the Mplus manual and in the addendum to the Mplus User's Guide, but I can't find one.
How do I set up a command file? Could you give me an example of the MODEL commands?
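In case it helps, here is a minimal sketch of what such MODEL commands might look like, assuming a linear growth model for rep1-rep5 with equally spaced occasions; the names in parentheses (v, c, c1, etc.) are arbitrary labels used to impose equality constraints, so check this against your own setup before using it. For compound symmetry (one common residual variance, one common residual covariance):

```
MODEL:
  i s | rep1@0 rep2@1 rep3@2 rep4@3 rep5@4;

  ! Compound symmetry: equal residual variances,
  ! equal residual covariances
  rep1-rep5 (v);
  rep1 WITH rep2-rep5 (c);
  rep2 WITH rep3-rep5 (c);
  rep3 WITH rep4-rep5 (c);
  rep4 WITH rep5 (c);
```

For an AR(1) structure, one way is to label each lag's covariances and tie them to a single autocorrelation via MODEL CONSTRAINT:

```
MODEL:
  i s | rep1@0 rep2@1 rep3@2 rep4@3 rep5@4;

  ! AR(1): residual covariances decay with the lag
  rep1-rep5 (v);
  rep1 WITH rep2 (c1);
  rep2 WITH rep3 (c1);
  rep3 WITH rep4 (c1);
  rep4 WITH rep5 (c1);
  rep1 WITH rep3 (c2);
  rep2 WITH rep4 (c2);
  rep3 WITH rep5 (c2);
  rep1 WITH rep4 (c3);
  rep2 WITH rep5 (c3);
  rep1 WITH rep5 (c4);

MODEL CONSTRAINT:
  NEW(rho);
  c1 = v*rho;
  c2 = v*rho**2;
  c3 = v*rho**3;
  c4 = v*rho**4;
```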
Hi Linda, I know that Mplus doesn't currently allow chi-square difference testing when there are model constraints in place. However, I want to test for the difference between a model with no covariance among errors and a first-order autocorrelated structure. Can you think of any way to "jury-rig" Mplus to allow such a test?
I think what you are saying is that when you use MODEL CONSTRAINT you do not get chi-square. You do, however, get loglikelihood values and can use -2 times the difference in loglikelihoods to test nested models. This difference value is a chi-square value.
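In symbols, writing L0 for the loglikelihood of the more restrictive (nested) model, L1 for the less restrictive model, and p0 and p1 for their numbers of free parameters:

```latex
T = -2\,(L_0 - L_1) \sim \chi^2_{df}, \qquad df = p_1 - p_0
```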
True; however, perhaps I should give more info: I am running the model with an estimator robust to non-normality (which is why I am using the DIFFTEST option). Will this estimator bias the difference of the -2LL values just as it does the chi-square? If so, is there a quick way around this?
With weighted least squares, there is no loglikelihood. I am not sure if you are talking hypothetically or if you are actually getting a loglikelihood. If you are getting one, then check the summary of your analysis to see which estimator you are using. If this does not help, please send your output to firstname.lastname@example.org so I can see exactly what you are doing.
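As an aside, if it turns out that the MLR estimator is being used, the raw loglikelihood difference does need a correction. A commonly used form is the Satorra-Bentler style scaled difference (described in the difference-testing notes on statmodel.com), where c0 and c1 are the reported scaling correction factors and p0 and p1 the numbers of free parameters, with subscript 0 denoting the nested model:

```latex
cd = \frac{p_0 c_0 - p_1 c_1}{p_0 - p_1}, \qquad
TRd = \frac{-2\,(L_0 - L_1)}{cd}
```

TRd is then referred to a chi-square distribution with p1 - p0 degrees of freedom.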
Does increased model fit due to the specification of a correlated error structure (autocorrelation) and/or invariant residual variances (homoscedasticity) indicate that the errors actually do exhibit autocorrelation and/or homoscedasticity? Or is it possible that the increased model fit is spurious, due to misspecification of the underlying residual covariance structure?
Thank you very much for your prompt reply. So, do you suggest arguing theoretically about which error covariance structure is most appropriate to assume?
I ask that question because I read the following in Willett, J. B. (2004). Investigating individual change and development: The multilevel model for change and the method of latent growth modeling. Research in Human Development, 1(1/2), 32-57, on page 44:
"In fact, in any analysis of change it is good practice to try out several multilevel models with alternative error covariance structures and compare them, using standard CSA indices of model fit, in order to figure out which Level 1 error covariance structure is most appropriate for the research problem."
I interpreted this quotation as saying to compare models with different assumptions about the error covariance structure and to select the one with the best fit. So if the increased model fit might be spurious, do I run the risk of choosing a wrongly specified model?
I think one can never know if a model is correct, even if it fits well and better than other models. This is the equivalent-models dilemma. You just have to think of alternative models and, among those that fit well, see how different the results are. Aspects of models can of course be tested, such as the need for correlated errors, but you never know whether a totally different model generated the data. The real danger arises when you are forced to do a long sequence of model improvements, risking capitalizing on chance; see, e.g.:
MacCallum, R. C., Roznowski, M., & Necowitz, L. B. (1992). Model modifications in covariance structure analysis: The problem of capitalizing on chance. Psychological Bulletin, 111, 490-504.