Message/Author
Anonymous posted on Sunday, November 23, 2003 - 1:27 pm
I have 5 repeated measures (rep1, rep2, rep3, rep4, and rep5). I would like to model a compound symmetry and an autoregressive error structure, respectively. I tried to set up a command file myself, but Mplus gave me error messages. I looked for how to specify the error structure in the Growth Modeling section of the Mplus manual and in the addendum to the Mplus User's Guide, but I couldn't find it. How do I set up the command file? Could you give me an example of the MODEL command? Thanks
Compound symmetry is an intercept-only model. The MODEL command would be:

MODEL:
i BY rep1-rep5@1;
rep1-rep5 (1); ! hold residual variances equal

We would model autoregression as adjacent residual covariances:

rep1-rep4 PWITH rep2-rep5 (2);
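To show where these statements sit in a full run, here is a minimal sketch of a complete input file; the data file name (reps.dat) is a hypothetical assumption, not something given in the thread:

```
TITLE: Compound symmetry with adjacent residual covariances;
DATA: FILE IS reps.dat;          ! hypothetical file name
VARIABLE: NAMES ARE rep1-rep5;
MODEL:
i BY rep1-rep5@1;                ! random intercept only
rep1-rep5 (1);                   ! equal residual variances
rep1-rep4 PWITH rep2-rep5 (2);   ! equal adjacent residual covariances
```

Dropping the PWITH line gives the pure compound symmetry model, so the two specifications can be run and compared directly.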
Anonymous posted on Sunday, January 25, 2004 - 12:30 am
How can I model an unstructured error covariance matrix? I typed this:

MODEL:
rep1-rep5;
rep1 WITH rep2; rep1 WITH rep3; rep1 WITH rep4; rep1 WITH rep5;
rep2 WITH rep3; rep2 WITH rep4; rep2 WITH rep5;
rep3 WITH rep4; rep3 WITH rep5;
rep4 WITH rep5;

It doesn't seem right: Mplus didn't give any S.E. values. Could you help me set up a proper command for this? Thanks
You need to send the full output to support@statmodel.com so I can see why the model is not converging.
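As a side note on syntax, Mplus also accepts a list-to-list shorthand that requests all pairwise covariances at once, which keeps an unstructured specification compact; a sketch, assuming the same five variables:

```
MODEL:
rep1-rep5;                  ! free residual variances
rep1-rep5 WITH rep1-rep5;   ! all pairwise covariances (unstructured)
```

This is equivalent to listing the ten WITH statements individually; it does not by itself resolve a convergence problem.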
Hi Linda, I know that Mplus doesn't currently allow chi-square difference testing when there are model constraints in place. However, I want to test for the difference between a model with no covariance among errors and a first-order autocorrelated structure. Can you think of any way to "jury-rig" Mplus to allow such a test? Thanks for your time! mike
I think what you are saying is that when you use MODEL CONSTRAINT you do not get chi-square. You do, however, get loglikelihood values and can use -2 times the difference in loglikelihoods to test nested models. This difference value is a chi-square value.
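As a hypothetical worked example (the loglikelihood values below are invented for illustration): if the restricted model gives LL0 = -4321.7 and the unrestricted model gives LL1 = -4318.2, then

```
-2 * (LL0 - LL1) = -2 * (-4321.7 - (-4318.2)) = 7.0
```

which is referred to a chi-square distribution with degrees of freedom equal to the difference in the number of free parameters between the two models.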
True, however, perhaps I should give more info: I am running a model with an estimator robust to non-normality (which is why I am using the "difftest" option). Will this estimator bias the difference of the -2LL values just as it does the chi-square? If so, is there a quick way around this? Thanks again!
With weighted least squares, there is no loglikelihood. I am not sure if you are talking hypothetically or if you are actually getting a loglikelihood. If you are getting one, then check the summary of your analysis to see which estimator you are using. If this does not help, please send your output to support@statmodel.com so I can see exactly what you are doing.
Does increased model fit due to the specification of a correlated error structure (autocorrelation) and/or invariant residual variances (homoscedasticity) indicate that the errors actually do exhibit autocorrelation and/or homoscedasticity? Or is it possible that the increased model fit is spurious, due to misspecification of the underlying residual covariance structure? Thank you in advance.
It seems possible that the fit improvement may be spurious as you say.
Thank you very much for your prompt reply. So, do you suggest arguing theoretically for whichever error covariance structure is most appropriate to assume? I ask because I read the following in Willett, J. B. (2004). Investigating individual change and development: The multilevel model for change and the method of latent growth modeling. Research in Human Development, 1(1/2), 32-57, on page 44: "In fact, in any analysis of change it is good practice to try out several multilevel models with alternative error covariance structures and compare them, using standard CSA indices of model fit, in order to figure out which Level 1 error covariance structure is most appropriate for the research problem." I interpreted this quotation as advising to compare models with different assumptions about the error covariance structure and to select the one with the best fit. So if increased model fit might be spurious, do I run the risk of choosing a wrongly specified model?
I think one can never know if a model is correct, even if it fits well and better than other models. This is the equivalent-models dilemma. You just have to think of alternative models and, among those that fit well, see how different the results are. Aspects of models can of course be tested, such as the need for correlated errors, but you never know if a totally different model has generated the data. The real danger arises when you are forced to do a long sequence of model improvements, risking capitalizing on chance; see, e.g.: MacCallum, R.C., Roznowski, M., & Necowitz, L.B. (1992). Model modifications in covariance structure analysis: The problem of capitalizing on chance. Psychological Bulletin, 111, 490-504.
Thank you so much! That is really helpful to me!
I have a follow-up question: Would analyzing a hold-up sample be a good approach to this problem?
Excuse me, but I meant a hold-out sample, where you analyse several randomly selected subsamples (50-75%) of the entire sample.
It's always a good idea to cross-validate your results if you have a large enough sample to do so.
Hello, what is the default covariance structure used in multilevel models (fixed effects) such as:

ANALYSIS:
TYPE = COMPLEX TWOLEVEL;
ESTIMATOR IS ML;
ITERATIONS = 10000;
CONVERGENCE = 0.00005;
COVERAGE = 0.10;
MODEL:
%within%
yanti_OT ON wave0 Qwave;
yanti_OT ON c_fanti_OT;
yanti_OT ON c_komanti_OT;
%between%
yanti_OT;

How do I modify the default covariance structure to compare it with the unstructured, compound symmetry, autoregressive, etc.?
Compound symmetry, due to the random intercept.
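One way to compare alternative residual structures, in line with the earlier posts in this thread, is to restructure the data to wide format (one record per cluster, one column per wave) and specify each structure in a separate run; a sketch under that assumption, with hypothetical variable names y1-y4:

```
! Run 1 - compound symmetry: random intercept, equal residual variances
MODEL: i BY y1-y4@1;
       y1-y4 (1);

! Run 2 - adjacent (first-order) residual covariances added
MODEL: i BY y1-y4@1;
       y1-y4 (1);
       y1-y3 PWITH y2-y4 (2);

! Run 3 - unstructured: free variances and all covariances
MODEL: y1-y4;
       y1-y4 WITH y1-y4;
```

Each MODEL block above is a separate analysis, not one input file; the resulting fits can then be compared across runs.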