Different log-likelihoods for equivalent models
 Joseph Coveney posted on Monday, July 14, 2008 - 10:00 am
I'm using a random-intercept linear model as an exercise to learn the syntax of Mplus. A data file is generated using MONTECARLO. I've come up with five approaches to specifying the model:

#1 a latent variable with response variables (y1-yn) as indicators that are also regressed on x1-xn

#2 TWOLEVEL with a DATA WIDETOLONG followed by %WITHIN% y ON x;

#3 constrains the response variables' covariances to be equal

#4 uses i BY y1-yn@1 syntax

#5 also uses growth-model syntax (TSCORES and AT), and then constrains the variance of the random slope to zero (s@0;).

All have various constraints so that each yields the same four free parameters: the intercept and slope fixed effects, and the two variance components. ML estimation is specified throughout.
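A small simulation sketch of the common structure may help fix ideas (in Python rather than Mplus; all parameter values and variable names here are illustrative, not taken from the poster's input files):

```python
import numpy as np

rng = np.random.default_rng(1)
n_clusters, n_per = 200, 5
gamma0, gamma1 = 1.0, 0.5   # fixed effects: intercept and slope
var_u, var_e = 0.9, 1.0     # variance components: random intercept, residual

u = rng.normal(0.0, np.sqrt(var_u), n_clusters)            # random intercepts
x = rng.normal(0.0, 1.0, (n_clusters, n_per))
e = rng.normal(0.0, np.sqrt(var_e), (n_clusters, n_per))
y = gamma0 + gamma1 * x + u[:, None] + e

# Implied within-cluster covariance of y given x is compound symmetry:
# var_u + var_e on the diagonal, var_u off the diagonal -- exactly the
# equal-covariance constraint Method 3 imposes directly.
resid = y - (gamma0 + gamma1 * x)
print(np.cov(resid, rowvar=False).round(2))
```

The empirical residual covariance matrix should show roughly 1.9 on the diagonal and 0.9 off it, matching the compound-symmetry structure all five parameterizations encode.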

All give identical fixed-effect estimates and residual variance, but the log-likelihoods (LL) and random-intercept variance estimates differ:

Method   -LL (H0)   Variance
  1        1823      0.916
  2         969      0.917
  3        1823      0.916
  4        1823      0.916
  5         969      0.916

I can e-mail the input files privately.

A. Why do the LLs fall into two camps? The glib answer isn't gratifying.

B. What leads Method 2's variance to round to 0.917?
 Bengt O. Muthen posted on Monday, July 14, 2008 - 11:04 am
A. The methods differ in whether the likelihood is based on the conditional distribution [y | x] or the joint distribution [y, x]. Methods 1, 3, and 4 use the latter approach, typical for SEM, whereas Methods 2 and 5 use the former approach, which is more generally applicable (the choice makes a difference in some modeling).
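The two camps can be verified numerically. By the chain rule, log [y, x] = log [y | x] + log [x], and the [x] term does not involve the regression parameters, so the two likelihoods differ by a constant while the estimates agree. A sketch in Python (not Mplus; parameter values are illustrative):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(0)
n = 500
beta0, beta1, sd_e = 1.0, 0.5, 1.0
x = rng.normal(0.0, 1.0, n)
y = beta0 + beta1 * x + rng.normal(0.0, sd_e, n)

# Conditional approach: likelihood of [y | x] only
ll_cond = norm.logpdf(y, loc=beta0 + beta1 * x, scale=sd_e).sum()

# Joint (SEM-style) approach: likelihood of [y, x] as bivariate normal
mean = [0.0, beta0]                       # E[x], E[y]
cov = [[1.0, beta1],                      # Var(x),   Cov(x, y)
       [beta1, beta1**2 + sd_e**2]]       # Cov(x,y), Var(y)
ll_joint = multivariate_normal.logpdf(np.column_stack([x, y]), mean, cov).sum()

# The gap is exactly the marginal log-likelihood of x, a constant
# with respect to the y-model parameters.
ll_x = norm.logpdf(x, 0.0, 1.0).sum()
print(np.isclose(ll_joint, ll_cond + ll_x))  # True
```

Since ll_x is the same for every parameterization of the y-model, maximizing either likelihood gives the same estimates, but the reported -LL values sit in two camps, just as in the table above.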

B. If you sharpen the convergence criterion, you will probably get 0.916 here as well.
 Joseph Coveney posted on Monday, July 14, 2008 - 6:32 pm
Thank you. So the conditional distribution is what gets specified when you use ANALYSIS: TYPE = RANDOM;.

Can you recommend an entry point into the literature about this, especially, for gaining insight as to when the choice makes a difference in practice?
 Bengt O. Muthen posted on Tuesday, July 15, 2008 - 9:53 am
When all y's are continuous, normal-theory ML gives the same results whether the conditional or joint approach is taken - this is shown in a JASA article by Joreskog & Goldberger (1975?). When y's are categorical, the conditional approach makes less strong assumptions, as argued in the 1984 Psychometrika article by me. In general, statistical writing seems to go with the conditional approach because then you make distributional assumptions only for the residuals rather than for the whole y distribution. Why make a distributional assumption for [x] when you don't have to?
 Bengt O. Muthen posted on Tuesday, July 15, 2008 - 10:13 am
A good example of making distributional assumptions for the residuals, i.e. taking a conditional [y | x] approach, is the whole multilevel literature.
 Joseph Coveney posted on Tuesday, July 15, 2008 - 4:53 pm
Thank you very much. You've been very helpful.