

I'm using a random intercept linear model as an exercise to learn the syntax of Mplus. A data file is generated using MONTECARLO. I've come up with five approaches to the model:

#1 a latent variable with the response variables (y1-yn) as indicators, which are also regressed on x1-xn
#2 TWOLEVEL with a DATA WIDETOLONG, followed by %WITHIN% y ON x;
#3 constrains the response variables' covariances to be equal
#4 uses i BY y1-yn@1 syntax
#5 uses growth model syntax (TSCORES and AT), then constrains the variance of the random slope to zero (s@0;)

All have various constraints to yield four free parameters: the intercept and slope fixed effects, and the two variance components. ML is specified throughout. All give identical fixed effects and residual variance, but there are differences in log likelihood (LL) and random-effect variance:

Method  LL (H0)  Variance
1       1823     0.916
2       969      0.917
3       1823     0.916
4       1823     0.916
5       969      0.916

I can email the input files privately.

A. Why do the LLs fall into two camps? The glib answer isn't gratifying.
B. What leads Method 2's variance to round to 0.917?


A. The methods differ in whether the likelihood is based on the conditional distribution [y | x] or the joint distribution [y, x]. Methods 1, 3, and 4 use the latter approach, typical for SEM, whereas the other methods use the former approach, which is more generally applicable (the choice makes a difference in some modeling). B. If you sharpen the convergence criterion you probably get 0.916 here also.
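The two "camps" of LL values can be reproduced numerically. The sketch below (my own illustration, not from the thread; variable names and seed are arbitrary) uses a single-level regression for simplicity: for normal data, the joint log-likelihood factors as logL(y, x) = logL(y | x) + logL(x), so joint-approach and conditional-approach methods report log-likelihoods that differ by exactly the marginal [x] term, while the regression estimates agree.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(1.0, 2.0, n)
y = 0.5 + 0.8 * x + rng.normal(0.0, 1.0, n)

def normal_loglik(z, mean, var):
    """Sum of univariate normal log-densities."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (z - mean) ** 2 / var)

# Conditional approach [y | x]: ML regression fit
b, a = np.polyfit(x, y, 1)                  # slope, intercept (OLS = ML)
sig2 = np.mean((y - (a + b * x)) ** 2)      # ML residual variance
ll_cond = normal_loglik(y, a + b * x, sig2)

# Marginal model [x]: ML normal fit (np.var uses ddof=0, i.e. ML)
ll_marg = normal_loglik(x, x.mean(), x.var())

# Joint approach [y, x]: bivariate normal at its ML estimates
S = np.cov(np.vstack([x, y]), bias=True)    # ML covariance matrix
ll_joint = -0.5 * n * (2 * np.log(2 * np.pi) + np.log(np.linalg.det(S)) + 2)

# The joint LL equals the conditional LL plus the [x] term, so the two
# "camps" of reported log-likelihoods differ by exactly ll_marg.
print(np.isclose(ll_joint, ll_cond + ll_marg))
```

The same decomposition applies per cluster in the random intercept model; the fixed effects and variance components are unaffected because the [x] piece involves none of those parameters.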


Thank you. So the conditional distribution is what gets specified when you use ANALYSIS: TYPE = RANDOM;. Can you recommend an entry point into the literature about this, especially for gaining insight as to when the choice makes a difference in practice?


When all y's are continuous, normal-theory ML gives the same results whether the conditional or joint approach is taken; this is shown in a JASA article by Jöreskog & Goldberger (1975?). When y's are categorical, the conditional approach makes less strong assumptions, as argued in my 1984 Psychometrika article. In general, statistical writing seems to go with the conditional approach because then you make distributional assumptions only for the residuals rather than for the whole y distribution. Why make a distributional assumption for [x] when you don't have to?
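The equivalence for continuous y can be seen from the factorization of the joint normal likelihood; a sketch of the argument (my notation, not from the thread):

```latex
% The joint density factors as [y, x] = [y | x][x], so
\log L(\beta, \sigma^2, \mu_x, \Sigma_x \mid y, x)
  = \underbrace{\sum_i \log f(y_i \mid x_i;\, \beta, \sigma^2)}_{\text{conditional } [y \mid x]}
  + \underbrace{\sum_i \log f(x_i;\, \mu_x, \Sigma_x)}_{\text{marginal } [x]}
```

Because the regression parameters (β, σ²) and the x-distribution parameters (μₓ, Σₓ) are distinct, the two terms are maximized separately: the ML estimates of the regression parameters coincide under either approach, while the reported log-likelihoods differ by the marginal [x] term. With categorical y there is no such clean factorization under the joint modeling assumptions, which is where the choice starts to matter.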


A good example of making distributional assumptions for the residuals, i.e. taking a conditional [y | x] approach, is the whole multilevel literature.


Thank you very much. You've been very helpful. 
