Comparison of Mplus results with R & ...
Mplus Discussion > Multilevel Data/Complex Sample >
 Mariya Shiyko posted on Monday, September 22, 2008 - 8:32 pm
Hello,
I am working with longitudinal data where observations per individual range from 3 to 30 days. I am conceptualizing my model similar to Example 9.16 of your user's guide, with a count variable as an outcome. Ultimately, I would like to do GMM (and introduce latent growth classes), but I started by trying to replicate the analysis of a mixed-effects model I performed in R (with the lme4 package).

Below is my syntax:
VARIABLE: ...
CLUSTER = SubjID;
COUNT = allSmo;
WITHIN = SRSday;
BETWEEN = FagerC numQA SEMeanC;
ANALYSIS: TYPE = TWOLEVEL RANDOM;
MODEL: %WITHIN%
sl | allSmo ON SRSday;
%BETWEEN%
allSmo ON FagerC;
sl ON numQA SEMeanC;
allSmo WITH sl;

When I ran this model, the results were very different from those I got in R and HLM (Raudenbush): the parameter estimates, although similar in direction, differ in magnitude, and all model fit indices are very different. Also, the user's guide says that the intercept is set as random automatically. I am not sure that was the case in my model, since the number of continuous latent variables is listed as 1 (I would think that both the intercept and the slope are latent continuous variables). Also, residual variance was estimated only for the slope, not the intercept. What do I need to change? Thank you very much for your attention!
 Bengt O. Muthen posted on Tuesday, September 23, 2008 - 6:18 am
You should get exactly the same results as other programs. Please send your input, output, data, and license number to support@statmodel.com.
 Xu, Man posted on Saturday, October 04, 2008 - 6:53 am
Could it be due to rounding errors? I have also found that the results from Mplus and MLwiN are often very slightly different.
 Bengt O. Muthen posted on Saturday, October 04, 2008 - 12:05 pm
Slight differences are often due to slightly different convergence criteria - which can be sharpened to move the programs' results closer to each other.
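[Editor's note: In Mplus, the convergence criterion can be tightened in the ANALYSIS command. A minimal sketch (the value shown is illustrative, not a recommendation):

ANALYSIS: TYPE = TWOLEVEL;
ESTIMATOR = ML;
CONVERGENCE = 0.000001;

Other packages (HLM, MLwiN, lme4) have analogous tolerance settings; tightening both programs' criteria should move their estimates closer together.]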
 Lois Downey posted on Wednesday, July 07, 2010 - 5:16 pm
In my study of repeated measures within patients, I'm having the same difficulty Mariya Shiyko noted in the post of 22 Sep 2008. My results from HLM and Mplus two-level give somewhat different estimates for fixed effects (6.609 vs. 6.357, respectively, for the intercepts; -0.005 and -0.004 for the slopes) and also different estimates for some variance components (5.249 vs. 3.483 for intercepts; both programs showing <0.001 for slope variance; residual variances of 2.249 vs. 2.291). Deviances were also slightly different (8649.572 in HLM, compared with LL of -4321.733 in Mplus).
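[Editor's note on comparing the fit statistics above: HLM reports the deviance, while Mplus reports the loglikelihood (LL). The two are related by

deviance = -2 * LL

so the Mplus LL of -4321.733 corresponds to a deviance of -2 * (-4321.733) = 8643.466, in the same range as HLM's 8649.572; the remaining gap reflects the estimation differences discussed in the replies below.]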

I also notice that HLM indicates that it is estimating only 4 parameters, whereas Mplus shows 5 free parameters. Is this related to the fact that although HLM estimates residual variance, it does not compute its significance, whereas Mplus does?

You indicated to Mariya that the results from the two programs should match exactly. However, I wonder whether in your investigation of the problem Mariya reported, you discovered some difference between the computation methods used by HLM vs. Mplus that would account for differences such as these.

Thank you.
 Linda K. Muthen posted on Wednesday, July 07, 2010 - 5:27 pm
Differences seen are usually due to using different estimators. Maximum likelihood should be used in both. Also, if the models do not have the same parameters, the results will differ. You would need to see which parameter differs between the two programs and change one of them so that you are estimating the same model. Differences will also be seen if the data are different, for example, if there are missing data and one program uses listwise deletion and the other doesn't. Other than that, with the same data, model, and estimator, you will get the same results. If you can't figure it out, please send the HLM and Mplus outputs along with your license number to support@statmodel.com.
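[Editor's note: To align the estimator and the missing-data handling with a program that drops incomplete cases, the relevant Mplus settings are, as a sketch (mydata.dat is a placeholder filename):

DATA: FILE = mydata.dat;
LISTWISE = ON;
ANALYSIS: TYPE = TWOLEVEL;
ESTIMATOR = ML;

By default Mplus uses all available data under ML rather than listwise deletion, which is one common source of the discrepancies described above.]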
 Jamie Vaske posted on Monday, August 23, 2010 - 5:15 pm
I am testing an unconditional model with a binary variable. I have noticed that the significance test of the variance component in HLM gives me a different substantive result than the significance test of the variance estimate in Mplus. I use the same estimator (ML) and receive similar estimates of the coefficients and standard errors. However, the chi-square test in HLM shows a significant variance estimate (p = .043), while the z-test in Mplus suggests a non-significant variance estimate (p = .146). Is there a reason to use the z distribution (with a binary variable) rather than the chi-square? Thank you for your assistance!
 Linda K. Muthen posted on Tuesday, August 24, 2010 - 8:14 am
z squared should be equal to the chi-square value. If you want further information on the differences you see, send the HLM and Mplus outputs along with your license number to support@statmodel.com.
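[Editor's note, spelling out the relation in the reply above: if z is the Wald statistic (estimate divided by its standard error), then z squared follows a chi-square distribution with 1 degree of freedom, and

P(chi-square_1 > z^2) = P(|Z| > |z|)

so the two tests yield the same two-sided p-value whenever they test the same parameter against the same reference distribution. A substantive disagreement like the one described therefore indicates the two programs are not computing the same test.]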
 Linda K. Muthen posted on Tuesday, August 24, 2010 - 10:04 am
Thanks for sending the outputs. You need to use maximum likelihood in HLM to compare to Mplus.
 Jae Wan Yang posted on Sunday, June 07, 2015 - 2:48 am
Hello Professors Muthén,
I am analyzing a model in Mplus with one level-2 predictor (IV), several level-1 covariates, and a level-1 DV. The results are quite different from the results with the HLM software. Could you please check my code? If you are willing, I can send the outputs.

USEVARIABLES ARE NNumID Blaugenderindex
Incivil los female D5;

MISSING are All(999);
BETWEEN ARE Blaugenderindex;
CLUSTER IS NNumID;
ANALYSIS: TYPE IS TWOLEVEL;
MODEL:
%BETWEEN%
Blaugenderindex Incivil;
Incivil ON Blaugenderindex(a);
incivil on female;
incivil on los;
incivil on D5;
OUTPUT: TECH1 TECH8 CINTERVAL;
 Linda K. Muthen posted on Sunday, June 07, 2015 - 8:58 am
When a variable measured on the individual level is not put on the WITHIN list, a latent variable decomposition is done in Mplus. See Examples 9.1 and 9.2. To be equivalent to HLM, you need to create cluster-level variables for the variables measured on the individual level and used on the between level. See the CLUSTER_MEAN option of the DEFINE command. Then put these variables on the BETWEEN list.
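[Editor's note, a minimal sketch of that suggestion with hypothetical variable names (clus the cluster ID, y the outcome, x1 an individual-level variable):

VARIABLE: USEVARIABLES = clus y x1 bx1;  ! bx1, created in DEFINE, goes last
CLUSTER = clus;
BETWEEN = bx1;
DEFINE: bx1 = CLUSTER_MEAN (x1);
ANALYSIS: TYPE = TWOLEVEL;
MODEL: %BETWEEN%
y ON bx1;

Note that variables created in DEFINE must be listed at the end of the USEVARIABLES list.]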
 Jae Wan Yang posted on Monday, June 08, 2015 - 8:08 am
Thank you so much! I will follow your advice and see what happens.

Best,
 Jae Wan Yang posted on Friday, June 12, 2015 - 9:45 pm
Dear Professor Muthen,
I followed your suggestion and got this error “*** ERROR in VARIABLE command, TYPE=TWOLEVEL requires specification for the CLUSTER option.” although I specified the cluster option.
Could you please check my code? I wonder if I can use the CLUSTER_MEAN option for categorical variables. All three covariates I used with the CLUSTER_MEAN option were categorical dummy variables (female, los, d5). Do you think this was a problem?

Define:
Bfemale = CLUSTER_MEAN (female);
BLos = CLUSTER_MEAN (los);
BD5 = CLUSTER_MEAN (d5);
USEVARIABLES ARE NNumID Blaugenderindex
Incivil BLos Bfemale BD5;

MISSING are All(999);
BETWEEN ARE Blaugenderindex
BLos Bfemale BD5;
CLUSTER IS NNumID;
ANALYSIS: TYPE IS TWOLEVEL;
MODEL:
%BETWEEN%
Blaugenderindex Incivil;
Incivil ON Blaugenderindex;
incivil on Bfemale;
incivil on BLos;
incivil on BD5;
OUTPUT: TECH1 TECH8 CINTERVAL;
Thank you so much!
 Jae Wan Yang posted on Friday, June 12, 2015 - 9:50 pm
Oh, by the way, Blaugenderindex is an observed level-2 variable. Thanks.
 Linda K. Muthen posted on Saturday, June 13, 2015 - 6:14 am
Please send the output and your license number to support@statmodel.com.
 LING, Chuding posted on Thursday, July 27, 2017 - 12:01 am
I searched the archives of this discussion forum and found this post. For those who are interested in the comparison of the results from different programs, you may have a look at the following document:
https://stat.utexas.edu/images/SSC/documents/SoftwareTutorials/MultilevelModeling.pdf