I am working on the data analysis for my dissertation, which uses multilevel regression. While running the analysis in Mplus, I got the error message below. I tried to solve the problem by increasing the starting values, but that did not work. May I ask your advice on how to solve this? I look forward to hearing from you. The relevant output is below. Thank you in advance for your help.
Cluster variable    ID
Between variables   ACAR_M SMAG_M COMP_M TINSB_M TMG_M TFOS_M SUMSDS

Estimator                                    MLR
Information matrix                           OBSERVED

Optimization Specifications for the Quasi-Newton Algorithm for Continuous Outcomes
  Maximum number of iterations               1000
  Convergence criterion                      0.100D-05
Optimization Specifications for the EM Algorithm
  Maximum number of iterations               500
  Convergence criteria
    Loglikelihood change                     0.100D-02
    Relative loglikelihood change            0.100D-05
    Derivative                               0.100D-02
Optimization Specifications for the M step of the EM Algorithm for Categorical Latent Variables
  Number of M step iterations                1
  M step convergence criterion               0.100D-02
  Basis for M step termination               ITERATION
Optimization Specifications for the M step of the EM Algorithm for Censored, Binary or Ordered Categorical (Ordinal), Unordered Categorical (Nominal) and Count Outcomes
  Number of M step iterations                1
  M step convergence criterion               0.100D-02
  Basis for M step termination               ITERATION
  Maximum value for logit thresholds         15
  Minimum value for logit thresholds         -15
  Minimum expected cell size for chi-square  0.100D-01
Optimization algorithm                       EMA
Integration Specifications
  Type                                       STANDARD
  Number of integration points               15
  Dimensions of numerical integration        1
  Adaptive quadrature                        ON
  Progressive quadrature stages              1
  Cholesky                                   ON

Input data file(s)    Study2.Mplus.Addmean.031706.dat
Input data format     FREE
THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO A NON-ZERO DERIVATIVE OF THE OBSERVED-DATA LOGLIKELIHOOD.
THE MCONVERGENCE CRITERION OF THE EM ALGORITHM IS NOT FULFILLED. CHECK YOUR STARTING VALUES OR INCREASE THE NUMBER OF MITERATIONS. ESTIMATES CANNOT BE TRUSTED. THE LOGLIKELIHOOD DERIVATIVE FOR PARAMETER 7 IS -0.18718467D+02.
Thank you so much for your kind reply. I am sorry for posting the output on this board; I simply thought it might help you understand my problem. I tried to delete it, but I could not. Sorry again.
I know this may be a silly question, but I am not good at Mplus. Could you please let me know how to increase the number of MITERATIONS? If there is syntax for it, please let me know. I hope this question does not bother you.
Look up MITERATIONS in the Mplus User's Guide and choose a number larger than the default. As I said earlier, if you have further problems of this type, you need to contact firstname.lastname@example.org and provide the information I asked for.
A reliability correction in regression is possible using:
f BY x@1; x@a; y ON f;
where a is the error variance of x: a = (1 - reliability) * Var(x).
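As a small numeric illustration of the formula above (the reliability and variance values here are purely hypothetical; plug in your own estimates), the fixed error variance a to be used in the x@a statement could be computed as:

```python
def error_variance(reliability: float, observed_variance: float) -> float:
    """a = (1 - reliability) * Var(x): the error variance fixed via x@a."""
    return (1.0 - reliability) * observed_variance

# Hypothetical values: reliability .80, observed variance 1.50
a = error_variance(0.80, 1.50)
print(round(a, 4))  # 0.3
```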
My aim is to use this in multilevel regression with Rasch estimates as IVs on both levels. Reliability is calculated using the SEs from the IRT software. I'd like to apply this with (A) latent decomposition of the covariates and (B) observed group means, and tried:
A) %within% f by x@1; x@a; y on f;
%between% y on x;
x is grand-mean centered.
Output: "this variable will be treated as a y-variable on both levels: x"
B) between = xb;
%within% f by xw@1; xw@a; y on f;
%between% y on xb;
xw is group-mean centered; xb is the observed group mean.
Comparing the results of A and B with a regression on latent variables (two-level Rasch), there is much higher agreement than without the correction.
A) How is the variance decomposed between the levels? Is it OK to have Mplus decompose the variance, or should I use observed group means?
B) With group centering, I still use the xw@a correction. Should I then use the within variance instead of the total variance? Must the reliability of the group means be taken into account (group sizes ~20)?
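On the last question: one common way to gauge the reliability of an observed group mean uses the intraclass correlation and the group size (this is the standard formula from the multilevel literature, not something stated in this thread; the ICC value below is hypothetical):

```python
def group_mean_reliability(icc: float, n: int) -> float:
    """Reliability of an observed group mean based on n members:
    n * ICC / (1 + (n - 1) * ICC)."""
    return n * icc / (1.0 + (n - 1) * icc)

# With group sizes of about 20 and a hypothetical ICC of .10:
print(round(group_mean_reliability(0.10, 20), 3))  # 0.69
```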
I read the web note but still need some clarification.
1) I understand that if I use latent covariates, the predictor is a latent variable and has the "within-between status" (web note). But in the model I described (Model A), the regression is on f at level 1 and on x at level 2. f is defined by x but is not actually x. I thought about using f as the predictor on both levels, but this does not work. I'm not sure my model specification is correct, since I use the same predictor corrected for unreliability on within (f) but uncorrected on between (x).
2) In Model B I use regular group centering, so the predictor variance on within should be the variance of the deviation scores. I wonder whether, for the calculation of the error variance, I may then use (1 - reliability) * within variance, with the reliability calculated from the IRT SEs.
I think your Model A approach is the most straightforward. I would think you would get very similar results to Model A if instead you declared Within = x and used an observed, centered between-level variable "xb" on Between. That is similar to your B approach, although you would have to declare Within = xw and not do group centering.
Yan Liu posted on Wednesday, November 28, 2012 - 3:28 pm
Dear Dr. Muthen,
I am working on a multilevel regression analysis with a random intercept only, for a continuous outcome variable. I have two questions.
(1) In the output, I see that the variance-covariance (correlation) matrix is provided at both the within and between levels. How are these var-cov matrices computed? Are they computed as in multilevel SEM, where they are additive?
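On the additivity question: in two-level models the total covariance decomposes additively, Sigma_total = Sigma_within + Sigma_between. A small simulation (my own sketch of the decomposition, not Mplus internals) shows how the pooled within-cluster variance and the variance of the cluster means, corrected for sampling error, recover the two components:

```python
import random

random.seed(0)
n_clusters, n_per = 200, 50
sigma_b, sigma_w = 1.0, 2.0  # true between- and within-cluster SDs

# Simulate y_ij = u_j + e_ij
clusters = []
for _ in range(n_clusters):
    u = random.gauss(0.0, sigma_b)
    clusters.append([u + random.gauss(0.0, sigma_w) for _ in range(n_per)])

means = [sum(c) / n_per for c in clusters]
grand = sum(means) / n_clusters

# Pooled within-cluster variance (deviations from cluster means)
within = sum((y - m) ** 2 for c, m in zip(clusters, means) for y in c) \
         / (n_clusters * (n_per - 1))

# Between variance: variance of cluster means minus the within/n sampling error
between = sum((m - grand) ** 2 for m in means) / (n_clusters - 1) - within / n_per

print(round(within, 1), round(between, 1))  # roughly 4.0 and 1.0
```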
(2) Does Mplus use pseudo-maximum likelihood for multilevel regression analysis?
I ran a two-level regression with a dichotomous outcome variable. This is an intercept-only model with all the level-2 slopes fixed. The sample size is around 2,000. Mplus automatically used MLR as the estimator.
I am mostly interested in whether a level-1 predictor is significant. The results of the Wald test are reasonable: the predictor is significant in about 10 of 30 cases, which agrees with substantive knowledge. However, if I use the -2 loglikelihood difference test, the predictor is significant in all 30 cases. I did use the scaling factor to calculate the scaled chi-square difference. While the difference in degrees of freedom is only 1, the -2 loglikelihood drops by at least 200 when this predictor is added to the model.
I always thought the Wald test and the likelihood ratio test should produce more or less equivalent results with a large sample size. However, I am very confused by the drastically different results in this case. Have you ever heard of or experienced such a thing? What could have gone wrong, in your view? Many thanks for your comments. I really appreciate it. Hongli
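For reference, the scaled difference test mentioned above combines the two loglikelihoods with their MLR scaling correction factors and parameter counts. A minimal sketch, with all numeric inputs purely hypothetical:

```python
def scaled_lr_test(L0, c0, p0, L1, c1, p1):
    """Scaled -2*loglikelihood difference test for MLR (Satorra-Bentler style).
    L0, c0, p0: loglikelihood, scaling factor, parameter count (nested model);
    L1, c1, p1: the same for the comparison model (p1 > p0).
    Returns the scaled chi-square statistic and its degrees of freedom."""
    cd = (p0 * c0 - p1 * c1) / (p0 - p1)  # difference-test scaling correction
    trd = -2.0 * (L0 - L1) / cd           # scaled -2LL difference
    return trd, p1 - p0

# Hypothetical models: 39 vs. 47 parameters, MLR scaling factors 1.450 / 1.546
trd, df = scaled_lr_test(-2606.0, 1.450, 39, -2583.0, 1.546, 47)
print(round(trd, 2), df)  # 22.84 8
```

The statistic is then referred to a chi-square distribution with df degrees of freedom.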