Anonymous posted on Thursday, September 30, 2004 - 5:51 am
I have a question concerning starting values: which starting values should be used? Is it possible just to try different values like 0.5, 1.0, or -1.0 and see which ones produce the best CFI, TLI, etc.? Could you explain what changes when I use different starting values? Please excuse the very basic question, but I'm new to Mplus.
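For readers with the same question: starting values in Mplus are given with an asterisk after a parameter in the MODEL command. A minimal sketch (variable names hypothetical):

```
MODEL:
  f BY y1          ! first loading fixed at 1 by default
       y2*0.8      ! starting value of 0.8 for this loading
       y3*0.8;
  f*1.5;           ! starting value for the factor variance
```

Starting values should not change the converged solution (or CFI/TLI); for a well-behaved model they only affect whether and how quickly the iterations reach it.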
I am doing a path analysis. The IVs are 3 observed continuous variables. The DVs are 3 latent continuous variables, specified as factor 1 by a b c; factor 2 by d e f;
The output told me that the model did not converge and that the problem involved the covariance of the two latent variables. In the variance-covariance matrix, the pattern of covariances looked like this:
I am also getting a "NO CONVERGENCE. NUMBER OF ITERATIONS EXCEEDED." message. I have increased the number of iterations to 10,000 and am unsure how to determine which starting values to assign to which variables. Please advise.
Also, should I be concerned that the estimates for f2 are substantially higher than the estimates for f1?
Jungeun Lee posted on Monday, December 29, 2008 - 2:11 pm
I am estimating a CFA and am getting a 'NO CONVERGENCE. NUMBER OF ITERATIONS EXCEEDED.' warning. I increased the number of iterations to 10,000 but still got the same message... Any advice will be deeply appreciated. Thanks!
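For anyone unsure where the iteration limit is set: it goes in the ANALYSIS command. A minimal sketch (values illustrative, not recommendations):

```
ANALYSIS:
  ESTIMATOR = ML;
  ITERATIONS = 10000;    ! raise the iteration limit for the quasi-Newton algorithm
```

Keep in mind that if a model will not converge in the default number of iterations, raising the limit often just delays the same failure; the usual culprits discussed in this thread are variables on very different scales and poorly identified models.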
I ran into this problem as well. I got the message 'NO CONVERGENCE. NUMBER OF ITERATIONS EXCEEDED' even after I increased the number of iterations to 10,000. I know the range of my sample variances is too large, but I don't know how to rescale them. Thank you.
I have a sample of performance measures and use "age" as the time variable. With ESTIMATOR = ML, the model is estimable for ages 20-38. With ESTIMATOR = MLR, the model converges only for ages 20-33. The covariance coverage decreases as I increase the number of "waves" (ages) in the analysis due to drop-outs. What are the minimum coverage criteria for MLR (as opposed to ML)?
ML and MLR should behave the same. For further comments, send the relevant outputs and your license number to firstname.lastname@example.org.
Anna Potocki posted on Thursday, September 22, 2011 - 2:24 am
Hi, I am also getting the message "NO CONVERGENCE. NUMBER OF ITERATIONS EXCEEDED. FACTOR SCORES WILL NOT BE COMPUTED DUE TO NONCONVERGENCE OR NONIDENTIFIED MODEL." I am wondering whether there is any way to avoid this message and run the model, or whether the problem lies in the model itself. I'm sorry, I'm really new to Mplus. Could you help me? Thank you!
No, this applies to continuous variables. You should not rescale categorical variables.
Cory Dennis posted on Thursday, November 17, 2011 - 11:04 am
So I get a "NO CONVERGENCE. NUMBER OF ITERATIONS EXCEEDED" message when running my model. The problem appears to be between one of the latent variables with continuous indicators and one of the latent variables with ordinal indicators (using ULSMV). When I rescale the continuous indicators, the model converges.
On the other hand, I am able to get results with multiply imputed data sets using the original scale (with a warning on two of the data sets regarding theta). Should I rescale the continuous variables?
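For reference, rescaling a continuous indicator is usually done in the DEFINE command by dividing by a constant, so that its variance is on roughly the same order as the other variables. A sketch (variable name and constant hypothetical):

```
DEFINE:
  income = income/1000;   ! divide by a constant to shrink a very large variance
```

Dividing by a constant rescales the unstandardized estimates involving that variable but leaves model fit and standardized results unchanged, which is why it is a harmless fix for this kind of convergence problem with continuous variables.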
Hello, I am running a model with 4 dimensions of integration and cannot seem to get it to converge. I have latent-observed interactions in the model, so I am using TYPE = RANDOM and INTEGRATION = MONTECARLO. Following the advice in the handbook, I increased the integration points to 1000 and MITERATIONS to 2500 but still get the error below. Does it make sense to increase the iterations further, or is there something else I should consider first? Thank you very much for your time. Jan
THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO A NON-ZERO DERIVATIVE OF THE OBSERVED-DATA LOGLIKELIHOOD.
THE MCONVERGENCE CRITERION OF THE EM ALGORITHM IS NOT FULFILLED. CHECK YOUR STARTING VALUES OR INCREASE THE NUMBER OF MITERATIONS. ESTIMATES CANNOT BE TRUSTED. THE LOGLIKELIHOOD DERIVATIVE FOR PARAMETER 44 IS -0.22400526D-01.
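For context, the settings mentioned in the post above live in the ANALYSIS command. A sketch of the relevant options (the values shown are the poster's, not recommendations):

```
ANALYSIS:
  TYPE = RANDOM;
  ALGORITHM = INTEGRATION;
  INTEGRATION = MONTECARLO(1000);   ! number of Monte Carlo integration points
  MITERATIONS = 2500;               ! iteration limit for the EM algorithm
```

The error refers to the MCONVERGENCE criterion of the EM algorithm; when raising MITERATIONS does not help, the small non-zero derivative for one parameter usually points to that parameter (here, parameter 44) being weakly identified or badly scaled rather than to too few iterations.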
I am having an issue with a double mediation model causing a convergence problem. I get the following error when running the full model:
THE MODEL ESTIMATION TERMINATED NORMALLY
THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING PARAMETER 23.
THE CONDITION NUMBER IS 0.659D-14.
When I run a simplified mediation model without one of the variables, I get no error and decent fit; however, once I add that variable as an additional mediator, I get the error.
The scales of the variables are not very different, the basic descriptives seem okay (all values within the response range; means, SDs, and correlations all make sense), and the range of variances doesn't appear to be too large. I also tried increasing the number of iterations to 10,000, but the error persists. What do you think is the source of the problem, and how would you advise me to proceed?
I am trying to fit a latent variable structural equation model. In the Mplus output I see an error message saying, "NO CONVERGENCE. NUMBER OF ITERATIONS EXCEEDED." Can you please tell me what this error message implies? How can I fix it? Thank you.
I am running a 3-level model using TWOLEVEL COMPLEX, where students are level 1, classrooms level 2, and teachers level 3.
I am trying to confirm that the small number of students, and of students per classroom, in some of my subgroups makes it impossible to do a multi-group analysis.
I am at the phase where I am trying to get my dependent and independent measurement models to converge for my subgroups separately (they work fine for the entire sample), and wanted to double-check that the error messages I'm getting are consistent with having too few students or too few students per classroom.
My error messages include:
THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THIS IS OFTEN DUE TO THE STARTING VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. CHANGE YOUR MODEL AND/OR STARTING VALUES. PROBLEM INVOLVING PARAMETER 15.
THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO AN ILL-CONDITIONED FISHER INFORMATION MATRIX. CHANGE YOUR MODEL AND/OR STARTING VALUES.
I have only approximately 100 to 153 students in each of three groups of 17-25 classrooms. And, as you might expect, mean class size is pretty small.
Could these error messages be due to the small sample sizes and/or the small number of students per classroom?
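For other readers, a setup like the one described above combines a two-level model with sandwich-type standard errors for the third level. A sketch of the relevant commands (variable names hypothetical; if I remember the option correctly, the cluster variable for COMPLEX, here teachers, is listed before the one for TWOLEVEL):

```
VARIABLE:
  CLUSTER = teacher classroom;   ! higher-level unit first
ANALYSIS:
  TYPE = COMPLEX TWOLEVEL;
```

With only 17-25 classrooms per group, the number of level-2 units is small, so ill-conditioned information matrices and non-computable standard errors in the subgroup runs are plausible consequences of sample size rather than of the model specification alone.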
Cecily Na posted on Tuesday, July 17, 2012 - 10:04 am
Hello Professors, I have done a CFA model with more than 10 factors. The fit is not great, but it is acceptable. I then added the structural part of the model, i.e., some causal links. The program complained about nonconvergence. I tried rescaling the continuous variables so their variances fall between 1 and 10, but that didn't work. Can you help with this? Thanks a lot!
lopisok posted on Monday, April 08, 2013 - 4:12 am
I'm trying to estimate a full causal SEM model. It converges easily when I use ML as the estimator, but when I use MLM it doesn't converge (maximum number of iterations ...). Is there a reason why computation time increases so dramatically with MLM, and why it suddenly doesn't converge? I tried increasing the number of iterations, and I tried fixing the variance of items that show large estimates at 1, but that doesn't resolve the problem. I read some suggestions in the manual about convergence problems, such as changing the starting values, but I'm not sure what I'm supposed to change them to.
I'm running an SEM with 10 latent variables and several single indicators, and I can't seem to get it to converge despite changing the settings for the number of iterations and the convergence criterion. Any suggestions?
Two other questions:
1) If I want to make sure the endogenous variables covary, do I have to specify each relationship in the syntax, or is this done by default in the program?
2) If I want to allow the residuals (psi) of the endogenous variables to covary, is this just a matter of specifying one variable WITH another?
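On question 2, the WITH statement is indeed how residual covariances are freed. A sketch with hypothetical factor names:

```
MODEL:
  f1 ON x1 x2;
  f2 ON x1 x2;
  f1 WITH f2;   ! frees the residual (psi) covariance between the two endogenous factors
```

Note that covariances among exogenous variables are handled by default, but residual covariances among endogenous variables generally are not, so each one you want in the model needs its own WITH statement.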
I posted some time ago that I was trying to estimate a full causal SEM model. It converges easily when I use ML as the estimator, but when I use MLM it doesn't converge (maximum number of iterations ...). I asked whether there was a reason why computation time increases so dramatically with MLM, and why it doesn't converge.
You stated that this is probably related to: "MLM using listwise deletion and ML using a FIML approach to missing data. This makes the data different for the two analyses."
Is there a way to make the model converge using MLM? And is it normal that the computation times are so dramatically different? I tried increasing the number of iterations, and I tried fixing the variance of items that show large estimates at 1, but that doesn't resolve the problem. I read some suggestions in the manual about convergence problems through changing the starting values, but I'm not sure what I'm supposed to change them to.
I freed the first indicator of the factor that created the problem and fixed that factor's variance to one. This worked. Thank you very much! Do you know of any sources with more background on why this works when the original specification would not converge? Why does fixing the factor variance at 1 and freeing the first indicator make a difference in the computation?
The first factor loading is probably being estimated at a value far from the value of one it was being fixed at. Freeing it allows you to see this. If you want to set the metric by fixing a factor loading to one, choose a factor indicator whose loading is estimated close to one.
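The two ways of setting the factor metric discussed in this exchange can be written as follows (indicator names hypothetical):

```
! Metric set by the first loading (the Mplus default: first loading fixed at 1)
f BY y1 y2 y3;

! Metric set by the factor variance instead
f BY y1* y2 y3;   ! the asterisk frees the first loading
f@1;              ! fix the factor variance at 1
```

The two parameterizations are statistically equivalent for an identified model, but as the answer above explains, they can differ numerically: forcing a loading to 1 when its natural value is far from 1 distorts the scale of the factor and can stall the iterations.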