Anonymous posted on Thursday, September 30, 2004 - 5:51 am
I have a question concerning starting values: which starting values should be used? Is it possible just to try out different starting values like 0.5, 1.0, or -1.0 and see which ones generate the best CFI, TLI, etc.? Could you explain what changes when I use different starting values? Please excuse the very trivial question, but I'm new to Mplus.
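For reference, starting values in Mplus are supplied with an asterisk after a parameter; a minimal hypothetical sketch (factor and indicator names are placeholders):

```
MODEL:
  ! the first loading is fixed at 1 by default and sets the metric;
  ! *value gives a free parameter a user-specified starting value
  f1 BY y1 y2*0.8 y3*0.8;
  f2 BY y4 y5*0.8 y6*0.8;
  f1 WITH f2*0.3;   ! starting value for the factor covariance
```

If the model converges to the same solution, different starting values leave the estimates and fit statistics (CFI, TLI, etc.) unchanged; they only change where iteration begins, so choosing starting values to maximize fit is not meaningful.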
I am doing a path analysis. The IVs are 3 observed continuous variables. The DVs are 3 latent continuous variables, defined as factor 1 BY a b c; factor 2 BY d e f;
The output told me that the model did not converge and that the problem was found in the covariance of the two latent variables. I found that in the variance-covariance matrix, the pattern of covariances looked like this:
I am also getting a "NO CONVERGENCE. NUMBER OF ITERATIONS EXCEEDED." message. I have increased the number of iterations to 10000 and am unsure how to determine which starting values to assign to which variables. Please advise.
Also, should I be concerned that the estimates for f2 are substantially higher than the estimates for f1?
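For reference, the iteration limit is raised in the ANALYSIS command; a minimal sketch:

```
ANALYSIS:
  ITERATIONS = 10000;   ! raise the maximum number of iterations
                        ! for the ML optimizer
```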
Jungeun Lee posted on Monday, December 29, 2008 - 2:11 pm
I am estimating a CFA and am getting a 'NO CONVERGENCE. NUMBER OF ITERATIONS EXCEEDED.' warning. I increased the number of iterations to 10000 but still got the same message... Any advice will be deeply appreciated. Thanks!
I am also having this problem. I got the message 'NO CONVERGENCE. NUMBER OF ITERATIONS EXCEEDED' even after I increased the number of iterations to 10000. I know the range of my sample variances is too large, but I don't know how to rescale the variables. Thank you.
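When the sample variances span too wide a range, one common remedy is to rescale the large-variance variables in the DEFINE command so that all variances fall roughly between 1 and 10; a hypothetical sketch (income is a placeholder name):

```
DEFINE:
  income = income/1000;   ! divide a large-variance variable by a
                          ! constant to bring it onto a scale
                          ! comparable to the other variables
```

Dividing by a constant is a linear rescaling, so it changes the raw estimates involving that variable but not the standardized solution or model fit.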
I have a sample of performance measures and I use "age" as the time variable. When I specify ESTIMATOR = ML, the model is estimable for ages 20-38. If I use ESTIMATOR = MLR, the model only converges for ages 20-33. The covariance coverage decreases as I increase the number of "waves" (ages) in the analysis due to drop-outs. What are the minimum coverage criteria for MLR (as opposed to ML)?
ML and MLR should behave the same. For further comments, send the relevant outputs and your license number to firstname.lastname@example.org.
Anna Potocki posted on Thursday, September 22, 2011 - 2:24 am
Hi, I am also getting this message: "NO CONVERGENCE. NUMBER OF ITERATIONS EXCEEDED. FACTOR SCORES WILL NOT BE COMPUTED DUE TO NONCONVERGENCE OR NONIDENTIFIED MODEL." I am wondering if there is any way to avoid this message and run the model, or if the problem lies in the model itself. I'm sorry, I'm really new to Mplus. Could you help me? Thank you!
No, this applies to continuous variables. You should not rescale categorical variables.
Cory Dennis posted on Thursday, November 17, 2011 - 11:04 am
So I get a "NO CONVERGENCE. NUMBER OF ITERATIONS EXCEEDED" message when running my model. The problem appears to be between one of the latent variables with continuous indicators and one of the latent variables with ordinal indicators (using ULSMV). When I rescale the continuous indicators, my model converges.
On the other hand, I am able to get results with multiply imputed data sets using the original scale (with a warning on two of the data sets regarding theta). Should I rescale the continuous variables?
Hello, I am running a model with 4 dimensions of integration and cannot seem to get it to converge. I have latent-observed interactions in the model, so I am using Type=Random and Integration=Montecarlo. Following the advice in the handbook I increased integration points to 1000 and MIterations to 2500 but still get the error below. Does it make sense to increase iterations further or is there something else I should consider first? Thank you very much for your time. Jan
THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO A NON-ZERO DERIVATIVE OF THE OBSERVED-DATA LOGLIKELIHOOD.
THE MCONVERGENCE CRITERION OF THE EM ALGORITHM IS NOT FULFILLED. CHECK YOUR STARTING VALUES OR INCREASE THE NUMBER OF MITERATIONS. ESTIMATES CANNOT BE TRUSTED. THE LOGLIKELIHOOD DERIVATIVE FOR PARAMETER 44 IS -0.22400526D-01.
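For context, a typical setup for a latent variable interaction with Monte Carlo integration looks roughly like this (names and limits are illustrative only, not recommendations):

```
ANALYSIS:
  TYPE = RANDOM;
  ALGORITHM = INTEGRATION;
  INTEGRATION = MONTECARLO(1000);   ! number of integration points
  MITERATIONS = 2500;               ! maximum EM iterations

MODEL:
  f1 BY y1-y3;
  f2 BY y4-y6;
  inter | f1 XWITH f2;   ! latent variable interaction
  y7 ON f1 f2 inter;
```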
I am having an issue with a double mediation model causing a convergence problem. I get the following error when running the full model :
THE MODEL ESTIMATION TERMINATED NORMALLY
THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THE MODEL MAY NOT BE IDENTIFIED. CHECK YOUR MODEL. PROBLEM INVOLVING PARAMETER 23.
THE CONDITION NUMBER IS 0.659D-14.
When I run a simplified mediation model without one of the variables I get no error and decent fit, however, once I add that variable as an additional mediator, I get the error.
The scales of the variables are not very different, the basic descriptives look fine (all values within the response range; means, SDs, and correlations all make sense), and the range of variances doesn't appear to be too large. I also tried increasing the number of iterations to 10000, but the error persists. What do you think is the source of the problem, and how would you advise me to proceed?
I am trying to fit a latent variable structural equation model. In the Mplus output I see an error message saying, "NO CONVERGENCE. NUMBER OF ITERATIONS EXCEEDED." Can you please tell me what this error message implies and how I can fix it? Thank you.
I am running a 3-level model using TYPE = TWOLEVEL COMPLEX, where students are level 1, classrooms level 2, and teachers level 3.
I am trying to confirm that the small number of students and students/classroom in some of my subgroups makes it impossible to do a multi-group analysis.
I am at the phase where I am trying to get my dependent and independent measurement models to converge on my subgroups separately (they work fine for the entire sample) and wanted to double check that the error messages I'm getting are consistent with having too few students or too few students/classroom.
My error messages include:
THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES COULD NOT BE COMPUTED. THIS IS OFTEN DUE TO THE STARTING VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. CHANGE YOUR MODEL AND/OR STARTING VALUES. PROBLEM INVOLVING PARAMETER 15.
THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO AN ILL-CONDITIONED FISHER INFORMATION MATRIX. CHANGE YOUR MODEL AND/OR STARTING VALUES.
I only have approximately 100 to 153 students in each of three groups of 17-25 classrooms. And, as you might expect, mean class size is pretty small.
Could these error messages be due to small sample sizes and/or the small number of students/classroom?
Cecily Na posted on Tuesday, July 17, 2012 - 10:04 am
Hello Professors, I have fit a CFA model with more than 10 factors. The fit is not great, but acceptable. I then added the structural part of the model, i.e., some causal links. The program complained about no convergence. I tried reducing the variances of the continuous variables to the range between 1 and 10, but that didn't work. Can you help with this? Thanks a lot!
lopisok posted on Monday, April 08, 2013 - 4:12 am
I'm trying to fit a full causal SEM model. It converges easily when I use ML as the estimator, but with MLM it doesn't converge (maximum number of iterations ...). Is there a reason why computation time increases so dramatically with MLM, and why it suddenly doesn't converge? I tried increasing the number of iterations and fixing the variances of items that show large estimates at 1, but that doesn't resolve the problem. The manual suggests changing the starting values for convergence problems, but I'm not sure what I'm supposed to change them to.
I'm running a SEM with 10 latent variables and several single indicators and I can't seem to get it to converge despite changing the setting for the number of iterations and convergence. Any suggestions?
2 other questions:
1) If I want to make sure the endogenous variables covary, do I have to specify each relationship in the syntax, or is this done by default?
2) If I want to allow the error terms (psi) of the endogenous variables to covary, is this just a matter of specifying one variable WITH another?
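On question 2, a WITH statement between two endogenous variables specifies a covariance between their residuals (the psi elements); a minimal sketch with placeholder names:

```
MODEL:
  f1 ON x1 x2;
  f2 ON x1 x2;
  f1 WITH f2;   ! because f1 and f2 are both dependent variables,
                ! this is a residual (psi) covariance
```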
I posted some time ago that I was trying to fit a full causal SEM model. It converges easily when I use ML as the estimator, but with MLM it doesn't converge (maximum number of iterations ...). I asked whether there is a reason why computation time increases so dramatically with MLM and why it doesn't converge.
You stated that this is probably related to: "MLM using listwise deletion and ML using a FIML approach to missing data. This makes the data different for the two analyses."
Is there a way to make the model converge using MLM? And is it normal that the computation times are so dramatically different? I tried increasing the number of iterations and fixing the variances of items that show large estimates at 1, but that doesn't resolve the problem. The manual suggests changing the starting values for convergence problems, but I'm not sure what I'm supposed to change them to.
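One way to check whether the different missing-data handling is the culprit is to force ML onto the same listwise-deleted data that MLM uses; a sketch (the file name is a placeholder):

```
DATA:
  FILE = mydata.dat;   ! placeholder file name
  LISTWISE = ON;       ! listwise deletion, so ML and MLM
                       ! analyze the same set of cases
```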
I freed the first factor indicator of the factor that created the problem and fixed that factor's variance to one. This worked. Thank you very much! Do you know any sources with more background on why this works when the original parameterization would not converge? Why do fixing the factor variance to 1 and freeing the first indicator make a difference in the computation?
The first factor loading is probably being estimated at a value that is not close to the value of one it was being fixed at. Freeing it allows you to see this. If you want to set the metric by fixing a factor loading to one, choose a factor indicator whose loading is estimated close to one.
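In syntax, the two ways of setting the metric look like this (indicator names are placeholders):

```
! Default: metric set by fixing the first loading at 1
f BY y1 y2 y3;

! Alternative: free the first loading with * and fix the
! factor variance at 1 instead
f BY y1* y2 y3;
f@1;
```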
Elina Dale posted on Wednesday, September 04, 2013 - 2:18 am
Dear Dr. Muthen,
As others who posted on this board, I got the message "Number of iterations exceeded."
In the Mplus User's Guide, I see that convergence problems often occur when the range of sample variances exceeds 1 to 10, which happens with combinations of continuous and categorical outcomes. I believe this is my case.
I have tried, as advised in the Guide, increasing the number of iterations (STITERATIONS = 100), but it doesn't improve the situation and I get the same message as before.
Could you please advise on what can be done as a next step?
The Guide also advises using the preliminary parameter estimates as starting values, but I do not know how to obtain them or use them. If you think that could help, could you please point me to the correct commands?
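For the starting values step, the SVALUES option of the OUTPUT command prints the MODEL statements with the obtained estimates written in as starting values, which can be pasted back into the input file:

```
OUTPUT:
  SVALUES;   ! print MODEL statements with estimates as
             ! starting values
```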
RuoShui posted on Tuesday, November 26, 2013 - 5:32 pm
Dear Dr. Muthen,
I am having a convergence problem (iterations exceeded). However, the exact same model that I ran with the same version of Mplus two weeks ago had no convergence problem. I do not understand how this happens. Would you please give me a hint? I am sorry for the trivial question.
Sarah Lowe posted on Saturday, March 15, 2014 - 10:07 am
Hi Drs. Muthen,
I am running a three-wave, three-variable cross-lagged model that runs fine when all of the variables are included as continuous. However, two of the variables are counts of different types of events (x 3 waves = 6 count indicators total), and our reviewers would like to model them as such. Each count variable is modeled as a single indicator onto a latent variable with mean set at 0.0 and variance set at 1.0.
I have tried the following to facilitate convergence:
- using Monte Carlo integration (and altering the number of integration points)
- increasing the number of iterations (to 10,000)
- changing start values for parameters that have strange final values
- changing start values for all parameters
- various combinations of the above
Most recent error message:
THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO AN ERROR IN THE COMPUTATION. CHANGE YOUR MODEL AND/OR STARTING VALUES.
In our bivariate models [each including one type of event and the symptom inventory], the pattern of results was consistent whether we modeled events as counts or continuous variables, so I was thinking of asking the editor if we could use continuous variables to facilitate convergence. However, I figured I would see if you have any additional suggestions/insights before I do that.
I would not put a factor behind the count variables. Use them as observed count variables. I believe this could be part of the problem.
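A sketch of the observed-count setup (variable names are placeholders):

```
VARIABLE:
  COUNT = c1-c3;   ! declare the event variables as counts
                   ! (Poisson by default)

MODEL:
  c3 ON c2 y2;     ! cross-lagged paths with the observed counts
                   ! used directly, no factors behind them
```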
Sarah Lowe posted on Sunday, March 16, 2014 - 7:27 pm
Hi Linda, Thanks for your response. The reason why we put factors behind the count variables was because we wanted to include within-wave covariances. When we ran the model with just counts, we got this error:
*** ERROR in MODEL command
Covariances for count variables with latent variables are not allowed on the within level.
Problem with the statement: AVOID3 WITH COUNT3
Does that make sense?
All of the other models we have run with count variables modeled as latent variables have worked... I think there is something about including twice the number of count variables that is leading to convergence problems. Does this sound correct to you? Any advice on how to proceed?
There are no negative variances/residual variances in the unstandardized/standardized estimates. All the regression coefficients are in their expected directions, and the interaction term is significant.
However, the variance for one variable (the interaction term) is too large to display (printed as ****) in the output, but its z-value is 93.908*** in the unstandardized solution, and its variance is 1 in the standardized solution.
Similarly, in the CONFIDENCE INTERVALS OF MODEL RESULTS, only the lower .5% variance is too large to display (printed as ****). The rest of the confidence interval values are available.
Under these circumstances, is it OK to trust the model results, given that the output provides all the model fit statistics and parameter estimates?
In continuation of my question above about increasing iterations due to non-convergence in the moderated mediation of continuous variables: I have tried redefining the interaction term by dividing it by 100, and got the same model fit as I get when I increase the number of iterations.
But by redefining the interaction term that gave a large variance, I now get a value for the variance as well.
Just wondering which way I should proceed and which is more correct?
Is it OK to redefine the interaction term when it is of the most importance and significance in the model?
Thanks for your input in advance and sorry for posting simultaneously.
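For reference, rescaling an observed-variable interaction can be done directly where the product term is created in DEFINE (names are placeholders):

```
DEFINE:
  inter = (x*m)/100;   ! product term divided by 100 to keep its
                       ! variance on a scale comparable to the
                       ! other variables
```

As a linear rescaling, dividing by 100 multiplies the raw interaction coefficient by 100 but leaves the standardized solution and model fit unchanged, which is consistent with the identical fit described above.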
I am running sample statistics for several continuous variables. One of the variables had a large variance and was not on the same scale as the others. I standardized all variables and tried to run the sample stats again. I am still receiving an error regarding convergence. I tried the tips from the Mplus manual regarding convergence issues: I increased the number of iterations and used the preliminary parameter estimates as starting values. Could you please give me some guidance on where to go next? Thanks so much.
I also receive the above NO CONVERGENCE message when running a SEM with 2 latent variables and 3 independent variables (x, y, z). The model runs fine with certain combinations of covariances between the independent variables (e.g., x with z, y with z) but not with the final combination (x with y). These two variables are quite highly correlated; could this explain the problem?
I'm trying to run a SEM with 1 independent variable, 6 dependent variables and 4 continuous latent variables, however I can't seem to get it to converge (WARNING: NO CONVERGENCE. NUMBER OF ITERATIONS EXCEEDED.). I've tried changing the estimator type, increasing the number of iterations, decreasing the convergence criteria, and increasing the number of random starts. I'm not too sure what else I should try (if in fact I have run everything correctly). Can you make any suggestions?
I am trying to run a transition model with two time points similar to an LTA except that the individual units are Factor Mixture Models (FMM) instead of Latent Class Analyses. The FMMs at each time point run without issues, and 4 classes for each FMM are ideal based on model fit statistics and interpretability.
Unfortunately, when I combine the FMMs into a transition model using the syntax "c2 on c1" and constrain the factor loadings at both time points to be the same (factor loadings at both time points are roughly the same), the model does not run and results in a non-significant negative factor variance. If I fix that variance to zero, the model runs but the log-likelihood (LL) values do not replicate. I'm using a sample of size N=10,000 and the transition model has 271 parameters.
I noticed from the TECH8 output, that one of the final stage optimizations appears to converge to a stable solution before the absolute change increases from ~5 to >4000 at the final iteration and the algorithm changes from EM to QN. The class counts also change drastically at the last iteration. I was wondering if you have any thoughts on how I can get the LL to replicate for my latent transition FMM, and if the instability of the final stage optimization could direct me towards the issue. Thanks for your help.
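On the loglikelihood replication, the usual first step is to increase the number of random starts and final-stage optimizations in the ANALYSIS command (the numbers below are illustrative, not recommendations):

```
ANALYSIS:
  STARTS = 800 200;    ! 800 initial-stage random starts,
                       ! 200 carried to final-stage optimization
  STITERATIONS = 50;   ! more initial-stage iterations per start
```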