

In preparation for a much larger simulation study focused on power for multilevel mediation models, some colleagues and I ran a Monte Carlo simulation study with only 50 replications. (This is obviously far too low for the actual study, but we were testing.) We were surprised when the simulation gave us a power estimate for the parameter of interest based on 28 completed replications out of 50, yet listed 50 error messages, one for every replication. This made us wonder: why were 28 replications considered successful and included in the summary of results for the power analysis even though all 50 replications had an error message of some kind? Thank you very much in advance for any thoughts on this! 


Dear Profs. Muthén, Some colleagues and I are interested in running a very large number of multilevel Monte Carlo simulations for a power analysis, and we're trying to figure out how to reduce the computational time the simulations take. One approach we considered was restricting the number of iterations for one (or more) of the estimation processes so that models that aren't going to converge fail quickly. For example, we considered limiting the number of iterations with the options "ITERATIONS", "SDITERATIONS", "H1ITERATIONS", and/or "MITERATIONS". In some preliminary testing we found that restricting "ITERATIONS" halves the computational time with almost no loss of completed replications; however, the estimate of power is slightly different (.148 vs. .179) with a very small number of replications (i.e., 50 reps). We have two questions/concerns: 1. Will restricting these iteration options substantively influence the results of the power simulation (once we're working with large numbers of reps)? Could you point us toward a good reference for better understanding this? 2. Which of these processes (if any) would be best to restrict if we're hoping to improve our processing efficiency without changing the results of the power simulation? 
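For reference, the kind of restriction we tested looks roughly like the ANALYSIS block below. This is only a sketch: the cutoff values are the illustrative ones we tried, not recommendations, and the TYPE setting will depend on the model.

```
ANALYSIS:
  TYPE = TWOLEVEL;       ! multilevel model
  ITERATIONS = 100;      ! cap the main optimization iterations
  SDITERATIONS = 10;     ! cap the steepest-descent iterations
  H1ITERATIONS = 500;    ! cap iterations for the unrestricted H1 model
  MITERATIONS = 200;     ! cap the EM M-step iterations
```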


On the first question above, regarding the 50 error messages: Mplus will produce a warning in some replications and error messages in other replications. The warning messages are just that: in principle there is nothing wrong with the analysis, so the computation finishes completely, but external procedures and checks have failed. These generally do not imperil the model estimation and results. You really have to read the content of the messages to determine the situation. In your case, most likely 22 replications did not converge, and you will find 22 nonconvergence messages. In the other 28 replications the model converged fine, but most likely you have more model parameters than clusters. This causes our most reliable model identifiability check (the MLF check) to fail; it basically cannot determine whether the model is not identified or the sample size is too small for the check to work. If you know that the model is identified in principle (regardless of the amount of data), then you should just ignore the MLF warning. 

1. What happens here is that you are dealing with a small number of replications that are on the verge of being problematic: they are not clearly convergent and not clearly nonconvergent. If the number of such replications is small enough, the power shouldn't really change much. In general, I don't have a clear answer on which way you should treat these semi-convergent replications. We have had satisfactory success treating them as convergent, and in some cases, especially Monte Carlo studies, we have had better results treating them as nonconvergent. These semi-convergent replications will most likely become nonconvergent if the convergence criteria are very strict, so that the optimization procedure runs to the end, or if the maximum allowed number of iterations is small. These replications are also highly sensitive to the convergence criteria. If the convergence criteria are not very strict, the reported model is essentially an approximation, which in many cases can be considered good enough for inference (the most typical case involves very high correlations between random effects: strict convergence criteria yield a correlation of 1 and nonconvergence, while less strict criteria yield a correlation of 0.99 and a converging model). 

2. The two main driving options are MITERATIONS and MCONVERGENCE. In some rare cases LOGCRITERION can also be helpful. Note also that if you are experiencing a large rate of convergence problems, you can consider using the Bayes estimator, which deals better with models that have a large number of random effects and high correlations between the random effects. This method is particularly powerful in small-sample situations, especially when paired with weakly informative priors. 
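As an illustration of the Bayes alternative mentioned above, a minimal sketch is shown below. The BITERATIONS value and the parameter label s1 are assumptions for the example: the label must first be attached to a parameter in the MODEL command before a prior can be placed on it.

```
ANALYSIS:
  ESTIMATOR = BAYES;
  BITERATIONS = (5000);   ! minimum number of MCMC iterations
MODEL PRIORS:
  s1 ~ N(0, 5);           ! weakly informative prior on a labeled parameter
```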


Hello, Thank you for these replies; they are very helpful. As a follow-up, we were wondering how we might tell the difference between an "error message" and a "warning." The simulation output seems to label all messages as "errors" when it says: "TECHNICAL 9 OUTPUT  Error messages for each replication (if any)". However, some of the messages are subsequently labeled warnings: "REPLICATION 2: WARNING: THE MODEL ESTIMATION..." Is it safe to assume that all "warnings" are labeled as such in the remaining 48 replication messages? 


One more follow-up in regard to the MLF warning that you mentioned: Is this message the MLF warning? "THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE FIRST-ORDER DERIVATIVE PRODUCT MATRIX. THIS MAY BE DUE TO THE STARTING VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. THE CONDITION NUMBER IS 0.602D-17. PROBLEM INVOLVING THE FOLLOWING PARAMETER: Parameter 16, %BETWEEN LEVEL3%: Y3 (equality/label). THE NONIDENTIFICATION IS MOST LIKELY DUE TO HAVING MORE PARAMETERS THAN THE NUMBER OF LEVEL 3 CLUSTERS. REDUCE THE NUMBER OF PARAMETERS." Thank you kindly for your assistance with this! 


The heading "Error messages for each replication (if any)" also refers to warnings. Replications with errors are not included in the summaries, but replications with warnings are. That MLF warning is specific to two-level modeling. There is also an MLF warning about binary x's brought into the model, which can be ignored. 
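Since TECH9 files all of these messages under the same "Error messages" heading, distinguishing warnings from errors means inspecting each message's leading keyword. A minimal Python sketch of that bookkeeping is below; the TECH9 excerpt and the abbreviated message texts are made up for illustration.

```python
import re

# Made-up excerpt of a TECH9 section with abbreviated messages.
TECH9 = """REPLICATION 2:
WARNING:  THE MODEL ESTIMATION HAS REACHED A SADDLE POINT.
REPLICATION 5:
THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY.
REPLICATION 9:
WARNING:  THE STANDARD ERRORS MAY NOT BE TRUSTWORTHY.
"""

def classify_tech9(text):
    """Split TECH9 message blocks into warnings vs. errors.

    A block counts as a warning when its message starts with 'WARNING';
    anything else is treated as an error, i.e., a replication that is
    excluded from the result summaries.
    """
    warnings, errors = [], []
    # re.split with a capture group keeps the replication numbers:
    # ['', '2', msg2, '5', msg5, ...] -> drop the leading '' and pair up.
    blocks = re.split(r"REPLICATION (\d+):", text)[1:]
    for rep, msg in zip(blocks[0::2], blocks[1::2]):
        (warnings if msg.lstrip().startswith("WARNING") else errors).append(int(rep))
    return warnings, errors

warnings, errors = classify_tech9(TECH9)
print(warnings, errors)  # warnings: [2, 9], errors: [5]
```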

ZHIYAO YI posted on Tuesday, January 28, 2020  8:52 pm



Dear Drs. Muthén and Asparouhov, I am doing a simulation study with DSEM models. I generated and saved data, then analyzed them separately. However, many datasets did not result in a completed replication (e.g., only 474 out of 500 completed). I read that this problem also occurred in a previous study (Mårten Schultzberg & Bengt Muthén, 2018). Therefore, the summary of the results is biased. Is there Mplus code, or another way, to summarize only the results of the completed replications (excluding the incomplete ones)? If not, could you recommend a way to solve this issue? Thank you! 


If you are using Mplus external Monte Carlo, the replications that did not converge are already excluded from the results section. You could possibly use weakly informative priors for the model parameters to try to improve the convergence rate, or use more iterations. In most cases, convergence problems and bias occur because the data and the model don't match: the model is too ambitious for the data, or the data is too small for the model. 
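A sketch of such an external Monte Carlo analysis run is below. The file name replist.dat is a hypothetical list file produced by the data-generation step, and the BITERATIONS value is only an example of raising the minimum number of iterations.

```
DATA:
  FILE = replist.dat;     ! list of the saved replication data sets
  TYPE = MONTECARLO;      ! summarize over replications; nonconverged ones excluded
ANALYSIS:
  ESTIMATOR = BAYES;
  BITERATIONS = (10000);  ! raise the minimum number of MCMC iterations
```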

ZHIYAO YI posted on Thursday, January 30, 2020  10:49 am



Thank you, Dr. Asparouhov. I already set BITERATIONS = (5000). If there are error messages in the TECH9 output like this: "Errors for replication with data file C:\Desktop\50\25\model2.rep1.dat: THE MODEL ESTIMATION TERMINATED NORMALLY. USE THE FBITERATIONS OPTION TO INCREASE THE NUMBER OF ITERATIONS BY A FACTOR OF AT LEAST TWO TO CHECK CONVERGENCE AND THAT THE PSR VALUE DOES NOT INCREASE." does it mean there is something wrong with my code? 


That message indicates convergence. You should be able to find 474 of these messages in TECH9 and 26 messages of a different kind, which might be useful in figuring out the reasons for the convergence problems, although those messages are cryptic and not very informative. In most cases, convergence problems and bias occur because the data and the model don't match: the model is too ambitious for the data, or the data is too small for the model. 

ZHIYAO YI posted on Thursday, January 30, 2020  3:30 pm



Thank you so much for the clarification. 
