Monte Carlo power simulation question
 Nate Williams posted on Thursday, January 04, 2018 - 9:49 am
In preparation for a much larger simulation study focused on power for multilevel mediation models, some colleagues and I ran a Monte Carlo simulation study with only 50 reps. (This is obviously far too low for the actual study; we were just testing.) We were surprised when the simulation gave us an estimate of power for the parameter of interest based on 28 completed replications out of 50, yet listed 50 error messages, one for every replication. This made us wonder: why were 28 replications considered successful and included in the summary of results for the power analysis even though all 50 replications had an error message of some kind? Thank you very much in advance for any thoughts on this!
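For reference, here is a stripped-down sketch of the kind of setup we are testing, assuming a simple two-level mediation; all variable names, sample sizes, and population values below are placeholders, not our actual model:

MONTECARLO:
  NAMES = y m x;           ! placeholder variable names
  NOBSERVATIONS = 1000;    ! 50 clusters x 20 = 1000
  NCSIZES = 1;             ! one cluster-size pattern
  CSIZES = 50 (20);        ! placeholder: 50 clusters of size 20
  NREPS = 50;              ! deliberately low, for testing only
  SEED = 12345;
  WITHIN = x;              ! x is a within-level variable

ANALYSIS:
  TYPE = TWOLEVEL;

MODEL POPULATION:
  %WITHIN%
  x*1;                     ! variance of x
  m ON x*.3;               ! placeholder a path
  y ON m*.3 x*.1;          ! placeholder b and c' paths
  m*.8; y*.8;              ! within-level residual variances
  %BETWEEN%
  m*.2; y*.2;              ! between-level intercept variances

MODEL:
  %WITHIN%
  m ON x*.3;
  y ON m*.3 x*.1;
  m*.8; y*.8;
  %BETWEEN%
  m*.2; y*.2;

OUTPUT:
  TECH9;                   ! per-replication error/warning messages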
 Nate Williams posted on Thursday, January 04, 2018 - 10:13 am
Dear Profs. Muthén,
Some colleagues and I are interested in running a very large number of multilevel Monte Carlo simulations for a power analysis, and we're trying to figure out how to reduce the computational time the simulations take. One approach we considered was restricting the number of iterations for one or more of the estimation processes so that models that aren't going to converge fail quickly. For example, we considered limiting the ITERATIONS, SDITERATIONS, H1ITERATIONS, and/or MITERATIONS options, along the lines of the sketch below.
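Concretely, the sort of thing we have been testing looks like the following; the caps shown are arbitrary test values we made up, not recommendations:

ANALYSIS:
  TYPE = TWOLEVEL;
  ITERATIONS = 100;     ! cap on the main optimization iterations
  SDITERATIONS = 10;    ! cap on steepest-descent iterations
  H1ITERATIONS = 500;   ! cap for the unrestricted (H1) model
  MITERATIONS = 200;    ! cap on EM iterations

The idea is that replications headed for non-convergence hit a cap and fail fast instead of consuming computation time.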

We've done some preliminary testing and found that restricting ITERATIONS halves the computational time with almost no loss of completed replications; however, the estimate of power is slightly different (.148 vs. .179) with a very small number of replications (i.e., 50 reps).

We have two questions/concerns:

1. Will restricting these iteration options substantively influence the results of the power simulation (once we're working with large numbers of reps)? Could you point us toward a good reference for better understanding this?

2. Which of these processes (if any) would be best to restrict if we’re hoping to improve our processing efficiency without changing the results of the power simulation?
 Tihomir Asparouhov posted on Thursday, January 04, 2018 - 5:12 pm
On the first question above regarding the 50 error messages: Mplus will produce a warning in some replications and error messages in other replications. The warning messages are just that - in principle there is nothing wrong with the analysis, so the computation finishes completely, but some external procedures and checks have failed. These generally do not imperil the model estimation and results. You have to read the content of the messages to determine the situation. In your case, most likely, 22 replications did not converge and you will get 22 non-convergence messages. In the other 28 replications the model converged fine, but most likely you have more model parameters than clusters. This causes our most reliable model identifiability check (the MLF check) to fail: it basically cannot determine whether the model is not identified or the sample size is too small for the check to work. If you know that the model is identified in principle (regardless of the amount of data), then you should just ignore the MLF warning.

1. What happens here is that you are dealing with a small number of replications that are on the verge of being problematic: they are not clearly convergent and not clearly non-convergent. If the number of such replications is small enough, then the power shouldn't really change much. In general, I don't have a clear answer on which way you want to treat these semi-convergent replications. We have had satisfactory success treating them as convergent, and in some cases, especially Monte Carlo studies, we have had better results treating them as non-convergent. These semi-convergent replications will most likely become non-convergent if the convergence criteria are very strict, so that the optimization procedure runs to the end, or if the maximum allowed number of iterations is small. These replications are also highly sensitive to the convergence criteria. If the convergence criteria are not very strict, the reported model is essentially an approximation, which in many cases can be considered good enough for inference (the most typical case involves very high correlations between random effects: strict convergence criteria yield a correlation of 1 and non-convergence, while less strict criteria yield a correlation of 0.99 and a converging model).

2. The two main driving options are MITERATIONS (miter) and MCONVERGENCE (mconv). In some rare cases LOGCRITERION can also be helpful.
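For example, something along these lines; the values are purely illustrative:

ANALYSIS:
  MITERATIONS = 200;        ! miter: cap on EM iterations
  MCONVERGENCE = 0.0001;    ! mconv: convergence criterion
                            ! (larger = less strict)
  LOGCRITERION = 0.001;     ! log-likelihood change criterion

Loosening MCONVERGENCE trades strictness for speed, along the lines of the correlation 1 vs. 0.99 example above.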

Note also that if you are experiencing a large rate of convergence problems, you can consider using the Bayes estimator, which deals better with models that have a large number of random effects and high correlations between the random effects. This method is particularly powerful in small sample size situations, especially when paired with weakly informative priors.
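A minimal sketch of that approach; the parameter label b1 and the prior values are hypothetical and would need to be adapted to the actual model:

ANALYSIS:
  ESTIMATOR = BAYES;
  BITERATIONS = (5000);   ! run at least 5000 MCMC iterations,
                          ! stopping on the PSR criterion

MODEL:
  %WITHIN%
  y ON x (b1);            ! hypothetical labeled slope

MODEL PRIORS:
  b1 ~ N(0, 5);           ! weakly informative normal prior
                          ! (mean 0, variance 5)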
 Nate Williams posted on Friday, January 19, 2018 - 1:45 pm
Hello,
Thank you for these replies, they are very helpful.

As a follow-up, we were wondering how we might tell the difference between an "error message" and a "warning."

The simulation output seems to label all messages as "errors" when it says:

"TECHNICAL 9 OUTPUT

Error messages for each replication (if any)"


However, some of the messages are subsequently labeled warnings:


"REPLICATION 2:
WARNING: THE MODEL ESTIMATION..."


Is it safe to assume that all "warnings" are labeled as such in the ensuing 48 replication messages?
 Nate Williams posted on Friday, January 19, 2018 - 1:48 pm
One more follow-up regarding the MLF warning that you mentioned: is the following message the MLF warning?

THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE
TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE
FIRST-ORDER DERIVATIVE PRODUCT MATRIX. THIS MAY BE DUE TO THE STARTING
VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. THE
CONDITION NUMBER IS 0.602D-17. PROBLEM INVOLVING THE FOLLOWING PARAMETER:
Parameter 16, %BETWEEN LEVEL3%: Y3 (equality/label)

THE NONIDENTIFICATION IS MOST LIKELY DUE TO HAVING MORE PARAMETERS THAN THE NUMBER OF LEVEL 3 CLUSTERS. REDUCE THE NUMBER OF PARAMETERS.

Thank you kindly for your assistance with this!
 Bengt O. Muthen posted on Friday, January 19, 2018 - 5:38 pm
The heading

"Error messages for each replication (if any)"

also refers to warnings. Replications with errors are not included in the summaries, but replications with warnings are.

That MLF warning is specific to 2-level modeling. There is also an MLF warning about binary x's brought into the model, which can be ignored.
 ZHIYAO YI posted on Tuesday, January 28, 2020 - 8:52 pm
Dear Drs. Muthén and Asparouhov,

I am doing a simulation study with DSEM models. I generated and saved the data, then analyzed the datasets separately. However, many datasets did not result in a completed replication (e.g., only 474 out of 500 completed). I read that this problem also occurred in a previous study (Mårten Schultzberg & Bengt Muthén, 2018). As a result, the summary of the results is biased. Is there Mplus code or a way I can use to analyze only the results of the completed replications (excluding the incomplete ones)? If not, could you recommend a way to solve this issue?

Thank you!
 Tihomir Asparouhov posted on Wednesday, January 29, 2020 - 9:09 am
If you are using Mplus external Monte Carlo, the replications that did not converge are already excluded from the results section (see the sketch below).

You could possibly use weakly informative priors for the model parameters to try to improve the convergence rate, or use more iterations. In most cases, convergence problems and bias occur because the data and the model don't match: the model is too ambitious for the data, or the data are too small for the model.
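For reference, a minimal sketch of the external Monte Carlo analysis step; the list-file name is an assumption based on the SAVE pattern used when the data were generated, so adjust it to whatever your generating run actually wrote:

DATA:
  FILE = model2.replist.dat;  ! text file listing the replication
                              ! data files, written by the
                              ! generating run (REPSAVE/SAVE)
  TYPE = MONTECARLO;          ! analyze each file and summarize
                              ! over the converged replications

ANALYSIS:
  ESTIMATOR = BAYES;
  BITERATIONS = (10000);      ! allow more iterations per replication

The VARIABLE and MODEL commands stay the same as in a single-replication analysis.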
 ZHIYAO YI posted on Thursday, January 30, 2020 - 10:49 am
Thank you, Dr. Asparouhov.

I already set biter = (5000). If there are error messages like this in the TECH9 output:

Errors for replication with data file C:\Desktop\50\25\model2.rep1.dat:

THE MODEL ESTIMATION TERMINATED NORMALLY

USE THE FBITERATIONS OPTION TO INCREASE THE NUMBER OF ITERATIONS BY A FACTOR OF AT LEAST TWO TO CHECK CONVERGENCE AND THAT THE PSR VALUE DOES NOT INCREASE.

does it mean there is something wrong with my code?
 Tihomir Asparouhov posted on Thursday, January 30, 2020 - 12:51 pm
That message indicates convergence. You should be able to find 474 of these messages in TECH9 and 26 messages of a different kind, which might help in figuring out the reasons for the convergence problems, although those messages are cryptic and not always informative. In most cases, convergence problems and bias occur because the data and the model don't match: the model is too ambitious for the data, or the data are too small for the model.
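A sketch of following the suggestion in that message, assuming the earlier biter = (5000) run; FBITERATIONS fixes the number of iterations rather than stopping on the PSR criterion:

ANALYSIS:
  ESTIMATOR = BAYES;
  FBITERATIONS = 10000;   ! fixed number of MCMC iterations,
                          ! double the earlier minimum of 5000

If the PSR does not increase over the longer run, the original convergence is more trustworthy.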
 ZHIYAO YI posted on Thursday, January 30, 2020 - 3:30 pm
Thank you so much for the clarification.