Random Starts
Mplus Discussion > Growth Modeling of Longitudinal Data
 Jungeun Lee posted on Friday, December 14, 2007 - 4:38 pm

I am working on a growth mixture model (outcome = continuous, estimator = MLR, TYPE = MIXTURE MISSING). When I increased the number of classes to 4, I encountered the following error. In my current Mplus input for this model, STARTS = 500 20. I am puzzled about what I can do about this...

 Bengt O. Muthen posted on Friday, December 14, 2007 - 5:51 pm
You can increase the number of random starts further until the best LL is replicated. If you have problems replicating it for many random starts, this might indicate that you are trying to extract too many classes - the data don't show signs of that many classes.
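A hedged sketch of what that looks like in the ANALYSIS command (the STARTS values shown are illustrative, not a recommendation for any particular data set):

```
ANALYSIS:
  TYPE = MIXTURE MISSING;
  ESTIMATOR = MLR;
  ! Raise the initial-stage starts (first number) and the
  ! final-stage optimizations (second number) until the best
  ! loglikelihood value is replicated.
  STARTS = 1000 40;
```

The first number is the count of randomly perturbed starting-value sets; the second is how many of the best of those are carried through to full convergence.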
 linda beck posted on Monday, August 18, 2008 - 3:07 am
I have a very complex 3 class model and used starts = 1500 50.

my loglikelihoods were:


Since I get no "loglikelihood warning message", can I trust this solution even though the values are only imprecisely replicated? I'm aware of the possibility of extracting too many classes with this 3-class solution (as posted above). I only want to use the BIC and TECH11 here, in order to have arguments for my preferred 2-class solution, where I had no problems replicating loglikelihood values exactly.

thanks, linda
 Bengt O. Muthen posted on Monday, August 18, 2008 - 6:37 am
The way to check if the first LL is close enough to the second is to check if it gives approximately the same solution in terms of parameter estimates. This in turn can be determined by using the seed for the second LL as "OPTSEED" in a new run where you inspect the parameter estimates and compare them to those of the first LL run.
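A sketch of such a check (the seed value here is hypothetical; substitute the seed printed next to the second-best loglikelihood in your own output):

```
ANALYSIS:
  TYPE = MIXTURE MISSING;
  ! Optimize from the seed of the second-best LL only,
  ! with no random starts, then compare the parameter
  ! estimates to those of the best-LL run.
  STARTS = 0;
  OPTSEED = 948484;  ! hypothetical seed from your output
```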
 linda beck posted on Tuesday, August 19, 2008 - 2:10 am
thanks for that advice. The second LL isolates different kinds of classes, which was often the case when I tried to compute 3-class models with these data.
Although that instability also speaks for 2 classes, what can one do to force LL replication? I have heard of increasing both values in the STARTS option, increasing STITERATIONS, and increasing the number of integration points... Did I miss an option?
 Linda K. Muthen posted on Tuesday, August 19, 2008 - 7:28 am
You can increase starts to as many as 5000 100 and decrease MCONVERGENCE. If this does not help, you may need to use a simpler model.
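For instance (a sketch; the MCONVERGENCE value is illustrative only, chosen to be stricter than a typical setting):

```
ANALYSIS:
  TYPE = MIXTURE MISSING;
  STARTS = 5000 100;
  ! Tighten the ML derivative convergence criterion so that
  ! replicated loglikelihoods agree more precisely.
  MCONVERGENCE = 0.0000001;
```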
 linda beck posted on Tuesday, August 19, 2008 - 9:44 am
Thanks, I had done exactly what you recommended before you posted it, funny... It is still running. One additional question: how can one deal with perturbed starting values that do not converge? There was a message that some did not converge, both under the list of LL values (I guess this refers to the final-stage optimizations) and above the list of LL values.
 linda beck posted on Tuesday, August 19, 2008 - 9:47 am
add: in another post Bengt recommended switching to STSCALE = 1. Is that an option?
 Linda K. Muthen posted on Tuesday, August 19, 2008 - 11:05 am
If you increase the starts to 5000, the run will take more time than with 100 starts. If you have further questions about the output, send your files and license number to support@statmodel.com.

You could try STSCALE = 1. You can also send your files and license number to support@statmodel.com.
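A sketch of that option (STSCALE controls the scale of the random perturbation applied to the starting values; 1 is a smaller perturbation than the usual setting):

```
ANALYSIS:
  TYPE = MIXTURE MISSING;
  STARTS = 5000 100;
  ! Perturb the random starting values less aggressively,
  ! which can reduce nonconvergence of some start sets.
  STSCALE = 1;
```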
 Carey posted on Sunday, February 08, 2015 - 10:21 am
I am slightly confused with how many random starts to use in my LPA. Should the random starts change the solution? If it does, what does it mean?
 Linda K. Muthen posted on Sunday, February 08, 2015 - 11:28 am
You should use enough random starts so that you replicate the best loglikelihood. If you do not, you have reached a local solution. Use, for example, STARTS = 200 50 or more, where the second number is 1/4 of the first.