How to monitor progress without using...
 Jon Heron posted on Monday, November 20, 2006 - 5:23 am
Hi,


I use tech8 when performing multiple random starts to keep track of how long the program has left to run.

The problem is that this can create rather large text files
(my record is 0.5 Gig - my poor PC was not impressed!)

Is there another way I can monitor progress without upsetting my PC?


cheers


Jon
 Jon Heron posted on Monday, November 20, 2006 - 5:52 am
On a related note, I can no longer turn tech8 off!
 Linda K. Muthen posted on Monday, November 20, 2006 - 6:57 am
We also noted the huge text files with TECH8, so now TECH8 always goes to the screen, but it is written to the output file only if you request TECH8 in the OUTPUT command. We felt users would always want to monitor their progress on screen even when they did not want the technical details in the output file. How do you feel about this?
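
A minimal sketch of how that plays out in an input file (the ANALYSIS settings here are illustrative, not from this thread):

ANALYSIS:  TYPE = MIXTURE;
           STARTS = 100 25;   ! TECH8 progress always appears on screen
OUTPUT:    TECH8;             ! include this only if you also want the
                              ! iteration history written to the output file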
 Jon Heron posted on Monday, November 20, 2006 - 7:25 am
I think that's a perfect solution!
 Jon Heron posted on Monday, November 27, 2006 - 3:49 am
... unless you're running a BLRT

I use the initial bit of the TECH14 output to establish whether I've located the optimal n and n-1 class models, and then I stop and try a new OPTSEED if this is not the case.

If I can't turn off TECH8, the output whizzes past so quickly that I can't make this judgement.
 Linda K. Muthen posted on Monday, November 27, 2006 - 10:13 am
Our most recent suggestion regarding TECH14, which is given under TECH14 in the user's guide on the website, is to first find a replicated solution without using TECH14. Then use OPTSEED and TECH14 in conjunction with the LRTSTARTS option.
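
A sketch of that two-step workflow; the seed, start values, and LRTSTARTS numbers below are placeholders rather than recommendations:

! Run 1: find a replicated best loglikelihood without TECH14
ANALYSIS:  TYPE = MIXTURE;
           STARTS = 500 50;
OUTPUT:    TECH8;

! Run 2: fix that solution via its seed and request the bootstrap LRT
ANALYSIS:  TYPE = MIXTURE;
           OPTSEED = 107446;       ! seed next to the best LL in Run 1
           LRTSTARTS = 2 1 50 15;  ! starts for the k-1 and k class models
                                   ! within each bootstrap draw
OUTPUT:    TECH14;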
 Jon Heron posted on Tuesday, November 28, 2006 - 12:44 am
Hi Linda,

I find optimal/replicated solutions for the n and n-1 class models without using TECH14, and then bring in TECH14 to carry out the BLRT.

Unfortunately, not all OPTSEED values that replicate the n-class model will recreate the optimal model for *both* the n and n-1 class models.

One way to ensure that it does appears to be to make sure the classes in the n-class model are ordered in increasing size; however, this is not always possible (in my example at least).

A quicker alternative I have found is to ensure that the H0LL and H1LL values quoted at the start of a TECH14 run correspond to the loglikelihoods for the replicated n-1 and n class models. I then know that the BLRT is going to be comparing the two models I want it to.

I am now experimenting with different LRTSTARTS/LRTBOOTSTRAP options to see if I can get it to stop after quoting these initial H0LL/H1LL.
 Linda K. Muthen posted on Tuesday, November 28, 2006 - 9:40 am
It is not necessary to have an OPTSEED for the k-1 class model. It is only necessary to have it for the k class model. The checking of the loglikelihoods is done automatically.
 Jon Heron posted on Wednesday, November 29, 2006 - 12:07 am
I don't have an OPTSEED for the k-1 class model.
I use the OPTSEED for the k class model and find that it often won't recreate the optimal k-1 class model when it comes to the BLRT.

I wonder if this is something to do with the settings we've been using:

LRTSTARTS = 0 0 150 15;
LRTBOOTSTRAP = 100;
 Linda K. Muthen posted on Wednesday, November 29, 2006 - 9:49 am
The OPTSEED option is only for the k class model. I think you should modify the LRTSTARTS option. The first two numbers are for the k-1 class analysis. The last two are for the k class analysis. I would try

LRTSTARTS = 2 1 150 15;

If that does not work, I would increase the last two numbers. You may just have a difficult model. Some are tougher than others.
 Jon Heron posted on Thursday, November 30, 2006 - 5:29 am
Hi Linda,

We may be going round in circles due to my lack of understanding, so thank you for your patience.

I have found that the k-1 class model referred to with 'H0 Loglikelihood Value' in the BLRT output is strongly dependent on the ordering of classes in the k class model (and hence on the OPTSEED which generates the k-class model).

The restriction that the largest class is last (as described in the manual) does not seem sufficient for my model - I have found that the only way to be certain of obtaining the correct k-1 class model is to ensure monotonically increasing class sizes within the k class model.

Hence I have come up with a way of running a quick BLRT to ensure that the correct models are being referred to, and then running a longer BLRT to estimate the p-value. This is quicker than attempting (and often failing) to have the k-classes in increasing order of size.
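
A sketch of the quick-check-then-full-run idea; the seed is a placeholder and the small LRTBOOTSTRAP value is only an assumption about how to make the check cheap:

! Quick check: confirm the H0LL/H1LL quoted at the start of TECH14
! match the replicated k-1 and k class solutions
ANALYSIS:  TYPE = MIXTURE;
           OPTSEED = 107446;
           LRTBOOTSTRAP = 10;
           LRTSTARTS = 2 1 20 5;
OUTPUT:    TECH14;

! Full run: same input, with LRTBOOTSTRAP and LRTSTARTS raised to
! values worth waiting for before trusting the p-value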
 Linda K. Muthen posted on Thursday, November 30, 2006 - 8:39 am
In our experience, ordering the classes is not necessary. If you would like, you can send your input, data, output, and license number to support@statmodel.com and we can see why you need to do this.
 Matt Moehr posted on Tuesday, March 06, 2007 - 8:43 am
Linda,

I was wondering if/how this thread was resolved because I have a related question.

In my case, I used the strategy of choosing a model based on BIC and then confirming with the BLRT (TECH14). I think the correct solution is somewhere between 3 and 5 classes. The BIC values for the 5-, 4-, 3-, and 2-class models (M5, M4, M3, M2) are 4831, 4784, 4784, and 4809, respectively. All of the models seemed to have stable class counts and were well replicated with STARTS = 100 25.

When I started using the BLRT, the results seem much less clear cut. I think the root of the problem is that the loglikelihood reported in the TECH14 section of the output is not the same as the LL I got when I was using the BIC criterion. For example, when running a model with 5 classes, the H0 model is a 4-class solution and TECH14 shows:

H0 Loglikelihood Value -2139.25

But when I separately estimated the 4-class model, the replicated LL was -2128.5. So two questions:
1) Is it a problem that H0LL != LL(n-1) ?

2) Can I compute the LR test statistic based on the replicated LL values (kind of like a naive chi-square), and then compare that LR to the distribution of bootstrap draws?
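
For reference, the difference being bootstrapped by TECH14 is the usual likelihood-ratio statistic (a definitional note, not a verdict on whether the hand comparison is valid):

    LR = 2 * [ LL(k classes) - LL(k-1 classes) ]

so, as the earlier posts in this thread suggest, the TECH14 result is driven by the H0 loglikelihood printed in that section (-2139.25 here) rather than by the separately replicated -2128.5.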
 linda beck posted on Thursday, August 07, 2008 - 9:26 am
Using the OPTSEED from a model run without TECH14 is intended (in part) to avoid doing the k-class analysis again when using TECH14. But the k-class solution derived from the OPTSEED should be the same as doing the k-class analysis again, am I right?
I ask because my k-class model is already very complex and time-consuming. I want to compute the BLRT directly without computing the stable k-class solution again, so OPTSEED is the choice?
 Bengt O. Muthen posted on Friday, August 08, 2008 - 8:43 am
Yes, the OPTSEED run will give exactly the same solution (check the loglikelihood) as the k-class analysis from which you got the OPTSEED value.
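
This is the same shape as the OPTSEED run sketched earlier in the thread; a minimal version with a placeholder seed:

ANALYSIS:  TYPE = MIXTURE;
           OPTSEED = 107446;   ! seed of the best LL from the earlier k-class run
OUTPUT:    TECH14;

If the loglikelihood printed in this run equals the one from the original k-class analysis, the OPTSEED has reproduced the same solution.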
 Harald Gerber posted on Friday, October 24, 2008 - 10:33 am
Is ordering of classes necessary when using TECH14 and TECH11? From your experience, any news since 2006? Unfortunately, I have solutions where the last class is extracted first, but I get the impression that the BLRT and LRT are not influenced by that.
 Bengt O. Muthen posted on Friday, October 24, 2008 - 6:22 pm
No ordering needed.
 J.D. Haltigan posted on Friday, April 20, 2018 - 4:16 am
An oddity I am wondering if any others have noticed: if I request TECH8 in the output and then remove it and rerun the model, it still prints to the output. Do I need to manually delete the original output file, or should the new (much smaller) output file overwrite the old one?
 Linda K. Muthen posted on Friday, April 20, 2018 - 3:40 pm
Please send the output without the TECH8 option where TECH8 is printed and your license number to support@statmodel.com.