Hi, as we are currently benchmarking the performance of several systems, we have two short questions. First, are the files for examples 1 and 2 mentioned in the timing comparison available? Second, what is the difference between the measurements for those two examples in the first two rows of that comparison? The first row is labeled 4.1 (which I take to refer to User Guide Example 4.1, run implicitly with 1 processor), so the Example 1 entry in that row should also have been run with one processor (08:45), which should likewise be the case for the second row, labeled 4.2/Process=1 (04:44). Could you briefly clarify my understanding of this setup? Thanks very much, Jörg Schad
I am running a CFA mixture model (Example 7.17) with 3 factors and 20 factor indicators (categorical data). My computer is a Pentium 4 (3.00 GHz CPU, 2.49 GB of RAM, with only one processor). I have 32-bit Mplus (version 5.1) running on 32-bit Windows (Windows XP, Service Pack 3).
According to your website (http://www.statmodel.com/sysreq.shtml), it is possible to allocate a maximum of 2 GB of total memory (RAM and virtual memory) for 32-bit Mplus on 32-bit Windows. When I run my models, the Windows Task Manager indicates that I am using only 220,000 K of memory and 50% CPU. I was able to increase the CPU usage to 100% by using the command:
Process = 2;
… which does reduce the time for each iteration by 15% (even though I don't have a dual-core processor). However, I am still using only 220,000 K of memory according to the Task Manager. My question is: how can I increase the memory used so that my analysis reaches the 2 GB maximum?
I have never changed the boot configuration of my system. I always start my analysis right after booting my computer, with nothing else open, and I make sure to close all the small processes that use memory and are not related to Windows. Below, I have attached 3 images that demonstrate my shortage of memory.
Do you know if there is a way I can fix this problem?
Note: The only way I have found to increase the memory used is by adding "PROCESSORS = 2 (STARTS);" (or inserting 3, 4, or 5 instead of 2); many threads then open, each using 220,000 K. However, because I have only one processor, the CPU time is distributed more or less equally among the different threads, and it does not improve the speed of the analysis.
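For reference, the kind of setup described in the note above can be sketched as a minimal ANALYSIS command. This is only an illustrative fragment, not the actual input used; in particular, the STARTS values are placeholders:

```
ANALYSIS:
  TYPE = MIXTURE;
  STARTS = 40 8;            ! illustrative random starts (initial, final)
  PROCESSORS = 2 (STARTS);  ! spread the random starts over 2 threads;
                            ! on a single-core CPU this interleaves the
                            ! threads rather than running them in parallel
```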
This is not something that Mplus can fix. We use all available memory. You would need to speak to a hardware technician to see why all memory is not being used. Perhaps Mplus does not need more memory than it is using.
I had a chat with an IT and statistics person, and we ran several tests on different machines. We also ran models after reducing the sample, adding/removing variables, reducing the number of factors, and collapsing categories. On some occasions we reached 240,000 K on a computer with one processor, but we never came close to using the 2 GB maximum of total memory. Only with a dual-core processor did we reach 480,000 K.
Given that I am able to reach 600,000 K with SAS on my machine (only one processor) for other analyses that I do, we came to the conclusion that Mplus does not use all the memory that is available. As you suggest, probably "Mplus does not need more memory than it is using." I understand that a dual-core processor would speed up my analysis, but given that it takes 15 hours for my CFA mixture model (950 cases, 3 factors, and 20 categorical indicators) to run, I thought having 2.5 GB of RAM would compensate a little.
Memory is not as important as the speed and number of processors. Also, the STARTS setting of the PROCESSORS option helps with speed.
That being said, often when model estimation takes a long time, the problem is with the model. If you would like us to see if this is the case for you, please send your input, data, output, and license number to email@example.com.
This discussion is old, how much has changed in the meantime?
Mplus is notably faster than any other SEM software I know, showing how well-developed the algorithms in Mplus are!
Still, some analyses (e.g., a two-level analysis with four MCMC chains, 1.5 million iterations, and a large sample size) take approximately two days for me (iMac 3.2 GHz Intel Core i5, Turbo Boost up to 3.6 GHz, four processors specified with process=4, more than sufficient RAM available).
The gains from upgrading to a better processor seem limited (compared to the cost). Which parameters are most important for speed in the current version of Mplus?
Not much has changed. Currently, using proc=4 with chain=4 would be optimal in your case (generally, setting proc to the number of cores rather than the number of threads is better). We generally prefer the Intel Core i7 (4 cores, 8 threads); almost all of our machines are like that. For slightly more money you can get a desktop with an i7-8700, with six cores and 12 threads. For more money, the Intel Core i9 is already available (but at this point I do not see a perfect match between the Mplus Bayes algorithms and the many cores of the i9). We may soon change the algorithms, though, to take advantage of the additional cores available on newer machines.
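A minimal sketch of the proc=4/chain=4 pairing mentioned above, for a Bayes run; the iteration setting here is a placeholder, not a recommendation:

```
ANALYSIS:
  ESTIMATOR = BAYES;
  CHAINS = 4;             ! one MCMC chain per physical core
  PROCESSORS = 4;         ! run the four chains in parallel
  BITERATIONS = (10000);  ! placeholder minimum; adjust to your model
```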
That said, I would go back to Linda's last statement above: better hardware is no substitute for some human input. We never run 1.5 million iterations. If a model has not converged in 50,000 iterations, there is a problem with the model. In almost all such cases the model is somewhat poorly identified, and we would recommend looking into modifying the model, paying attention to the particular parameter that is slow to converge (reported in TECH8). Weakly informative priors can also help stabilize a model and eliminate computational time spent in the tails of the posterior distributions.
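As one illustration of the weakly informative priors mentioned above, a parameter can be labeled in the MODEL command and given a prior in MODEL PRIORS. The factor, variable, and label names below are hypothetical, and the prior variance is only an example:

```
MODEL:
  f1 BY y1-y5* (lam1-lam5);  ! hypothetical factor with labeled loadings
  f1@1;                      ! fix factor variance for identification

MODEL PRIORS:
  lam1-lam5 ~ N(0, 5);  ! weakly informative normal prior
                        ! (mean 0, variance 5) on the loadings
```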