Scott Weaver posted on Thursday, September 07, 2006 - 3:16 pm

Hello, I am trying to estimate a multinomial logistic model (simultaneously across 13 cohorts/groups). The outcome variable is observed and contains 3 categories. The predictors in the model include a latent factor and 3 observed covariates. There are missing data on the covariates. The covariates are explicitly incorporated into the model, so I need to use Monte Carlo integration. I am getting this error message:

*** FATAL ERROR
THERE IS NOT ENOUGH MEMORY SPACE TO RUN THE PROGRAM ON THE CURRENT INPUT FILE. THE ANALYSIS REQUIRES 3 DIMENSIONS OF INTEGRATION RESULTING IN A TOTAL OF 0.10000E+04 INTEGRATION POINTS. THIS MAY BE THE CAUSE OF THE MEMORY SHORTAGE. YOU CAN TRY TO FREE UP SOME MEMORY BY CLOSING OTHER APPLICATIONS THAT ARE CURRENTLY RUNNING. ANOTHER SUGGESTION IS CLEANING UP YOUR HARD DRIVE BY DELETING UNNECESSARY FILES.

I reduced MONTECARLO(1000) to MONTECARLO(500) but got a similar (though slightly different) error message. I am using a P4 computer with 1 GB of RAM and relatively few background programs running. What can I do, or what computer specs do I need, in order to run this model? If I do not bring the covariates into the model (leave them as X variables) and use standard integration, the model runs, but I lose over 1,000 cases (out of N > 20,000) who are missing data on the covariates. Thank you! Scott

3 dimensions of integration combined with a large sample size of about 20K can cause this. If the missingness on the covariates is important, I would try multiple imputation for the covariates so that you get a unidimensional integration problem. You can handle multiply imputed data in Mplus.
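A minimal sketch of that two-step workflow, with placeholder file and variable names (y, x1-x3, mydata.dat are not from the post), might look like this. Step 1 imputes the covariates and saves the data sets; step 2 analyzes them:

```
! Step 1: impute the covariates (placeholder names)
DATA:            FILE = mydata.dat;
VARIABLE:        NAMES = y x1-x3;
                 MISSING = ALL (-9999);
DATA IMPUTATION: IMPUTE = x1-x3;
                 NDATASETS = 20;
                 SAVE = covimp*.dat;
ANALYSIS:        TYPE = BASIC;

! Step 2, in a second input file: fit the model to the imputed sets.
! Mplus writes a list file naming the saved data sets (check the
! output of step 1 for its exact name).
DATA:            FILE = covimplist.dat;
                 TYPE = IMPUTATION;
```

With TYPE = IMPUTATION, Mplus fits the model to each imputed data set and pools the results, and the covariates no longer need to be brought into the likelihood, so only the latent factor requires integration.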

Hello, I am receiving the error message "THERE IS NOT ENOUGH MEMORY SPACE TO RUN Mplus ON THE CURRENT INPUT FILE. YOU CAN TRY TO FREE UP SOME MEMORY BY CLOSING OTHER APPLICATIONS THAT ARE CURRENTLY RUNNING. NOTE THAT THE MODEL MAY REQUIRE MORE MEMORY THAN ALLOWED BY THE OPERATING SYSTEM." and I'm not sure why, considering the analysis I am running. My machine meets the system requirements listed on the statmodel website.

I have about 450 participants and 120 variables, 72 of which are categorical. I am fitting the 72 categorical variables (declared as such in my input) onto 12 factors, 6 variables per factor, in a CFA. Missing data are minimal, and the observed variables are not very skewed or kurtotic. The factor structure is "known," as this is a measure that has been around for some time. My goal is to work toward a second-order factor analysis, which I can't do without fitting the first-order factors. I have built up the model factor by factor, but around 7 factors it slows down a lot, and when I try to fit all 12 I receive that error message. I am surprised, as I have run much more complex analyses in Mplus before, which makes me think I might be missing something. Any suggestions would be appreciated.

Please send the output and your license number to support@statmodel.com.

Hello, I am trying to get estimates for a forced-choice questionnaire. The questionnaire uses a partial-ranking format, so participants selected two (most and least) of four categories (items). I have 20 factors, 136 items, and about 5,000 participants. When using ULSMV or WLSMV with PARAMETERIZATION = THETA, I get the following message:

INPUT READING TERMINATED NORMALLY

Test ipsative

*** FATAL ERROR
THERE IS NOT ENOUGH MEMORY SPACE TO RUN Mplus ON THE CURRENT INPUT FILE. YOU CAN TRY TO FREE UP SOME MEMORY BY CLOSING OTHER APPLICATIONS THAT ARE CURRENTLY RUNNING. NOTE THAT THE MODEL MAY REQUIRE MORE MEMORY THAN ALLOWED BY THE OPERATING SYSTEM. REFER TO SYSTEM REQUIREMENTS AT www.statmodel.com FOR MORE INFORMATION ABOUT THIS LIMIT.

I would highly appreciate it if you could give me some advice.

Try the ULS estimator. This is a large problem, and whether it runs depends on the configuration of your computer.
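In the input that change would be something like the following sketch (the rest of the setup stays as before; whether PARAMETERIZATION = THETA is still wanted with ULS depends on the model):

```
ANALYSIS: ESTIMATOR = ULS;
          PARAMETERIZATION = THETA;
```

ULS avoids the full weight matrix that the MV estimators need, which is usually what runs out of memory in problems of this size.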

Scarlett Lee posted on Wednesday, February 06, 2013 - 9:17 am

My computer has 8GB of memory. If I want to fit a large model with big data, does whether it runs depend on the computer, and would adding more RAM to my computer make it run? Thanks.

How much RAM can be used depends on the operating system. See System Requirements on the website for further information about how much RAM can be accessed.

We are having an "out of memory" problem for an EFA with 178 dichotomous indicators specified as categorical, estimated with the default WLSMV. The sample size is 50,000, and it won't run even for 1 factor. We are able to get it to run with ULS, but why the out-of-memory problem for WLSMV? Any ideas?

The weight matrix for this problem, which is needed for chi-square and the standard errors, is very large. You can use ULSMV if you want standard errors and chi-square; I think that will work. Or you can use WLSMV with the NOCHI and NOSERROR options of the OUTPUT command.

Or, with a small number of factors, use ML. Or, use Bayes.
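A back-of-the-envelope calculation shows why the weight matrix is the bottleneck here. This sketch assumes one threshold per dichotomous item, p(p-1)/2 tetrachoric correlations, and 8-byte double-precision storage; it is an approximation, not Mplus's actual internal layout:

```python
# Rough size of the full weight matrix for p dichotomous items under WLSMV.
p = 178
n_stats = p + p * (p - 1) // 2        # thresholds + tetrachoric correlations
bytes_needed = n_stats ** 2 * 8       # square weight matrix, 8 bytes per entry

print(n_stats)                        # 15931 sample statistics
print(round(bytes_needed / 2**30, 1))  # roughly 1.9 GiB for the matrix alone
```

ULS and ULSMV sidestep storing and inverting this full matrix, which is why they succeed where WLSMV runs out of memory.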

Cecily Na posted on Friday, August 30, 2013 - 6:06 pm

Hello professors, I'm trying to do multiple imputation for a data set with 50-60 variables, some of which are categorical. The sample size is 12,000. I requested 5 imputed data sets and then received an error message about a shortage of memory space. I have 16GB of memory. What could be the problem, and what is the solution? Also, how can I impute dichotomous variables such as gender? Do I classify them as nominal? And what type of analysis (basic?) should I choose? Thank you very much!

Please send the input to support@statmodel.com. You can specify that a variable is categorical in the IMPUTE list.
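For example, with placeholder variable names (income, age, gender are illustrations, not from the post), a dichotomous variable is marked as categorical by appending (c) after it in the IMPUTE list:

```
DATA IMPUTATION: IMPUTE = income age gender (c);
                 NDATASETS = 5;
                 SAVE = miimp*.dat;
```

Variables flagged with (c) are imputed as categorical rather than continuous, so a dichotomous variable like gender keeps its two values in the imputed data sets.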

Hello, I'm new to Mplus and also to latent class analysis. I tried to do an LCA with Stata (gllamm and the LCA plugin), which took too much time, so my tutor suggested switching to Mplus. Now I get the error message that I don't have enough memory. I have already collapsed my data to create a frequency weight (1,050 observations) and reduced my sample to a single year so that I don't need to do a multiple-group LCA. The only thing I can think of to make it work is to draw a random sample out of the sample I already have. Or are there other possible ways? PS: I don't think the assumption of local independence is met, but I wanted to relax this assumption in a later model.

Here is also my output file:

INPUT INSTRUCTIONS

Title: Stata2Mplus conversion for H:\newdata\Stephan\fLCA1.dta
  List of variables converted shown below:
  unterg : kontrolle : ausbildung : skillisco : selfemp :
  capitalincome : ygroup : lcafw : Frequency

Data: File is H:\newdata\Stephan\fLCA2.dat ;

Variable: Names are unterg kontrolle ausbildung skillisco
  selfemp capitalincome lcafw;
  Missing are all (-9999) ;

Analysis: Type = basic ;

Title: 1. Try LCA with frequency weights

VARIABLE: FREQWEIGHT IS lcafw;
  USEVARIABLES = unterg kontrolle ausbildung skillisco selfemp capitalincome;
  CLASSES = c (1);
  CATEGORICAL = unterg kontrolle ausbildung skillisco selfemp capitalincome;

ANALYSIS: TYPE = MIXTURE;

Output: TECH1 TECH8 TECH10;

*** FATAL ERROR
THERE IS NOT ENOUGH MEMORY SPACE TO RUN Mplus ON THE CURRENT INPUT FILE.

Please send the output with the error message and your license number to support@statmodel.com. Please keep posts to one window in length.

Hi, we're trying to run a multilevel CFA model with weights, with 4 latent factors and one higher-order latent factor. We have about 3,500 observations in the model. We're currently using WLSMV as the estimator, but we'd like to run an alternative model with FIML. When we changed the estimator to MLR, we got the following message:

*** FATAL ERROR
THERE IS NOT ENOUGH MEMORY SPACE TO RUN Mplus ON THE CURRENT INPUT FILE. THE ANALYSIS REQUIRES 6 DIMENSIONS OF INTEGRATION RESULTING IN A TOTAL OF 0.11391E+08 INTEGRATION POINTS. THIS MAY BE THE CAUSE OF THE MEMORY SHORTAGE. YOU CAN TRY TO REDUCE THE NUMBER OF DIMENSIONS OF INTEGRATION OR THE NUMBER OF INTEGRATION POINTS OR USE INTEGRATION=MONTECARLO WITH FEWER NUMBER OF INTEGRATION POINTS SUCH AS 500 OR 5000.

We can't reduce the number of dimensions because we're replicating another study, and we want to stay away from Monte Carlo integration in order to produce consistent fit statistics (chi-square, CFI, TLI, and RMSEA). We tried reducing the number of integration points by specifying INTEGRATION=5 in a similar model, but that only returned AIC and BIC as fit statistics. Is it computationally too demanding to do FIML with this type of MCFA, or is there another way we could run the model? Thanks!

This will be computationally demanding in ML, and because you have categorical outcomes, no overall fit index is available (only the raw data, not correlations, are sufficient statistics). ML gives bivariate fit information in TECH10. ML with Monte Carlo integration using 5000 points can work. If it weren't for the weights, Bayes would be an alternative that has no problem with many dimensions, and as of Version 8.4 more Bayes fit indices are available. I assume you get 6 dimensions because you have 1 between-level factor.
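The point count in the error message is simply points-per-dimension raised to the number of dimensions. Assuming the usual default of 15 points per dimension for standard numerical integration (an assumption here; check the run's output for the actual value), the reported total can be reproduced:

```python
# Integration points grow exponentially in the number of dimensions:
# total = points_per_dim ** dims. 15 per dimension is assumed as the
# default for standard integration, not taken from the post.
points_per_dim = 15
dims = 6
total = points_per_dim ** dims
print(total)  # 11390625, which Mplus reports as 0.11391E+08
```

This exponential growth is why reducing dimensions or switching to Monte Carlo integration (which fixes the total point count) is the standard advice.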