
Anonymous posted on Thursday, March 25, 2004  10:32 am



Is it possible to account for both survey weights and clustering in structural equation models using Mplus when one or more variables are categorical? 


Yes, in Version 3. 

Anonymous posted on Wednesday, May 18, 2005  4:00 pm



I want to do a path analysis for a standardized DV. So instead of a survey weight, my weight is proportional to the reciprocal of the variance of my DV. Can Mplus handle this? 

bmuthen posted on Thursday, May 19, 2005  11:24 am



You can use the Mplus frequency weight option. There is one catch, however: Mplus currently requires integer frequency weights. You can get around this approximately by replacing your weight w with 1000*w and rounding to integers, which captures the first three significant digits of the weights. You should then use the MLR estimator to get results that are not scale dependent. 
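A minimal sketch of that rounding step, done outside Mplus in Python before writing the data file (the weight values and names here are illustrative, not from any poster's data):

```python
# Sketch of the workaround above: Mplus frequency weights must be integers,
# so scale each fractional weight w by 1000 and round, keeping roughly the
# first three significant digits of the original weight.

def to_freqweight(w, scale=1000):
    """Turn a fractional weight into an integer frequency weight."""
    return round(w * scale)

weights = [0.167, 0.5, 1.0]
print([to_freqweight(w) for w in weights])  # [167, 500, 1000]
```

The rescaled integer column is then what would be named on the FREQWEIGHT option.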

Kevin Wang posted on Tuesday, October 27, 2009  2:09 pm



I am using version 5.2. When I use FREQWEIGHT, I receive the error message below, although my computer system should exceed the required capacity. Is there any way to fix it?

*** FATAL ERROR
THERE IS NOT ENOUGH MEMORY SPACE TO RUN Mplus ON THE CURRENT INPUT FILE. YOU CAN TRY TO FREE UP SOME MEMORY BY CLOSING OTHER APPLICATIONS THAT ARE CURRENTLY RUNNING. NOTE THAT THE MODEL MAY REQUIRE MORE MEMORY THAN ALLOWED BY THE OPERATING SYSTEM. REFER TO SYSTEM REQUIREMENTS AT www.statmodel.com FOR MORE INFORMATION ABOUT THIS LIMIT. 


I don't think this has to do with FREQWEIGHT. Please send your input, data, and license number to support@statmodel.com. 


Hi Bengt, regarding your post from 5/19/2005: wouldn't this approach change the standard errors of the estimates, given the 1,000-fold increase in the sample size? The reason for asking is that one of my students is trying to incorporate frequency weights from a propensity score analysis into a subsequent mediation analysis. The frequency weights from the propensity scores are non-integers, but Mplus does not allow non-integer frequency weights. Thus we followed your advice and multiplied the weights by 1000 to recover three digits, but this increased the sample size 1,000-fold. Thank you for your thoughts. Best, Hanno 


You can scale up the SEs at the end using the sqrt of the sample size ratio. 
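A quick sketch of that correction in Python, assuming the 1,000-fold weight scaling above (the numbers are illustrative, not from any Mplus output):

```python
import math

# SEs scale as 1/sqrt(n), so inflating the sample size 1000-fold shrinks the
# reported SEs by sqrt(1000); multiply them back up by the same factor.

def corrected_se(se_reported, n_inflated, n_actual):
    """Rescale a reported SE using the square root of the sample size ratio."""
    return se_reported * math.sqrt(n_inflated / n_actual)

# Illustrative: reported SE 0.01, with n inflated from 70 to 70,000
print(corrected_se(0.01, 70_000, 70))  # 0.01 * sqrt(1000) ≈ 0.316
```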


Hi Bengt, this worked out nicely. Thanks. Hanno 

Paul Norris posted on Tuesday, January 07, 2014  1:47 pm



Dear Linda, Bengt et al., Sorry for reviving such an old thread. I'm following the idea of multiplying weights by 1000 and rounding to integers to create weights that can be used with FREQWEIGHT. I understand how this adjustment will affect the SEs of regression coefficients, etc. However, can anyone comment on how the 1,000-fold increase in sample size affects the identification of groups in an LCA model? Will it affect the "optimal" number of groups identified by ABIC? How about the diagnostics with Estimator=Bayes? Many thanks in advance, Paul 


It seems like you can scale down the two parts of BIC by 1000 to get the usual BIC: the loglikelihood and the BIC penalty part. I don't know about the Bayes PPP. 
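One reading of this adjustment, sketched in Python: divide the loglikelihood by the weight scale and recompute the penalty term at the actual sample size (the function and the inputs are illustrative, not Mplus output):

```python
import math

# Hedged sketch: with weights scaled by 1000, the reported loglikelihood is
# roughly 1000x the unweighted one, and the BIC penalty p*ln(n) is computed
# at the inflated n. Recover an approximate "usual" BIC by rescaling the
# loglikelihood and recomputing the penalty at the actual sample size.

def rescaled_bic(loglik_inflated, n_actual, n_params, scale=1000):
    loglik = loglik_inflated / scale              # undo the 1000-fold inflation
    return -2 * loglik + n_params * math.log(n_actual)

# Illustrative: inflated loglikelihood of -1234500 from n_actual = 70 cases
print(rescaled_bic(-1_234_500.0, n_actual=70, n_params=10))
```

The same per-model comparison logic would then apply when choosing the number of latent classes, since all candidate models share the same scale factor.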


Dear MuthenSquared, I have two questions concerning FREQWEIGHT based on this discussion, where I use w*1,000 as my FREQWEIGHT (because w contains fractions). (1) Using ML estimation: can the resulting chi-square value simply be corrected by dividing by 1,000? (2) Using MLR estimation: I get the error "THE CHI-SQUARE COULD NOT BE COMPUTED. THE CORRECTION FACTOR IS NEGATIVE." Can you help me understand what this means? If it helps, I have 70 cases with w ranging from .167 to 1.000. Many thanks! 


1) Yes. 2) See https://www.statmodel.com/examples/webnotes/webnote12.pdf 


Hi, I hope someone can help me out. I have a similar error message. I'm running a path model with type=complex (students of different studies), but I get this error:

THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE FIRST-ORDER DERIVATIVE PRODUCT MATRIX. THIS MAY BE DUE TO THE STARTING VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. THE CONDITION NUMBER IS 0.687D-17. PROBLEM INVOLVING THE FOLLOWING PARAMETER: Parameter 10, AANTALEC ON HAVOMBO
THIS IS MOST LIKELY DUE TO HAVING MORE PARAMETERS THAN THE NUMBER OF CLUSTERS MINUS THE NUMBER OF STRATA WITH MORE THAN ONE CLUSTER.
THE CHI-SQUARE COULD NOT BE COMPUTED. THE CORRECTION FACTOR IS NEGATIVE.

When running the same analysis without type=complex, I get similar results but no error. Can I somehow repair this error? Or is it not possible to account for the fact that the students are from different studies? Thank you in advance! Best, Miranda 


In addition to my question above, I am wondering whether I should use type=complex at all. I read somewhere here that you need at least 20 clusters, but I only have 10. What can I do? Include dummies for these studies? I also ran path models with type=complex and MLSV, where the dependent variable was binary, and then I did not get any error message. Can I trust these results? Or is it better to leave out type=complex here too, because of the small number of clusters? I hope you can help me! Best, Miranda 


That's a typo. It should be: WLSMV. 


We ask that postings be limited to one window; anything longer should be sent to Support. Please send your output and questions to Support along with your license number. 
