Anonymous posted on Wednesday, May 18, 2005 - 4:00 pm
I want to do a path analysis for a standardized DV. So instead of a survey weight, my weight is proportional to the reciprocal of the variance of my DV. Can Mplus handle this?
bmuthen posted on Thursday, May 19, 2005 - 11:24 am
You can use the Mplus frequency weight option. There is one catch, however: Mplus currently requires integer frequency weights. You can get around this approximately by replacing your weight w with 1000*w, rounded to integers, to capture the first three significant digits of the weights. You should then use the MLR estimator so that the results are not scale dependent.
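The suggested transformation can be sketched as follows (a minimal illustration of the rounding step in Python with made-up weight values; data preparation would normally happen outside Mplus before the weights are read in):

```python
import numpy as np

# Hypothetical fractional weights, e.g. reciprocals of DV variances.
w = np.array([0.167, 0.4219, 0.85003, 1.0])

# Multiply by 1000 and round to the nearest integer so the first
# three digits of each weight survive as an integer frequency weight.
fw = np.round(1000 * w).astype(int)
print(fw)  # fw holds the integer weights 167, 422, 850, 1000
```

These integer values can then be supplied to Mplus via the FREQWEIGHT option.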
Kevin Wang posted on Tuesday, October 27, 2009 - 2:09 pm
I am using version 5.2
When I use FREQWEIGHT, I receive the error message below, even though my computer should exceed the required capacity. Is there any way to fix this?
*** FATAL ERROR THERE IS NOT ENOUGH MEMORY SPACE TO RUN Mplus ON THE CURRENT INPUT FILE. YOU CAN TRY TO FREE UP SOME MEMORY BY CLOSING OTHER APPLICATIONS THAT ARE CURRENTLY RUNNING. NOTE THAT THE MODEL MAY REQUIRE MORE MEMORY THAN ALLOWED BY THE OPERATING SYSTEM. REFER TO SYSTEM REQUIREMENTS AT www.statmodel.com FOR MORE INFORMATION ABOUT THIS LIMIT.
Regarding your post from 5/19/2005: wouldn't this approach change the standard errors of the estimates, given the 1000-fold increase in the sample size? I ask because one of my students is trying to incorporate frequency weights from a propensity score analysis into a subsequent mediation analysis. The frequency weights from the propensity score analysis are non-integers, but Mplus does not allow non-integer frequency weights. We therefore followed your advice and multiplied the weights by 1000 to recover three digits, but this increased the sample size 1000-fold. Thank you for your thoughts.
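The concern can be illustrated numerically (a toy sketch with made-up data and a simple mean estimate, not Mplus's actual computation): the point estimates are unaffected by the x1000 rescaling, but a naive standard error that treats the sum of the frequency weights as the sample size shrinks by roughly sqrt(1000).

```python
import numpy as np

# Hypothetical data: 70 cases with fractional weights, roughly
# matching the range mentioned later in this thread.
rng = np.random.default_rng(0)
x = rng.normal(size=70)
w = rng.uniform(0.167, 1.0, size=70)
fw = np.round(1000 * w)            # integer frequency weights

def naive_weighted_se(x, wt):
    # SE of the weighted mean, treating sum(wt) as the sample size --
    # which is effectively what a frequency-weight analysis does.
    m = np.average(x, weights=wt)
    var = np.average((x - m) ** 2, weights=wt)
    return np.sqrt(var / wt.sum())

se_w = naive_weighted_se(x, w)     # sum of weights ~ true effective n
se_fw = naive_weighted_se(x, fw)   # sum of weights ~ 1000x larger

# The weighted means agree (up to rounding error), but the SE
# shrinks by about sqrt(1000) ~ 31.6.
print(se_w / se_fw)
```

This is why the MLR estimator (or some post hoc correction of the SEs) matters when the weights are artificially inflated.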
Paul Norris posted on Tuesday, January 07, 2014 - 1:47 pm
Dear Linda, Bengt et al,
Sorry for reviving such an old thread. I'm following the idea of multiplying weights by 1000 and rounding to integers to create weights that can be used with FREQWEIGHT.
I understand how this adjustment will affect the SEs of regression coefficients, etc. However, can anyone comment on how the 1000-fold increase in sample size affects the identification of classes in an LCA model? Will it affect the "optimal" number of classes identified by ABIC? And how about the diagnostics with ESTIMATOR = BAYES?
I have two questions concerning FREQWEIGHT based on this discussion, where I use w*1,000 as my frequency weight (because w contains fractional values).
(1) Using ML estimation: can the resulting chi-square value simply be corrected by dividing it by 1,000?
(2) Using MLR estimation: I get the error "THE CHI-SQUARE COULD NOT BE COMPUTED. THE CORRECTION FACTOR IS NEGATIVE." Can you help me understand what this means? If it helps, I have 70 cases with w ranging from .167 to 1.000.
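On question (1): chi-square fit statistics are (approximately) linear in the total sample size, so multiplying all frequencies by 1,000 multiplies the statistic by roughly 1,000 while leaving the degrees of freedom unchanged; dividing by 1,000 approximately undoes the inflation, apart from small rounding error in the weights. A toy illustration of this n-linearity using a Pearson chi-square on made-up counts (not Mplus's actual ML fit statistic, which behaves analogously):

```python
import numpy as np

def pearson_chi2(obs):
    # Standard Pearson chi-square for a two-way table of counts.
    row = obs.sum(axis=1, keepdims=True)
    col = obs.sum(axis=0, keepdims=True)
    exp = row * col / obs.sum()
    return ((obs - exp) ** 2 / exp).sum()

obs = np.array([[20.0, 30.0], [25.0, 25.0]])  # hypothetical counts

chi2 = pearson_chi2(obs)
chi2_scaled = pearson_chi2(1000 * obs)

# Scaling every cell by 1000 scales the statistic by exactly 1000.
print(chi2_scaled / chi2)
```

Question (2), the negative MLR correction factor, would need input from the Mplus developers; it is a separate numerical issue in computing the robust scaling correction.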