Anonymous posted on Wednesday, May 18, 2005 - 4:00 pm
I want to do a path analysis for a standardized DV. So instead of a survey weight, my weight is proportional to the reciprocal of the variance of my DV. Can Mplus handle this?
bmuthen posted on Thursday, May 19, 2005 - 11:24 am
You can use the Mplus frequency weight option. There is one catch, however: Mplus currently requires integer frequency weights. You can get around this approximately by replacing your weight w with 1000*w and rounding to integers, which captures the first three significant digits of the weights. You should then use the MLR estimator to get results that are not scale dependent.
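For readers following this thread, the scaling-and-rounding step described above is done outside Mplus when preparing the data file. A minimal sketch in Python, with hypothetical weight values:

```python
# Hypothetical data-prep step (outside Mplus): turn fractional weights
# into integer frequency weights by scaling by 1000 and rounding.
def to_freq_weights(weights, scale=1000):
    """Multiply each weight by `scale` and round to the nearest integer."""
    return [round(w * scale) for w in weights]

weights = [0.167, 0.5, 1.0]
print(to_freq_weights(weights))  # [167, 500, 1000]
```

The resulting integer column is then named in the Mplus FREQWEIGHT option.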
Kevin Wang posted on Tuesday, October 27, 2009 - 2:09 pm
I am using version 5.2
When I use FREQWEIGHT, I receive the error message below, even though my computer should exceed the required capacity. Is there any way to fix it?
*** FATAL ERROR THERE IS NOT ENOUGH MEMORY SPACE TO RUN Mplus ON THE CURRENT INPUT FILE. YOU CAN TRY TO FREE UP SOME MEMORY BY CLOSING OTHER APPLICATIONS THAT ARE CURRENTLY RUNNING. NOTE THAT THE MODEL MAY REQUIRE MORE MEMORY THAN ALLOWED BY THE OPERATING SYSTEM. REFER TO SYSTEM REQUIREMENTS AT www.statmodel.com FOR MORE INFORMATION ABOUT THIS LIMIT.
Regarding your post from 5/19/2005: wouldn't this approach change the standard errors of the estimates, given the 1000-fold increase in the sample size? The reason for asking is that one of my students is trying to incorporate frequency weights from a propensity score analysis into a subsequent mediation analysis. The frequency weights from the propensity score analysis are non-integers, but Mplus does not allow non-integer frequency weights. We therefore followed your advice and multiplied the weights by 1000 to recover three digits, but this increased the sample size 1000-fold. Thank you for your thoughts.
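The concern raised here can be illustrated numerically: if a software package treats the scaled weights as literal frequency counts, the effective n is inflated and naive standard errors shrink by roughly the square root of the scaling factor. A minimal sketch with made-up data:

```python
import math

def freq_weighted_mean_se(x, f):
    """Mean and naive SE, treating f as literal integer frequency counts."""
    n = sum(f)
    mean = sum(fi * xi for xi, fi in zip(x, f)) / n
    var = sum(fi * (xi - mean) ** 2 for xi, fi in zip(x, f)) / (n - 1)
    return mean, math.sqrt(var / n)

x = [1.0, 2.0, 4.0]      # hypothetical data values
f = [2, 3, 5]            # hypothetical frequency weights
m1, se1 = freq_weighted_mean_se(x, f)
m2, se2 = freq_weighted_mean_se(x, [1000 * fi for fi in f])
# The means agree exactly, but the naive SE shrinks by about sqrt(1000).
print(m1 == m2, round(se1 / se2, 1))
```

This is why a scale-robust estimator (as suggested above) matters when using the multiply-by-1000 trick.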
Paul Norris posted on Tuesday, January 07, 2014 - 1:47 pm
Dear Linda, Bengt et al,
Sorry for reviving such an old thread. I'm following the idea of multiplying weights by 1000 and rounding to integers to create weights which can be used with freqweight.
I understand how this adjustment will affect the SEs of regression coefficients etc. However, can anyone comment on how the 1000-fold increase in sample size affects the identification of groups in an LCA model? Will it affect the "optimal" number of groups identified by aBIC? And how about the diagnostics with Estimator=Bayes?
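The worry about information criteria is easy to see arithmetically: the -2*logL part of BIC inflates roughly 1000-fold along with the weights, while the p*log(n) penalty grows only logarithmically, so the penalty becomes relatively negligible and larger models can be favored. A sketch with hypothetical log-likelihoods and parameter counts (not from any real analysis):

```python
import math

def bic(loglik, n_params, n):
    """Bayesian information criterion: -2*logL + p*log(n)."""
    return -2 * loglik + n_params * math.log(n)

ll2, ll3 = -500.0, -495.0   # hypothetical 2- vs 3-class log-likelihoods
p2, p3, n = 10, 16, 70      # hypothetical parameter counts and sample size

# At the original n, the 3-class model's extra parameters do not pay off.
prefer_3_original = bic(ll3, p3, n) < bic(ll2, p2, n)

# After multiplying frequency weights by 1000, -2*logL inflates ~1000-fold
# while the penalty grows only as log(1000*n), so it barely matters.
prefer_3_inflated = bic(1000 * ll3, p3, 1000 * n) < bic(1000 * ll2, p2, 1000 * n)

print(prefer_3_original, prefer_3_inflated)  # the preferred model flips
```

The same logic applies to aBIC, which only modifies the penalty term.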
I have two questions concerning FREQWEIGHT based on this discussion, where I use w*1,000 as my FREQWEIGHT (because w contains fractional values).
(1) Using ML estimation: can the resulting chi-square value simply be corrected by dividing by 1,000?
(2) Using MLR estimation: I get the error that "THE CHI-SQUARE COULD NOT BE COMPUTED. THE CORRECTION FACTOR IS NEGATIVE." Can you help me understand what this means? If it helps I have 70 cases with a w ranging from .167 to 1.000.
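On question (1): for a likelihood-ratio chi-square, multiplying all frequency weights by a constant k multiplies the log-likelihood, and hence the chi-square, by exactly k. A minimal demonstration with a made-up G (likelihood-ratio) statistic:

```python
import math

def g_statistic(observed, expected):
    """Likelihood-ratio (G) statistic: 2 * sum O * ln(O/E)."""
    return 2 * sum(o * math.log(o / e) for o, e in zip(observed, expected))

obs = [30, 70]   # hypothetical observed counts
exp = [50, 50]   # hypothetical expected counts
g1 = g_statistic(obs, exp)
g2 = g_statistic([1000 * o for o in obs], [1000 * e for e in exp])
# g2 equals 1000 * g1 up to floating-point error
print(abs(g2 - 1000 * g1) < 1e-6 * g2)
```

Note this exact proportionality holds for the ML chi-square; the MLR scaling correction involves the data in a different way, so simple division by 1,000 is not guaranteed there.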
I hope someone can help me out; I have a similar error message.
I'm running a path model with type=complex (students of different studies) but I get this error:
THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE FIRST-ORDER DERIVATIVE PRODUCT MATRIX. THIS MAY BE DUE TO THE STARTING VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. THE CONDITION NUMBER IS -0.687D-17. PROBLEM INVOLVING THE FOLLOWING PARAMETER: Parameter 10, AANTALEC ON HAVOMBO
THIS IS MOST LIKELY DUE TO HAVING MORE PARAMETERS THAN THE NUMBER OF CLUSTERS MINUS THE NUMBER OF STRATA WITH MORE THAN ONE CLUSTER.
THE CHI-SQUARE COULD NOT BE COMPUTED. THE CORRECTION FACTOR IS NEGATIVE.
When running the same analysis without type=complex, I get similar results but no error. Can I somehow fix this error? Or is it not possible to account for the fact that the students come from different studies?
In addition to my question above, I am wondering whether I should use type=complex at all. I read somewhere here that you need at least 20 clusters, but I only have 10. What can I do? Include dummies for the studies?
I also ran path models with type=complex and WLSMV (the dependent variable was binary), and then I did not get any error message. Can I trust these results? Or is it better to leave out type=complex here too, because of the small number of clusters?