There can be a difference but it should not be large. Please send the output and your license number to firstname.lastname@example.org.
Anonymous posted on Wednesday, December 01, 2010 - 11:44 am
I am also having this problem in a CFA that I am running. The same model, run in Mplus versions 5.1, 6.0, and 6.1, shows the issue: some loadings are non-significant in the unstandardized results but significant (or marginal) in the fully standardized (STDYX) results.
Please let me know if you'd also like me to send the output and license number to support. Thank you in advance!
Raw and standardized results will not always show the same significance.
Anonymous posted on Friday, December 03, 2010 - 10:52 am
Thanks for your reply, Linda! I'm writing up the results of the CFA for publication. Which results would you suggest reporting, and which should I use to draw the CFA figure (the correlations between the factors and the loadings are not always consistent between the two sets of results)?
You should use what others in your field and in the journal you are writing for use.
Anonymous posted on Monday, December 06, 2010 - 12:49 pm
Linda, I just want to make sure that the raw and standardized results really should vary this much. For example, for the loadings I have p-values of .12 vs. .001 (raw vs. STDYX), and for the correlations between the factors I have p-values of .18 vs. .002 and .04 vs. .11 (raw vs. STDYX). As these examples show, the difference is often extreme, and it is usually, but not always, the STDYX result that is significant. I just want to double-check that it is not strange for the results to be this different, and to ask whether you can offer some explanation of, or a resource on, why this difference occurs.
I really can't say much without seeing the outputs where you see the differences. Otherwise I am just guessing. You can send them to support along with your license number if you want. The two coefficients have different sampling distributions.
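For what it's worth, here is a sketch of the usual reason (standard STDYX algebra, not specific to these outputs): the standardized loading rescales the raw loading by estimated standard deviations, and those estimates carry their own sampling variability.

```latex
% STDYX standardization of a factor loading:
\hat{\lambda}^{\,\mathrm{STDYX}} = \hat{\lambda} \cdot \frac{\hat{\sigma}_{\eta}}{\hat{\sigma}_{y}}
% The delta-method standard error of the standardized loading depends on the
% joint covariance of (\hat{\lambda}, \hat{\sigma}_{\eta}, \hat{\sigma}_{y}),
% so it is generally not proportional to SE(\hat{\lambda}); the two z-ratios,
% and hence the two p-values, can differ.
```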
I posted too soon. Thanks, Tihomir, but in the context of this conversation I'm looking specifically to be able to use something analogous to percentile bootstrap confidence intervals for quantities with asymmetric sampling distributions. If I read correctly, these are methods for generating bootstrapped standard errors (and therefore symmetric CIs).
To get the bootstrap confidence interval (the asymmetric version), request CINT(BOOTSTRAP) in the OUTPUT command; bootstrapping must also be turned on in the ANALYSIS command.
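A minimal input sketch (the number of draws and the rest of the setup are illustrative, not from this thread):

```
ANALYSIS:
  BOOTSTRAP = 1000;     ! number of bootstrap draws (illustrative)
OUTPUT:
  CINT(BOOTSTRAP);      ! percentile (asymmetric) bootstrap confidence intervals
```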
Incidentally, the TYPE=COMPLEX bootstrap comes in two versions:
1. you supply replicate weights yourself (bootstrap, BRR, jackknife, ...);
2. Mplus generates the replicate weights internally from the information given (weights, clusters, and strata) and the desired resampling method (bootstrap, BRR, jackknife, ...) - so you use this version when you don't have replicate weights.
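Sketches of the two setups; the variable names (wt, rw1-rw80, psu, strat) are placeholders, not from this thread:

```
! Version 1: replicate weights already in the data
VARIABLE:
  WEIGHT = wt;
  REPWEIGHTS = rw1-rw80;
ANALYSIS:
  TYPE = COMPLEX;
  REPSE = BOOTSTRAP;

! Version 2: no replicate weights; Mplus generates them internally
VARIABLE:
  WEIGHT = wt;
  CLUSTER = psu;
  STRATIFICATION = strat;
ANALYSIS:
  TYPE = COMPLEX;
  REPSE = BOOTSTRAP;
```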
Thinking about this, would it be possible to change the error message that results from using BOOTSTRAP with TYPE=COMPLEX? Something like "You may be able to use replicate weights for this problem," rather than just a rejection, could have saved me years of frustration!
The BOOTSTRAP option with weights is only allowed with TYPE=COMPLEX when replicate weights are present or when REPSE=BOOTSTRAP is requested.
The BOOTSTRAP option with TYPE=COMPLEX and REPSE=BOOTSTRAP requires a weight variable. Specify a weight variable using the WEIGHT option in the VARIABLE command. If weights are not present, create a new variable (weight) for the WEIGHT option. Assign weight=1 in the DEFINE command.
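A sketch of that workaround when the data set has no weight variable; the name weight follows the message above, and the rest of the setup is illustrative:

```
DEFINE:
  weight = 1;           ! constant weight, as the message suggests
VARIABLE:
  WEIGHT = weight;
ANALYSIS:
  TYPE = COMPLEX;
  REPSE = BOOTSTRAP;
```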