I need to run a cross-validation of my analysis on a different sample. My original CFA was conducted on the sample for Company A; now I need to estimate this model's fit on the sample for Company B. In other words, I need to take two steps: 1) estimate the measurement model on sample 1; 2) cross-validate: estimate the fit of the same model on sample 2 with all parameters fixed to the values estimated in step 1.
I do not see an easy way to do this in Mplus (estimate the model parameters on one sample, then check that model against the data from the second sample). Can such a cross-validation be done automatically? Thank you!
You can use the SVALUES option of the OUTPUT command to obtain a model input whose starting values are the ending values of an analysis, for example, the analysis of sample 1. You can change the asterisks (*) to @ to fix the parameters at those values rather than use them as starting values.
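For concreteness, here is a minimal sketch of the two runs. The variable names (y1-y6), file names (companyA.dat, companyB.dat), and all numeric values are hypothetical placeholders, not output from a real analysis.

Step 1 input (run on the Company A sample):

TITLE:    Step 1 - estimate the CFA on the Company A sample;
DATA:     FILE = companyA.dat;
VARIABLE: NAMES = y1-y6;
MODEL:    f1 BY y1-y3;   ! first loading of each factor fixed at 1 by default
          f2 BY y4-y6;
OUTPUT:   SVALUES;

The SVALUES section of the step 1 output prints the model with the final estimates attached as starting values (marked with *). Copy that into the step 2 input and change each * to @ so the values are fixed rather than free:

Step 2 input (run on the Company B sample):

TITLE:    Step 2 - cross-validate on the Company B sample;
DATA:     FILE = companyB.dat;
VARIABLE: NAMES = y1-y6;
MODEL:    f1 BY y1@1 y2@0.853 y3@0.792;   ! illustrative values only
          f2 BY y4@1 y5@0.911 y6@0.768;
          f1 WITH f2@0.305;
          f1@0.642; f2@0.588;             ! factor variances
          y1@0.412; y2@0.377; y3@0.455;   ! residual variances
          y4@0.390; y5@0.348; y6@0.401;

(If means are part of the model, the SVALUES output will also include intercepts, e.g. [y1@2.113]; fix these the same way.) Because every parameter in step 2 is fixed, the chi-square and fit statistics from that run describe how well the Company A solution reproduces the Company B data.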
I have one more question after performing the analysis: why does fixing a parameter (in particular, a covariance between latent constructs) increase the degrees of freedom of the model? The number of estimated parameters goes down, but the degrees of freedom go up. Is there a text (or an intuitive explanation) that would explain this?
Any time one parameter is fixed, the degrees of freedom increase by one. The degrees of freedom are the number of free parameters in the H1 model minus the number of free parameters in the H0 model. The H1 model (the unrestricted means and covariances) is unaffected by restrictions you place on the H0 model, so each fixed H0 parameter widens the difference by one.
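If it helps, here is the arithmetic for a hypothetical two-factor CFA with 6 indicators, analyzing the covariance structure only (one loading per factor fixed at 1 for scaling); the parameter counts are illustrative:

\begin{align*}
p_{H_1} &= \tfrac{6(6+1)}{2} = 21 \quad \text{(variances and covariances of } y_1,\dots,y_6\text{)}\\
p_{H_0} &= 4 \text{ loadings} + 6 \text{ residual variances} + 2 \text{ factor variances} + 1 \text{ factor covariance} = 13\\
df &= p_{H_1} - p_{H_0} = 21 - 13 = 8
\end{align*}

Fixing the factor covariance removes one free parameter from the H0 model, so $df = 21 - 12 = 9$: one parameter fewer, one degree of freedom more.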