I wanted to show my class how Mplus does FIML by default, but it appears that this isn't the case for a linear regression with a single observed, continuous outcome. I have a demo data set with 20 cases. The data are complete for the predictors, and half of the cases are missing on Y. If I simply regress Y on X1 and X2, Mplus drops the cases with missing data on Y, even though I have not specified LISTWISE=ON in the DATA command. The estimates are the same as what I get in SPSS using listwise deletion. Am I correct, then, that FIML is not the default in regression with a single DV? That is question #1.
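For reference, here is the kind of input I'm running; the file name, variable names, and missing-value code are just placeholders from my demo:

```
DATA:     FILE = demo.dat;
VARIABLE: NAMES = y x1 x2;
          MISSING = ALL (-999);
MODEL:    y ON x1 x2;
```

With this input, the output reports only the 10 complete cases as the number of observations.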
Now for question #2. I know I can force Mplus to use all the cases by naming the variances of X1 and X2 in the MODEL command. But when I do so, I get exactly the same unstandardized estimates and SEs as in the listwise-deleted model. Why would that be? I thought that naming the variances would trigger FIML and that FIML would produce less biased (i.e., different) unstandardized estimates.
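Concretely, the only change I make is to mention the predictors by themselves in the MODEL command, which frees their variances and brings them into the likelihood (variable names are from my demo):

```
MODEL:    y ON x1 x2;
          x1 x2;       ! free the variances of the predictors
```

The output now shows all 20 cases being used, yet the slopes and SEs match the listwise run.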
I have replicated this behavior in a larger data set where I know for certain that the probability of missingness on Y depends on one of the X variables.
I am considering how to address missing data for plain multiple linear regression (i.e., 1 DV measured as an observed variable). I have missing data on both the DV and predictors.
1. If you include auxiliary variables in this case, do they actually contribute to the model estimation? The saturated-correlates examples I have seen focus on latent variable models.
2. For this model, what would be the FIML equivalent in Mplus of performing multiple imputation on the IVs and DV while including extra auxiliary covariates in the imputation model? It appears from the answer to the prior question that naming the variances of the Xs does not affect the slopes when Y is missing.
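In case it helps, this is the saturated-correlates setup I have been trying, using the AUXILIARY option with the (m) setting; z1 and z2 stand in for my extra covariates, and I may have the syntax details wrong:

```
DATA:     FILE = demo.dat;
VARIABLE: NAMES = y x1 x2 z1 z2;
          USEVARIABLES = y x1 x2;
          AUXILIARY = z1 z2 (m);   ! missing-data auxiliary variables
          MISSING = ALL (-999);
MODEL:    y ON x1 x2;
          x1 x2;                   ! keep the predictors in the likelihood
```

Is this the right way to get the equivalent of MI with auxiliary variables for a manifest-variable regression?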