minyuedong posted on Wednesday, December 07, 2005 - 2:14 am
I am doing a two-level SEM using the MLR estimator. For the chi-square difference analysis, I followed the four steps suggested by Bentler as shown on the web. Comparing model 2 (with restrictions) to model 1, I got a higher df (+7) and a higher adjusted chi-square value (+29.04). My question: to check whether the chi-square difference is significant, should I refer it to the chi-square distribution, or to something else?
I have carried out a series of SEM analyses with one (latent) dependent variable and a number of predictors (a combination of latent and observed variables). I am using the COMPLEX and CLUSTER options, and the output is based on the MLR estimator.
Can I use the Satorra-Bentler 4-step procedure (steps 3 and 4) for testing differences between models in this case?
I use the MODEL CONSTRAINT option and compare models by constraining parameters (one by one) to zero, testing each against the unconstrained model. Does this sound right?
I have read carefully all the dialogues on the Satorra-Bentler test for differences between nested models. There is however, one point that is still unclear to me.
The T0 and T1 values that I plug into the formula: are these the chi-square values I obtain when running the analysis with the MLR estimation procedure, or do I first have to multiply the chi-square values by their respective scaling correction factors?
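For what it's worth, the arithmetic described in the Mplus difference-testing write-up can be sketched as follows (the function name and the numbers are made up for illustration; T0, c0, d0 belong to the nested model and T1, c1, d1 to the comparison model):

```python
def scaled_chisq_diff(T0, c0, d0, T1, c1, d1):
    """Satorra-Bentler scaled chi-square difference test (Mplus convention).

    T0, c0, d0: scaled chi-square, scaling correction factor, and df
                of the nested (more restricted) model.
    T1, c1, d1: the same quantities for the comparison model.
    Returns (TRd, df_diff); TRd is referred to a chi-square
    distribution with df_diff degrees of freedom.
    """
    # Scaling correction for the difference test.
    cd = (d0 * c0 - d1 * c1) / (d0 - d1)
    # T * c recovers the uncorrected ML chi-square, so the scaled
    # values reported by Mplus are used as-is; no pre-multiplication
    # by the user is needed outside this formula.
    TRd = (T0 * c0 - T1 * c1) / cd
    return TRd, d0 - d1

# Illustrative (made-up) numbers:
TRd, ddf = scaled_chisq_diff(T0=120.5, c0=1.20, d0=25,
                             T1=90.0, c1=1.15, d1=20)
# TRd is about 29.36 on 5 df
```

So the answer implied by the formula is: enter the MLR chi-square values as reported; the multiplication by the scaling correction factors is already part of the computation.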
I ran a chi-square difference test using the loglikelihood on two models: one where the residual variances of my latent growth model were held invariant (nested model) and one where they were allowed to vary (comparison model).
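The loglikelihood version of the scaled difference test, as described in the Mplus difference-testing write-up, can be sketched the same way (function name and numbers are made up for illustration; here p0, p1 are the numbers of free parameters, with the nested model having fewer):

```python
def scaled_loglik_diff(L0, c0, p0, L1, c1, p1):
    """Chi-square difference test based on MLR loglikelihoods (Mplus convention).

    L0, c0, p0: loglikelihood, scaling correction factor, and number of
                free parameters for the nested model.
    L1, c1, p1: the same quantities for the comparison model (p1 > p0).
    Returns (TRd, df_diff); TRd is referred to a chi-square
    distribution with df_diff degrees of freedom.
    """
    # Scaling correction for the difference test, based on parameter counts.
    cd = (p0 * c0 - p1 * c1) / (p0 - p1)
    # Scaled likelihood-ratio statistic.
    TRd = -2.0 * (L0 - L1) / cd
    return TRd, p1 - p0

# Illustrative (made-up) numbers:
TRd, ddf = scaled_loglik_diff(L0=-2500.0, c0=1.10, p0=10,
                              L1=-2490.0, c1=1.20, p1=12)
# TRd is about 11.76 on 2 df
```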
Dear Drs. Muthen, I wonder if you could help me with a model fit and chi-square difference test problem. I am running an SEM and trying to assess model fit.
Since 2 of the 3 latent variables on my endogenous factor are categorical, Mplus will not produce RMSEA, CFI fit statistics for me. (As a test, when I made the 2 latent variables continuous, I was then able to produce RMSEA and CFI fit statistics, so I think that is the problem).
I am aware that a second way to assess model fit is to compare nested models via difference testing, using either the chi-square or the loglikelihood. Since no chi-square statistic is produced, I thought I would use the loglikelihood.
However, I think that because I am using multiple imputation, my output produces a loglikelihood value L0 but no scaling correction factor. (As a test, I ran it without the multiple imputation data, and a scaling correction factor was then produced, so I think multiple imputation is the problem.)
I wonder if there are other ideas about how I can assess model fit? Here is my model syntax. Thank you so much for your help!! I sincerely appreciate it.
CATEGORICAL ARE SelfRate4 Functimp4;
CLUSTER = clus;
ANALYSIS:
ESTIMATOR = MLR;
TYPE = COMPLEX;
MODEL:
f1 BY SelfRate4 Functimp4 CountD4;
Zconstr3 ON consc1;
f1 ON Zconstr3;
f1 ON consc1;
f1 ON T12pfc;
ZConstr3 ON sex;
f1 ON health1;
MODEL INDIRECT:
f1 IND Zconstr3 consc1;
Thank you Dr. Muthen! Yes, I'm sorry about my incorrect wording in the last post: I was referring to the indicators! As a follow-up to my question on Wednesday, May 15, 2019 - 10:37 AM in this discussion thread, I have considered using WLSMV to get fit indices. However, in my endogenous latent variable "f1", two of my indicators are categorical and one is a count variable ("CountD4"). Since Mplus cannot do multiple imputation on count data, and CountD4 has a positive skew, I decided to treat CountD4 as "non-normal continuous" so that I could do multiple imputation. I also prefer this approach since I think the output will be easier to interpret. However, I know that WLSMV is not robust to non-normal continuous data.
I think one option may be to run the analysis with WLSMV in order to get the fit indices, and then, as a "sensitivity analysis," re-run it with the MLR estimator to confirm whether the regression coefficients and p-values are about the same. So far, they do seem pretty similar!
I am curious whether you think this is a good approach? Since the factor loadings and p-values seem fairly similar between WLSMV and MLR, the results appear comparable. However, given that WLSMV is not robust to non-normal continuous data, will the fit indices it produces end up being badly off?
WLSMV is robust to non-normality, and you can get average fit indices across the imputations. Using ESTIMATOR = WLSMV with PARAMETERIZATION = THETA, or ESTIMATOR = MLR with LINK = PROBIT, produces models that are directly comparable, i.e., they estimate the same model. Either method would be suitable.