
minyuedong posted on Wednesday, December 07, 2005  2:14 am



I am doing a two-level SEM using the MLR estimator. For the chi-square difference analysis, I refer to the 4 steps suggested by Bentler as shown on the web. I got a higher df (+7) and a higher adjusted chi-square value (+29.04) when comparing model 2 (with restrictions) to model 1, so my question is: in order to check whether the chi-square difference is significant, should I refer to the chi-square distribution or something else? Thanks in advance 


You should refer to the chi-square distribution. 
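The p-value the reply refers to can be computed directly. Below is a minimal sketch (not Mplus output) that evaluates the chi-square upper tail for the values reported in the question, a difference of 29.04 on 7 df, using the standard recurrence for the regularized upper incomplete gamma function:

```python
import math

def chi2_sf(x, df):
    """Upper-tail p-value of a chi-square variate.
    Computes Q(df/2, x/2) via the recurrence
    Q(a+1, t) = Q(a, t) + t^a * e^(-t) / Gamma(a+1)."""
    t = x / 2.0
    if df % 2 == 0:
        q, a = math.exp(-t), 1.0             # Q(1, t) for even df
    else:
        q, a = math.erfc(math.sqrt(t)), 0.5  # Q(1/2, t) for odd df
    while a < df / 2.0:
        q += math.exp(a * math.log(t) - t - math.lgamma(a + 1.0))
        a += 1.0
    return q

# The poster's values: chi-square difference 29.04 on 7 df.
p = chi2_sf(29.04, 7)
print(f"p = {p:.6f}")  # far below 0.05, so the restrictions would be rejected
```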


I have carried out a series of SEM analyses with one dependent variable (latent) and a number of predictors (a combination of latent and observed variables). I am using the COMPLEX and CLUSTER options. The output provides the MLR estimator. Can I use the Satorra-Bentler 4-step procedure (steps 3 and 4) for testing differences between models in this case? I use the MODEL CONSTRAINT option and compare models by constraining parameters (one by one) to be equal to zero, comparing with the unconstrained model. Does this sound right? Is it possible to get negative chi-square values? 


Yes, you can use the Satorra-Bentler 4 steps. You can use MODEL CONSTRAINT for fixing parameters to zero, but it might be easier to just impose these simple constraints in the MODEL command using @0. Negative chi-square values are possible and have been discussed by Bentler in the literature. 


Dear Linda, I have read carefully all the dialogues on the Satorra-Bentler test for differences between nested models. There is, however, one point that is still unclear to me. Are the T0 and T1 values that I use when I apply the formula the chi-square values I obtain when running the analysis with the MLR estimation procedure, or do I have to multiply the chi-square values by their respective scaling correction factors first? Best regards, Leif 


Those are ML values, and yes, you need to multiply the MLR chi-square values by the scaling correction factor to obtain ML. 
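The computation described in this exchange can be sketched in a few lines. The chi-square values, scaling correction factors, and degrees of freedom below are hypothetical, purely for illustration:

```python
# Satorra-Bentler scaled chi-square difference test (steps 3-4 of the
# 4-step procedure), with hypothetical input values.
# T0, c0, d0: MLR chi-square, scaling correction factor, and df of the
# nested (more restricted) model; T1, c1, d1: same for the comparison model.
T0, c0, d0 = 45.00, 1.2, 20   # hypothetical nested model
T1, c1, d1 = 30.00, 1.3, 15   # hypothetical comparison model

# Multiplying each MLR chi-square by its scaling factor recovers ML.
cd = (d0 * c0 - d1 * c1) / (d0 - d1)   # scaling correction for the difference
TRd = (T0 * c0 - T1 * c1) / cd         # scaled chi-square difference
print(f"cd = {cd:.4f}, TRd = {TRd:.4f}, df = {d0 - d1}")
# Refer TRd to a chi-square distribution with d0 - d1 degrees of freedom.
```

As noted earlier in the thread, TRd (and cd) can come out negative in some data/model combinations; Bentler has discussed this in the literature.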


Hi, I ran a chi-square difference test using the loglikelihood on two models: one where the residual variances for my latent growth model were set as invariant (nested model) and one where they were allowed to vary (comparison model). 

Nested model: Loglikelihood value: -26148.950; No. of parameters: 55; Scaling Correction Factor: 1.1884 
Comparison model: Loglikelihood value: -26130.583; No. of parameters: 61; Scaling Correction Factor: 1.3295 

Results: Scaling Correction for the Difference (cd): 2.6229; Chi-Square Difference (TRd): 14.0050; Difference in the number of parameters: 6 

I was wondering how to interpret these results (i.e., how do I tell which model is the best to use)? Any help is appreciated, Sophie 


Since the p-value is 0.03 (< 0.05), you would typically reject the nested model. You can get the p-value using a chi-square online calculator or the CHIDIST(14,6) function in Excel. 
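For reference, the reported TRd, cd, and p-value can be reproduced from the numbers in the post. This is a sketch, not Mplus output; the loglikelihoods are taken with negative signs, since a positive TRd requires the nested model's loglikelihood to be lower. The p-value uses the closed-form chi-square upper tail available when the df difference is even:

```python
import math

# Values reported in the post (loglikelihoods with their negative signs).
L0, p0, c0 = -26148.950, 55, 1.1884   # nested model (invariant residuals)
L1, p1, c1 = -26130.583, 61, 1.3295   # comparison model (varying residuals)

cd = (p0 * c0 - p1 * c1) / (p0 - p1)   # scaling correction for the difference
TRd = -2.0 * (L0 - L1) / cd            # scaled chi-square difference
df = p1 - p0                           # difference in free parameters

# Chi-square upper tail, closed form for even df: e^(-t) * sum_i t^i / i!
t = TRd / 2.0
p_value = math.exp(-t) * sum(t**i / math.factorial(i) for i in range(df // 2))
print(f"cd = {cd:.4f}, TRd = {TRd:.4f}, df = {df}, p = {p_value:.4f}")
```

This gives p of about 0.03, matching the reply: the invariance restrictions are rejected in favor of the comparison model.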


Ah, thank you Tihomir! 


Dear Drs. Muthen, I wonder if you could help me with a model fit and chi-square difference test problem. I am running an SEM and trying to assess model fit. Since 2 of the 3 latent variables on my endogenous factor are categorical, Mplus will not produce RMSEA or CFI fit statistics for me. (As a test, when I made the 2 latent variables continuous, I was then able to produce RMSEA and CFI fit statistics, so I think that is the problem.) I am aware that a second way to test model fit is to compare fit with nested models, using difference testing based on the chi-square or loglikelihood. Since no chi-square statistic is produced, I thought I would use the loglikelihood. However, I think because I am using multiple imputation, my output produces a loglikelihood value L0 but no scaling correction factor. (As a test, I ran it without the multiple imputation data, and then a scaling correction factor was produced, so I think multiple imputation is the problem.) I wonder if there are other ideas about how I can assess model fit? Here is my model syntax. Thank you so much for your help!! I sincerely appreciate it. 

CATEGORICAL ARE SelfRate4 Functimp4; 
CLUSTER = clus; 
ANALYSIS: 
ESTIMATOR = MLR; 
TYPE = complex; 
MODEL: 
f1 BY SelfRate4 Functimp4 CountD4; 
Zconstr3 ON consc1; 
f1 ON Zconstr3; 
f1 ON consc1; 
f1 ON T12pfc; 
ZConstr3 ON sex; 
f1 ON health1; 
MODEL INDIRECT: 
f1 ind Zconstr3 consc1; 


I assume that when you say "2 of the 3 latent variables on my endogenous factor are categorical" you are referring to the factor indicators, not the factors themselves (that is, you are not talking about latent class variables). You can use WLSMV or Bayes to get overall fit measures. Chi-square difference testing with multiple imputation is not readily available. See also our Short Course Topic 9. 


Thank you Dr. Muthen! Yes, I'm sorry about my incorrect wording in the last post; I was referring to the indicators! As a follow-up to my question of Wednesday, May 15, 2019 10:37 AM in this discussion thread, I have considered using WLSMV to get fit indices. However, in my endogenous latent variable "f1", 2 of my indicators are categorical but one is a count variable ("CountD4"). Since Mplus cannot do multiple imputation on count data but CountD4 has a positive skew, I decided to treat CountD4 as non-normal continuous so I could do multiple imputation. I also prefer this approach since I think it will be easier to interpret the output. 

However, I know that WLSMV is not robust to non-normal continuous data. I think one option may be for me to run the analysis with WLSMV in order to get the fit indices, and then, as a sensitivity analysis, re-run the analysis with the MLR estimator to confirm whether the regression coefficients and p-values are about the same. So far, they do seem pretty similar! I am curious whether you think this is a good approach? Since the factor loadings and p-values seem somewhat similar between WLSMV and MLR, it seems the results are somewhat similar. However, given that WLSMV is not robust to non-normal continuous data, will the fit indices that are produced end up being very incorrect? Thank you so much! 


WLSMV is robust to non-normality, and you can get average fit indices across the imputations. Using estimator=wlsmv; parameterization=theta or estimator=MLR; link=probit produces models that are directly comparable, i.e., they estimate the same model. Both methods would be suitable. 
