Message/Author 

Sanjoy posted on Tuesday, May 10, 2005  6:49 pm



Dear Professors ... from earlier Mplus discussions I realize that MLR in effect stands for ML (full information) with the Huber-White covariance adjustment, which gives us robustness in the presence of non-normality and non-independence of observations ... also, on page 366 of the Mplus User's Guide, you suggest MLR as an alternative to WLSMV ... I have a couple of quick questions in this regard.

Q1. My dependent (indicator) variables are categorical, hence non-normal ... but is that the same sense of non-normality that is handled by the "sandwich" estimator, a.k.a. Huber-White?

Q2. Can we use MLR in an SEM where we have both a measurement model (on multiple categorical indicators) and a structural equation system including covariates (X's)?

Q3. If yes, can you please suggest some references that are counterparts of your (83, 84, 95, 97) articles?

Q4. Again (if we have a yes to Q2) ... FIML differs from limited-information ML (LIML) only when we are estimating some system of equations, I mean at least more than a one-equation system (say, ours is a three-equation system with endogeneity) ... so what numerical method is used in Mplus when it runs FIML rather than LIML, in order to solve the high-dimensional integrals?

Q5. Now if my understanding of your WLSMV is correct ... it starts with a two-stage least squares approach (which is LIML) and later adjusts the covariance matrices using an appropriate weight matrix (your 97 paper, which upgraded your earlier WLS estimator to the more robust WLSMV) ... in doing so, we simply circumvent the computational burden of FIML (which sometimes becomes infeasible) and yet at the same time get mean- and variance-adjusted robust coefficients ... so what is the real need for MLR; are we still missing anything substantial in the weighted approach?

Below is my model:

R by R1-R3;
B by B1-B3;
Y on R B X1;
R on B X2;
B on R X3;

R1-R3 and B1-B3 are 5-point categorical, Y is 0/1 ... the X's are the covariates and they do share some common elements.

Thanks and regards

bmuthen posted on Wednesday, May 11, 2005  8:23 am



Q1. No, declaring your outcomes as categorical leads to nonlinear models. The non-normality adjustments of Huber-White refer to treating the outcomes as continuous.

Q2. Yes.

Q3. See Hu & Bentler in a fairly recent Sociological Methodology article.

Q4. If the ML estimation requires numerical integration, Mplus offers 3 methods with variations such as adaptive quadrature or not, and Cholesky decomposition (see the User's Guide).

Q5. WLSMV is not as efficient as ML, although the loss seems small. ML handles MAR whereas WLSMV cannot, given its pairwise variable orientation.
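For readers new to the Huber-White adjustment discussed in this thread, here is a minimal Python sketch of the sandwich covariance idea in the simplest (OLS) setting. This is an illustration of the general formula only, not Mplus internals; the data and coefficient values are made up.

```python
import numpy as np

def huber_white_se(X, y):
    """OLS estimates with Huber-White (sandwich) standard errors.

    cov(b) = (X'X)^-1 [X' diag(e_i^2) X] (X'X)^-1,
    where e_i are the OLS residuals ("bread-meat-bread").
    """
    bread = np.linalg.inv(X.T @ X)
    beta = bread @ X.T @ y
    resid = y - X @ beta
    meat = X.T @ (resid[:, None] ** 2 * X)
    cov = bread @ meat @ bread
    return beta, np.sqrt(np.diag(cov))

# Made-up heteroskedastic data: the robust SEs remain valid where the
# classical (model-based) OLS SEs would be biased.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n) * (1 + np.abs(X[:, 1]))
beta, se = huber_white_se(X, y)
```

MLR applies the analogous correction to the ML information matrix rather than to an OLS cross-product, but the bread-meat-bread structure is the same.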

Sanjoy posted on Wednesday, May 11, 2005  2:33 pm



Thank you, Professor ... I couldn't find any Hu & Bentler article on SEM with categorical (ordinal) indicator outcome variables. As per your suggestion I looked for it and searched through Google; except for their 1999 article on model fit indices (Structural Equation Modeling: A Multidisciplinary Journal, Vol. 6(1), 1-55), which I have already requested from interlibrary loan, I couldn't find any. And in Sociological Methodology I could not find any article written by them in particular; instead I found three articles by Prof. Bentler coauthored with others:

1. "Assessing the Effect of Model Misspecifications on Parameter Estimates in Structural Equation Models," Ke-Hai Yuan, Linda L. Marshall and Peter M. Bentler, Sociological Methodology, Vol. 33(1), pp. 241-265, January 2003.
2. "Three Likelihood-Based Methods for Mean and Covariance Structure Analysis with Nonnormal Missing Data," Ke-Hai Yuan and Peter M. Bentler, Sociological Methodology, Vol. 30 (2000), pp. 165-200.
3. "Structural Equation Modeling with Robust Covariances," Ke-Hai Yuan and Peter M. Bentler, Sociological Methodology, Vol. 28 (1998), pp. 363-396.

And in Sociological Methods & Research I couldn't find any article even written by Dr. Bentler (assuming our library's online journal search engine works OK). Could you please mention the name of the article?

Using your WLSMV I got satisfactory results; however, I want to use full-information ML if there is some established statistical theory (like yours for WLSMV) that can handle SEM with categorical (ordinal) indicator outcomes along with covariates, and I hope that in that case Mplus is able to handle that theory ... since all I have is Mplus and one month of time in my hands. Thanks and regards

bmuthen posted on Wednesday, May 11, 2005  4:31 pm



I should have said Yuan & Bentler (2000). Please also see references in Mplus Web Note #2. There is nothing written on ordinal outcomes, only nonnormal continuous outcomes. ML can be quite time consuming if you need many dimensions of integration. 

Sanjoy posted on Wednesday, May 11, 2005  8:05 pm



Yes, Professor, I'm going through their 2000 article ... practically speaking, apart from yours and Prof. Arminger's (JASA, 92, and his chapter in the 95 Handbook) I haven't seen any statistical articles comprehensively dealing with ordinal indicator variables in an SEM framework. If I'm not wrong, GLLAMM can't do that either, I mean not in a scenario where latent factors are regressed on other latent factors along with covariates ... following your advice I started reading Little's book on missing data (2002 ed.); well, if I got it correctly, their chapters on categorical data are primarily concerned with categorical (nominal) rather than categorical (ordinal) data ... I haven't finished the book yet, but this is my first impression. Thanks and regards

Bonnie posted on Friday, December 16, 2005  12:57 pm



Hi, in my model the mediator is categorical while the outcome is continuous. I learned that the WLSMV estimator would give probit regression results and ML would report logistic regression results. Which one is better, the default (probit) one or the logistic results? Would appreciate it! Bonnie


Maximum likelihood is a more efficient estimator than weighted least squares. If you can use it to estimate your model, then I would. 
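As a side note on the probit/logit distinction raised here: probit and logistic coefficients for the same data typically differ by a factor of roughly 1.6-1.8, because the two link functions are nearly proportional after rescaling. A quick Python check of that claim (the 1.702 constant is a classic approximation, not anything Mplus uses):

```python
import numpy as np
from scipy.stats import norm
from scipy.special import expit  # the logistic CDF

# Compare the standard normal CDF with a rescaled logistic CDF.
# With a scaling constant around 1.702 the two curves agree to within
# about 0.01 everywhere, which is why probit and logit coefficients
# for the same data differ by roughly that factor.
x = np.linspace(-4, 4, 200)
max_gap = np.max(np.abs(norm.cdf(x) - expit(1.702 * x)))
```

So the choice between WLSMV (probit) and ML (logit) is mostly about estimation properties and reporting conventions (e.g., odds ratios), not about substantively different models.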


Hello, I have a cross-lagged panel design and I am using Mplus to test the following model:

VARIABLE: NAMES ARE clus g u2-u5 x1-x5;
  CATEGORICAL IS u3-u5;
  CLUSTER IS clus;
  USEVARIABLES u2-u5 x2-x5;
ANALYSIS: TYPE = COMPLEX;
  PARAMETERIZATION = THETA;
  ITERATIONS = 2000;
MODEL: x5 ON x4 x3 x2 u5;
  x4 ON x3 x2 u4;
  x3 ON x2 u3;
  x2 ON u2;
  u5 ON u4 x4;
  u4 ON u3 x3;
  u3 ON u2 x2;
OUTPUT: modindices standardized;
SAVEDATA: DIFFTEST = deriv.dat;

Mplus automatically chooses the WLSMV estimator. I recently received a comment from a reviewer asking me to support the use of this estimator. Can you direct me to a reference or an article where the appropriateness of this estimator is explained (in my case / for this model)? He/she refers to Firth (1992) and McCullagh (1992), who question the use of a sandwich estimator in adjusting covariance matrices. The same reviewer also asks me to give a rationale for reporting the fit indices CFI, TLI and RMSEA. I know that these are (some of) the standard fit indices that Mplus produces, but is there a rationale behind the choice of these fit indices over others, and if so, can you direct me to a reference where this is explained? Thank you very much!


The following paper, which is available on Bengt Muthén's UCLA website, studies the WLSMV estimator: Muthén, B., du Toit, S.H.C. & Spisic, D. (1997). Robust inference using weighted least squares and quadratic estimating equations in latent variable modeling with categorical and continuous outcomes. Accepted for publication in Psychometrika. The sandwich estimator is a commonly accepted approach which is widely used. See Hu & Bentler several years ago in Psychological Methods regarding a variety of fit statistics. See the Yu dissertation on this website for a study of fit statistic behavior for WLSMV.


I have SEM output with ordinal indicators thus WLSMV estimates. I need to learn how to interpret and report the estimates and how missing data is handled. Will you please point me to appropriate references. Thank you. 


With WLSMV, probit regressions are estimated. One good reference is one of the Agresti books on categorical data analysis. With WLSMV and no covariates, pairwise present is used. This means that each correlation is estimated using all available data. The Little and Rubin book may cover this topic. 
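To make "pairwise present" concrete, here is a small pandas sketch. Pearson correlations are used only to illustrate the missing-data handling; WLSMV itself works with tetrachoric/polychoric correlations. The data values are made up.

```python
import numpy as np
import pandas as pd

# "Pairwise present": each correlation is computed from every case that
# is observed on that particular pair of variables, so different cells
# of the correlation matrix can be based on different sample sizes.
# pandas' DataFrame.corr() uses exactly this pairwise deletion.
df = pd.DataFrame({
    "u1": [1, 0, 1, np.nan, 0, 1],
    "u2": [1, 1, np.nan, 0, 0, 0],
    "u3": [0, 1, 1, 1, np.nan, 0],
})
pairwise_r = df.corr()             # pairwise deletion (the default)
listwise_r = df.dropna().corr()    # listwise deletion, for contrast

# How many cases contribute to each pairwise correlation:
obs = df.notna().astype(int)
n_per_pair = obs.T @ obs
```

Here listwise deletion keeps only the 3 complete rows, while the (u1, u2) pairwise correlation uses the 4 rows on which both variables are observed.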


Thank you, Linda. There are covariates in my model. What does this mean with regard to missing data?


Dear professors, in a paper we use the WLSMV estimator because we have a continuous mediator and a categorical dependent variable. I have 2 questions concerning the estimator. 1. I have difficulty finding out whether WLSMV automatically applies the Huber-White correction or whether I should do this manually by providing a weight in the syntax. Could you help me with this? 2. One of the reviewers wants some more information on the estimator and I am looking for a good reference on it. I tried to find the reference mentioned above by Muthén et al. in Psychometrika, but I can't find it anywhere. Do you have (another) suggestion for a good reference describing the ins and outs of WLSMV? Many thanks, Serge Rijsdijk


You can also use maximum likelihood estimation in this case. 1. The standard errors are like Huber-White. MLR provides Huber-White standard errors. With WLSMV you do not need to provide a weight. 2. The Muthén et al. paper is on the website under Papers. See also Muthén, B. & Satorra, A. (1995). Technical aspects of Muthén's LISCOMP approach to estimation of latent variable relations with a comprehensive measurement model. Psychometrika, 60, 489-503.


Dear Linda, I am testing a mediation model with a dichotomous dependent variable. As I prefer logistic (to probit) regression, I took the MLR estimator instead of WLSMV. As usual, I wish to first estimate a measurement/correlational model and then the mediation models. Unfortunately, MLR does not seem to work for this first model, as I get error messages:

*** ERROR in MODEL command
Variances for categorical outcomes can only be specified using PARAMETERIZATION=THETA with estimators WLS, WLSM, or WLSMV. (...)
Covariances for categorical variables with other variables are not defined.

Am I doing something wrong, or does this mean that I should estimate the measurement model with WLSMV and the mediation models with MLR? Would it make sense to use these two different estimators for testing the same model (in the sense of the same variables)? Thank you, Claudia


Variances of categorical variables are not parameters in cross-sectional models using weighted least squares estimation or maximum likelihood. You should remove specifications of variances for categorical outcomes from the MODEL command. With weighted least squares estimation and the Theta parameterization, variances can be estimated for multiple group and growth models.

Averdijk posted on Tuesday, August 23, 2011  1:36 am



Dear Dr. Muthén, I estimated an MSEM model with all binary observed variables (1-(1-1)-1 design; few missing values). I ran the model using both MLR and WLSMV, but get very different results in terms of significance. I realize that MLR uses logit and WLSMV probit, but I expected the significance levels to be similar. When I rerun the direct effects in Stata (xtlogit, fe), those results are much more similar to the MLR results than to the WLSMV results. Please find my syntax below. Do you have any suggestions on why the MLR and WLSMV results differ, and whether one is more accurate than the other? Many thanks in advance.

*MLR
VARIABLE: names are key x1 x2 x3 x4 y;
  usevariables are x1 x3 y x4 x2;
  categorical = x1 x3 y x4 x2;
  missing are all (999);
  cluster = key;
  within are x1 x3 y x4 x2;
ANALYSIS: type = twolevel;
  estimator = mlr;
  integration = montecarlo;
MODEL: %within%
  x3 on x1 (a1);
  x4 on x1 (a2);
  y on x3 (b1);
  y on x4 (b2);
  y on x2;
  x1 on x2;
  y on x1;
MODEL CONSTRAINT: NEW(indw1 indw2);
  indw1 = a1*b1;
  indw2 = a2*b2;

*WLSMV
Same as MLR, except: estimator = wlsmv;


Other than scale differences, the results should be close. Please send the two outputs and your license number to support@statmodel.com. 


Thanks for sending the outputs. You have missing data. The way missing data are handled is different between WLSMV and MLR. I would use MLR. 

Dan Sass posted on Monday, September 19, 2011  8:37 am



Hello, I am aware that it is inappropriate to evaluate the change in approximate fit indices (ΔAFI) when doing invariance testing with WLSMV, whereas this is not a problem with ML. Is it appropriate to evaluate the ΔAFI with MLR? Thanks!


With MLR you must use the scaling correction factor that is provided. See Chi-square Difference Test for MLM and MLR on the website.
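For reference, the scaled difference computation described on that web page can be sketched as a small function (variable names here are my own; the formulas follow the statmodel.com page for MLM/MLR):

```python
def scaled_chisq_diff(T0, df0, c0, T1, df1, c1):
    """Satorra-Bentler scaled chi-square difference test for MLM/MLR.

    Model 0 is the nested (more restrictive) model, model 1 the
    comparison model. T = printed scaled chi-square, df = degrees of
    freedom, c = scaling correction factor reported by Mplus.
    Returns the scaled difference statistic TRd and its df.
    """
    # difference-test scaling correction
    cd = (df0 * c0 - df1 * c1) / (df0 - df1)
    # T * c recovers the uncorrected ML chi-square for each model
    TRd = (T0 * c0 - T1 * c1) / cd
    return TRd, df0 - df1

# Example with made-up numbers (not from any real analysis):
TRd, ddf = scaled_chisq_diff(T0=120.5, df0=50, c0=1.20,
                             T1=100.2, df1=45, c1=1.15)
```

TRd is then referred to a chi-square distribution with df0 - df1 degrees of freedom; comparing the raw scaled chi-squares directly would be wrong.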

Dan Sass posted on Monday, September 19, 2011  11:23 am



I understand that is the case for the change in chi-square, but what about the change in CFI, TLI, RMSEA, and SRMR?


I know of no theory related to using the above fit statistics for difference testing. 

Dan Sass posted on Tuesday, September 20, 2011  3:07 pm



Perhaps my question was poorly worded. Based on my understanding, it is statistically inappropriate to evaluate the change in approximate fit indices (e.g., change in CFI = CFI for the measurement-invariant model minus the CFI for the configural-invariance model) because WLSMV does not allow a direct comparison between models due to the adjusted chi-squares. Consequently, more emphasis should be placed on the change in chi-square using the DIFFTEST procedure than on proposed model fit criteria (e.g., Chen, 2007; Cheung & Rensvold, 2002; Meade et al., 2008) based on the change in approximate fit indices. My question is whether the same logic applies to MLR estimation because of the scaling factors, which are likely different for the measurement-invariant and configural-invariance models. I do not think the scaling factors should influence the approximate fit indices (and thus the change in approximate fit indices), but I wanted to make sure. Thanks for your time!


I don't know of a theory for comparing fit indices such as CFI even in the case of ML for continuous normal variables. So I don't think it is a matter of DIFFTEST or MLR scaling factors. MLR scaling factors do play into the computation of CFI because CFI is based on the MLR chi-square, which is in turn affected by the MLR scaling factor.
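For concreteness, CFI is computed from the model and baseline chi-squares as in this sketch (the standard Bentler (1990) formula; the input numbers are made up). With MLR, the chi-squares plugged in are the scaled ones, which is how the scaling factors enter the index:

```python
def cfi(chisq, df, chisq_baseline, df_baseline):
    """Comparative Fit Index from model and baseline chi-squares.

    CFI = 1 - max(T - df, 0) / max(T_b - df_b, T - df, 0).
    """
    num = max(chisq - df, 0.0)
    den = max(chisq_baseline - df_baseline, chisq - df, 0.0)
    return 1.0 - num / den if den > 0 else 1.0

# Hypothetical values: model chi-square 85 on 40 df,
# baseline chi-square 900 on 55 df.
value = cfi(chisq=85.0, df=40, chisq_baseline=900.0, df_baseline=55)
```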


Dear Professors, I'm interested in estimating a two-level structural equation model with Mplus using the MLR estimator, so I would like to read something about this estimator. I found the following bibliography: Asparouhov & Muthén (2003a). Full-information maximum-likelihood estimation of general two-level latent variable models. Manuscript in preparation. Asparouhov, T. & Muthén, B. (2003b). Maximum-likelihood estimation in general latent variable modeling. Manuscript in preparation. Please, can you tell me where I can find them, or where I can find something similar? Thanks


I would recommend looking at Technical Appendix 8 on the website. The MLR estimator is discussed here. See formula 170. Other estimators are described in Technical Appendix 4. 


Dear Professors, my dependent (indicator) variables are latent (composed of categorical variables). When I run the model, Mplus automatically chooses WLSMV. I cannot choose ML since I have more than 4 factors and I have a lot of missing values. Hence, I was wondering 1) whether I should choose MLMV or WLSMV? 2) If I choose MLMV, I need to do listwise deletion. I was wondering, if I choose WLSMV and do not do listwise deletion, would it just use all the data available, and will that cause a problem? 3) Lastly, if I choose WLSMV, will all the regressions be probit coefficients, or since the outcome variable is a latent variable will they be linear regression coefficients? Kind regards.


What do you mean by your DVs being latent? Do you mean that you have a factor model for categorical items and the factors are DVs? And you have missing on those categorical factor indicators? 


I am very sorry to confuse you. I will try to explain it with an example. What I mean is my outcome variable is a latent variable (composed of categorical items), i.e.:

Usevariables are x81 x115 x97 x31 x32 x33 y1;
Categorical are x81 x115 x97 x31 x32 x33;
Model:
  aliena by x81 x115 x97;
  alienb by x31 x32 x33;
  aliena alienb on y1;
  alienb on aliena;

And I have missing values on the items (i.e., in x81 I have 32 missing, in y1 I have 100 missing, etc.). Hence when I do listwise deletion I lose a lot of data (from 13,530 to 4,000). So firstly, I was wondering which estimator I should be choosing; I was thinking it would be WLSMV but I am not 100% sure. Secondly, I was wondering, if I do not put listwise = deletion, will it cause a problem? Lastly, I was wondering, if I choose WLSMV, will all the regressions be probit coefficients, or since the outcome variable is a latent variable will they be linear regression coefficients? Kindest regards.


With a lot of missing data, I would use MLR. I would not use listwise deletion. You have only two factors so this will require two dimensions of integration, which should be okay. The regressions of the factors on the covariates are linear regressions.


The best estimator in the presence of missing data for you would be the ML estimator. In your model there are only 2 factors, so I am not sure how you see more than 4 factors. You should be able to obtain the ML estimates by specifying the appropriate type of numerical integration: if you have 1-3 factors you can use the default integration = 15 (no need for a special command); if you have 4 factors you can use integration = monte(5000) or integration = 10; for 5 or more factors use integration = monte(5000). Alternatively you can use the multiple imputation routines available in Mplus or the Bayes estimation. See Section 3 in http://statmodel.com/download/BayesAdvantages18.pdf. You should add "y1;" to the model, which essentially changes the y1 variable into a dependent variable, and thus when y1 is missing the observation will not be deleted. I would not recommend using listwise deletion to deal with the missing data.
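To make the "dimensions of integration" idea concrete, here is a one-dimensional Python sketch of what quadrature does for a single factor with one binary probit indicator. The loading and threshold values are made up for illustration, and Mplus's adaptive quadrature is more refined than this fixed-point version; the point is only that the marginal probability is an integral over the latent factor, approximated by a weighted sum over quadrature points, and that the number of points multiplies across factor dimensions.

```python
import numpy as np
from scipy.stats import norm

# Marginal probability of endorsing a binary item under a probit
# factor model:  P(u = 1) = integral of Phi(lam*eta - tau) * phi(eta) d(eta),
# approximated with 15-point Gauss-Hermite quadrature (15 points per
# dimension mirrors the Mplus default mentioned above).
nodes, weights = np.polynomial.hermite_e.hermegauss(15)
weights = weights / weights.sum()   # normalize to a standard normal expectation

lam, tau = 0.8, 0.3                 # made-up loading and threshold
p_marginal = np.sum(weights * norm.cdf(lam * nodes - tau))

# This particular integral has a closed form we can check against:
p_exact = norm.cdf(-tau / np.sqrt(1 + lam**2))
```

With k factors the sum runs over a grid of 15**k points, which is why many factor dimensions make ML integration expensive and why Monte Carlo integration is suggested for 4 or more factors.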


Dear Prof. Linda, thanks a lot for your quick reply. I gave that as an example; in my real model I have 5 factors. That is why I was thinking I wouldn't be able to use MLR and hence was thinking of using WLSMV. Do you think that is appropriate? And is it alright not to use listwise deletion in WLSMV? And lastly, in WLSMV the regressions of the factors (even though they are made up of categorical items) are linear, am I correct? Thanks a lot for your help, Kindest regards.


Yes, you don't have to use listwise deletion for WLSMV. The sample statistics of WLSMV uses pairwise present data. And yes, the factor regressions are linear because the DV is continuous. 


I am conducting a small empirical analysis comparing CFA and Bayesian (via BUGS) approaches to estimating unidimensional IRT models. Given the relatively simple nature of my models (one dimension, 20 indicators, dichotomous scoring, sample sizes varying from 50-1,000), I am using an FIML with standard integration approach to estimation. As my data is simulated, I don't have any missing values. I know from previous experience and from reading through these forums that the ML approach is more efficient than the WLSMV approach, and is preferable under conditions of limited dimensionality. My question is whether or not someone can point me to some empirical work that supports this, so I can read/verify/cite it. Thank you in advance!


We have a paper discussing the performance of Bayes for IRT with binary items on this website: Asparouhov, T. & Muthén, B. (2010). Bayesian analysis of latent variable models using Mplus. Technical Report. Version 4. There is also the least-squares vs. ML article: Forero & Maydeu-Olivares (2009). Estimation of IRT graded response models: Limited versus full information methods. Psychological Methods, 14, 275-299. And then there is the classic article which shows that ML isn't that much more efficient than using tetrachoric correlations (WLSMV): Mislevy (1986). Recent developments in the factor analysis of categorical variables. Journal of Educational Statistics, 11, 3-31. It is very easy in Mplus to do your own simulation study comparing Bayes, ML, and WLSMV.


Dear professors, I have a question with regard to Mplus's handling of missing data. When I choose WLSMV and do not put "listwise deletion", how are the missing data handled? Similar to maximum likelihood, does WLSMV use all the available information? Secondly, when I'm using WLSMV with missing data (and have 5 factors and 1 observed variable), how do I know how many subjects are used in the model, and what number of observations should I report when writing a paper? Kindest regards.


Q1. See the UG, pp. 7-8. Q2-Q3. The maximum sample size is printed in the output; report that. WLSMV uses pairwise present data for estimating the sample correlations.


Dear Tihomir Asparouhov, after your advice I tried to run my model as:

Usevariables are mv81 mv115 mv97 CV8 CV10 PSH PSS PSHo FAIC Act24 Adap24 Inten24 AnxT depT PSConf phy emo;
Categorical are mv81 mv115 mv97 PSH PSS PSHo PSConf phy emo AnxT depT CV8 CV10;
Analysis:
  integration = monte(5000);
  estimator = ml;
Model:
  alient by Act24 Adap24 Inten24;
  alienp by PSS PSHo PSH;
  aliend by emo phy PSConf;
  alienm by depT AnxT;
  alienvic by mv97 mv81 mv115 CV8 CV10;
  alienp with alient;
  alienp with aliend;
  alient with aliend;
  alient on FAIC;
  alienp on FAIC;
  aliend on FAIC;
  alienp on alienm;
  alient on alienm;
  aliend on alienm;
  FAIC with alienm;
  alienvic on alienp;
  alienvic on alienm;
  alienvic on FAIC;
  alienvic on aliend;
  alienvic on alient;
output: stand;

It starts running the analysis but it is very, very slow in the MS-DOS screen, and after 2 hours my computer crashes. Do you have any suggestions with regard to this? Thanks a lot, Kind regards.


Please send the relevant files and your license number to support@statmodel.com. 


Hello, I was reading that with MLR estimation, Mplus by default uses 10 sets of random starting values, run through 20 iterations, to avoid local solutions in the likelihood function. I have a couple questions: 1) These multiple runs make standard errors smaller, right? 2) With ML estimation, this "10 random starting values" process is not done, right? So, this is unique to the MLR estimation? 3) The nomenclature "robust standard errors" for MLR estimation is curious to me because it is the beta estimates that become robust, right? The standard errors become smaller, and the beta estimates become more robust to multiple runs with different local solutions, and to nuances of specific samples from a given population. Is that right? Please do correct my thinking where I am wrong. Thanks! 


1. Random starts are not used with all analyses. They are used primarily with mixture modeling. They do not make the standard errors smaller. 2. Random starts are used with ML and MLR. 3. No, it is not the parameter estimates that are robust. It is the standard errors. Everything you say is incorrect. 

Bennie posted on Monday, June 18, 2012  2:22 pm



Hello, I am a new researcher and have some questions as I have become confused with what I have read on this site and on other sites. 1) Is the WLSMV estimator the best option in research at the moment for categorical variables when investigating a model? 2) If the WLSMV estimator is used, is it fine to report CFI, TLI and RMSEA? As I have read that the WRMR is still a bit unreliable? So in other words is keeping with the rule of thumb of three fit indices for a model acceptable? Thank you in advance for your replies. 


1. Both WLSMV and ML are good estimators for categorical variables. WLSMV is probit. The ML default is logistic, but probit is also available. If you have a model with many factors with categorical indicators, WLSMV is less computationally demanding. Also, if you want to include residual covariances between categorical observed variables, WLSMV is less computationally demanding. ML has better missing data handling. 2. WRMR is an experimental test statistic. I would consider all of the other test statistics.

Aylin posted on Wednesday, June 20, 2012  4:12 am



Dear Linda, in my path analysis (using WLSMV), if I include correlations between two categorical variables, is this correlation a tetrachoric correlation? And if I look at the correlations between two continuous variables, are they Pearson correlations? Thank you.


If you are looking at sample statistics, you will find either tetrachoric or polychoric correlations for categorical variables and Pearson correlations for continuous variables. If you are looking in the results section of the output, it will be the same for categorical variables but covariances for continuous variables.
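For readers who want to see what a tetrachoric correlation actually is, here is a small Python sketch (my own illustration, not the Mplus algorithm): each binary variable is assumed to arise by thresholding a latent standard normal variable, and the latent correlation is estimated by maximum likelihood from the 2x2 table.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm
from scipy.optimize import minimize_scalar

def tetrachoric(table):
    """ML estimate of the tetrachoric correlation from a 2x2 table.

    table[i][j] = count of cases with (u1 = i, u2 = j). Thresholds are
    fixed at the values implied by the marginal proportions, and rho is
    found by maximizing the bivariate-normal cell likelihood.
    """
    table = np.asarray(table, dtype=float)
    n = table.sum()
    t1 = norm.ppf(table[0].sum() / n)     # P(u1 = 0) = Phi(t1)
    t2 = norm.ppf(table[:, 0].sum() / n)  # P(u2 = 0) = Phi(t2)

    def neg_loglik(rho):
        p00 = multivariate_normal.cdf([t1, t2], cov=[[1, rho], [rho, 1]])
        p = np.array([[p00, norm.cdf(t1) - p00],
                      [norm.cdf(t2) - p00,
                       1 - norm.cdf(t1) - norm.cdf(t2) + p00]])
        return -np.sum(table * np.log(np.clip(p, 1e-12, None)))

    return minimize_scalar(neg_loglik, bounds=(-0.99, 0.99),
                           method="bounded").x

# Simulate binary data from latent normals with a true rho of 0.5:
rng = np.random.default_rng(1)
z = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=20000)
u = (z > 0).astype(int)
counts = np.array([[np.sum((u[:, 0] == i) & (u[:, 1] == j)) for j in (0, 1)]
                   for i in (0, 1)])
rho_hat = tetrachoric(counts)
```

The estimate recovers the latent correlation (about 0.5 here) even though the observed Pearson correlation of the dichotomized variables is attenuated.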

Aylin posted on Wednesday, June 20, 2012  10:55 am



I was referring to the results section of the output. I am not sure what you mean by "it will be the same for categorical". So will it be tetrachoric correlations? Thank you

Aylin posted on Wednesday, June 20, 2012  12:17 pm



And another quick question: what, then, are the correlations (in the results section of the output) of latent variables with the WLSMV estimator? Are they covariances and not Pearson correlations?


Yes. They are covariances. 

Aylin posted on Wednesday, June 20, 2012  1:58 pm



Thank you very much Linda. Is there a way to estimate the Pearson r correlations between latent variables? Or am I completely on the wrong page? 


Ask for TECH4 in the OUTPUT command. 

Jan Zirk posted on Friday, July 20, 2012  6:51 am



Dear Linda or Bengt, I have a mediation model with a group mediator (b). Model 1: c on b; b on a; c on a; The Pearson correlation matrix shows a strong negative correlation of c with a and a strong negative correlation/beta for c with b. MLR shows that 'c on a' in Model 1 is highly nonsignificant. Bayesian estimation and WLSMV show that 'c on a' is significant and the estimate is unexpectedly positive. When the group mediator is taken out (Model 2: c on a), 'c on a' becomes significantly negative under MLR, WLSMV and Bayesian estimation. There can be some conceptual/phenomenological explanation of this paradoxically positive direction of the relationship in the mediation model, but I wonder what the possible common feature of Bayes and WLSMV is which makes them able to detect that in Model 1 'c on a' is paradoxically highly positively significant, while MLR and ML fail to do this?


You mention that your mediator b is a "group mediator" by which I assume it is binary. ML estimation uses the observed binary mediator b as the predictor of c. In contrast, WLSMV uses the continuous latent response variable b* behind b as the predictor. The default for Bayes does the same as WLSMV, but Bayes can also use the observed mediator as ML does (using the mediator= option of the Analysis command). As argued in the paper below, the standard approach of estimating an indirect effect as a product of two coefficients is only appropriate using the b* approach. If b itself is the substantively motivated mediator you need to instead approach the indirect effect estimation as is done in the paper based on "causally defined" effects. Muthén, B. (2011). Applications of causally defined direct and indirect effects in mediation analysis using SEM in Mplus. This paper is on our web site under Papers, Mediational Modeling. 

Jan Zirk posted on Saturday, July 21, 2012  4:06 am



Dear Bengt, Thank you so much for your generous and precise help. Indeed, my "group mediator" is binary. Best wishes, Jan 

Jan Zirk posted on Saturday, July 21, 2012  4:24 am



"the standard approach of estimating an indirect effect as a product of two coefficients is only appropriate using the b* approach." Does this mean that the MLR coefficients are completely uninterpretable and the MLR information criteria can not be used as evidence for model preference over the alternative? 


The coefficients for a -> b and b -> c are interpretable, but the indirect effect cannot be said to be the product of those 2 coefficients.

Jan Zirk posted on Saturday, July 21, 2012  9:16 am



Oh, I see; thank you very much for help. Jan 


Would you please help me in assessing model fit for an SEM model with categorical data using MLR? I am not familiar with MLR and appreciate all guidance. The model is below, where outcome is binary and pain1-pain3 are 4-category ordinal variables. I am using MLR because I am interested in estimating an odds ratio for outcome associated with the pain factor.

VARIABLE: NAMES = id sex age outcome pain1 pain2 pain3;
  USEVAR = sex age outcome pain1 pain2 pain3;
  CATEGORICAL = outcome pain1 pain2 pain3;
  CLUSTER = id;
  MISSING = .;
ANALYSIS: TYPE = COMPLEX;
  ESTIMATOR = MLR;
MODEL: pain by pain1* pain2 pain3;
  pain@1;
  outcome ON pain sex age;

For model fit, I got the following. How do I use this to assess whether the model fits the data adequately? Thanks much!

MODEL FIT INFORMATION
Number of Free Parameters            16
Loglikelihood
  H0 Value                           -1784.887
  H0 Scaling Correction Factor       1.7027 for MLR
Information Criteria
  Akaike (AIC)                       3601.774
  Bayesian (BIC)                     3668.884
  Sample-Size Adjusted BIC           3618.100
    (n* = (n + 2) / 24)


This fit information is to be used for comparisons with other competing models. Your model states that you have no direct effects from the pain factor indicators to the outcome. So the model can be tested by adding one such direct effect at a time. For absolute fit to the data you can request TECH10 in the OUTPUT. You can also estimate this with WLSMV or Bayes to get fit statistics, although that would bring you to using probit, not logit. 

Anonymous posted on Friday, September 06, 2013  5:19 pm



Hello, I am running a mediational path model (no SEM) that has both continuous and categorical observed variables. I have a continuous variable that is skewed (it is not an outcome) and I am wondering what estimator would be best to use with this type of model. I had previously run the model using WLSMV because I understand that it will handle continuous and categorical outcomes, however I am unsure of whether WLSMV is robust to nonnormal data. If not, is it appropriate to use MLR with this model? 


WLSMV is not robust to nonnormal continuous variables. If your continuous variables are nonnormal, I would suggest using MLR which is robust to nonnormality and can also handle a combination of continuous and categorical observed variables. One issue with categorical variables and maximum likelihood estimation is that each factor with categorical factor indicators requires one dimension of integration and you don't want a model with too many dimensions of integration. 


Hello, I have a continuous dependent variable from complex survey data which exhibits kurtosis and so is nonnormal. I am running regressions to explore the impact of the kurtosis on the residuals. I have read Yuan & Bentler (2000) and Muthén & Asparouhov (2002) which show the MLR estimator to perform well when the dependent variable has kurtosis of a similar amount to the variable I have. My question is, should I expect regression residuals to be more normally distributed after having estimated a regression using MLR compared to a nonrobust estimator, or does it not work this way? Kind regards. 


I don't think the residuals will be different because ML and MLR do not differ in terms of estimates, only in terms of SEs. 


Hello, why is maximum likelihood preferable to weighted least squares when comparing the fit of nested measurement models with categorical indicators? Do you have a reference on this?


I don't know that ML is preferable in this case. 


Dear Dr. Muthén, I want to modify WLSMV by changing the link function to another function. My question is: can I change the link function in WLSMV? Regards


WLSMV only allows the probit link function. This is what makes WLSMV modeling very general, because you have access to residual correlation parameters, which you don't have as easily with the logit link.


Hi Dr. Muthén, I want to ask you regarding WLSMV and WLS: can I use these methods with categorical data without outliers? Thanks a lot


Yes. 


Hi, I want to know what you mean by the adjusted mean and variance in the WLSMV method. How do you adjust the mean and variance? Thanks in advance.


See the following paper which is available on the website: Muthén, B., du Toit, S.H.C., & Spisic, D. (1997). Robust inference using weighted least squares and quadratic estimating equations in latent variable modeling with categorical and continuous outcomes. 
