Choice of estimator
 Leigh Roeger posted on Sunday, January 30, 2000 - 9:51 pm
I am working on a multigroup mean-structure analysis. There are 2 groups (boys and girls) who rated their mothers on a 25-item, 4-point (strongly agree, agree, etc.) rating scale. The items are very skewed. The scale consists of three sub-scales or factors. When the items are simply summed into subscale scores, girls (on average) rate their mothers more favorably than boys on all three subscales.

I have been perplexed by the results produced by different estimators when testing the latent means. In particular, with WLS (when factor loadings and thresholds are invariant between the groups), one of the latent means goes negative, indicating that girls (the second group) rate their mothers more negatively than boys on this factor, despite the raw data saying the opposite.

Any ideas on why or how this happens would be much appreciated.
 Linda K. Muthen posted on Tuesday, February 01, 2000 - 9:17 am
The only thing that comes to mind is that perhaps girls are not the second group. Do they have the higher code on the gender variable? If so, can you send your input or output and data so we can take a look at it and give you a better answer?
 Anonymous posted on Wednesday, June 01, 2005 - 2:21 pm
I don't know why there are differences between Mplus probit regression and Stata probit regression. Is it because the default Mplus probit is estimated by weighted least squares while the Stata probit is estimated by maximum likelihood?

If I specify "ANALYSIS: ESTIMATOR=ML;" then the coefficient and s.e. of the Mplus logistic regression are the same as the Stata logit regression. Can I get the same probit regression results in both Mplus and Stata?

Thanks!
 bmuthen posted on Wednesday, June 01, 2005 - 5:59 pm
The Mplus "Sample Statistics" (requesting sampstat in the output) gives ML probit regression with a single dependent variable - this should agree with STATA. These sample statistics represent the first stage of the Mplus weighted least squares estimator.
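As an illustrative sketch (hypothetical file and variable names), a single probit regression with the first-stage sample statistics requested might look like:

DATA:     FILE IS mydata.dat;
VARIABLE: NAMES ARE u x;
          CATEGORICAL ARE u;
MODEL:    u ON x;           ! probit regression under the default WLSMV estimator
OUTPUT:   SAMPSTAT;         ! the sample statistics are the first-stage ML probit estimates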
 Marleen de Moor posted on Monday, September 05, 2005 - 4:12 am
Dear Linda and Bengt,

I have a few questions concerning categorical data and the TYPE=TWOLEVEL option.

1. Is it true that Mplus uses logistic regression for all multilevel analyses (TYPE=TWOLEVEL) with a categorical outcome variable, because the available estimators are MLR, ML, and MLF, not WLSMV? Is it therefore correct to interpret the beta coefficient as the log odds ratio?

2. In my model I would like to correlate the errors of my two dependent variables, of which one is normal and the other categorical. Is that somehow possible with the option TYPE=TWOLEVEL, or is the only way out using the options TYPE=COMPLEX with ESTIMATOR=WLSMV?

3. Do you have any plans to make it possible to use censored data with TYPE=TWOLEVEL in Mplus in the future?

Thank you very much in advance!
Kind regards, Marleen de Moor
 BMuthen posted on Monday, September 05, 2005 - 2:46 pm
1. Yes.

2. You cannot use WITH to specify a residual covariance when one or more outcomes are categorical in a TWOLEVEL analysis with maximum likelihood. You could consider putting a factor behind the two variables, as shown in Example 7.16 (see the sketch below).

3. Yes.
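A minimal sketch of the factor approach in point 2 (placeholder variable names; see Example 7.16 in the user's guide for the full setup):

MODEL:
  %WITHIN%
  f BY y1@1 y2;   ! f carries the within-level residual association between y1 and y2
  f@1;            ! with the factor variance fixed at 1, the free y2 loading plays the role of the residual covariance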
 Sally Czaja posted on Thursday, October 12, 2006 - 1:58 pm
I am testing a path model with 1 independent variable predicting 2 intermediate variables which predict a dependent variable. Each of the endogenous variables has 2-4 control variables. One of the intermediate variables is dichotomous, which makes the default estimator WLSMV. I’ve read in the MPlus manual and discussion board that this gives a probit regression and that I can specify the estimator as ML to get logistic regression, which makes sense for the dichotomous DV.

But what kind of regression is done with the continuous DVs (i.e., what are these path coefficients and how are they to be interpreted)?

(continued in 2nd post)
Sally
 Sally Czaja posted on Thursday, October 12, 2006 - 2:10 pm
(continued from prior post re path model with 1 IV predicting 2 intermediate variables which predict a DV)

The path coefficients differ, sometimes substantially:
Path                                          WLSMV           ML
IV -> dichotomous variable                    .04 (n.s.)      .15 (p<.001)
dichotomous variable -> final DV              .28 (p<.001)    .53 (p<.001)
IV -> other intermediate variable             .10 (p<.01)     .07 (p<.05)
other intermediate variable -> final DV       .20 (p<.001)    .14 (p<.001)
What accounts for these differences? They are both more and less than the approximate 1.7 scale difference between logistic and probit. I would have thought the pattern of significance would be the same, even with different methods.

Finally, on what basis do I choose an estimator? The dichotomous variable has a 76/24 split, and the skewness and kurtosis statistics are n.s., which suggests it could be treated as normally distributed. But if I don't declare it categorical, the fit becomes awful.

I’d really appreciate your help in understanding this area.
Sally
 Linda K. Muthen posted on Thursday, October 12, 2006 - 2:40 pm
The regression coefficients for the continuous dependent variables are simple linear regression coefficients.

The coefficients will differ between WLSMV and ML because one is probit and the other is logit. They are on a different scale. You should be comparing the ratios.

I would choose WLSMV with a 76/24 split.
 Sally Czaja posted on Friday, October 13, 2006 - 12:46 pm
Hi Linda
Sorry, but what ratios are you referring to in your 2nd paragraph?

Could you elaborate on why I should use WLSMV? I'll have to explain this to someone else.

Thanks.
 Linda K. Muthen posted on Friday, October 13, 2006 - 2:47 pm
The ratio of a parameter estimate to its standard error. It is the third column of the results.

It seems you want residual covariances. You can't have more than four with maximum likelihood because a model with four dimensions of integration is probably the maximum you can estimate. This is why I recommended WLSMV.
 Sally Czaja posted on Monday, October 16, 2006 - 12:37 pm
Hi Linda
Thank you for your quick responses last week. I have 2 more related questions:

If, as I understand, the coefficients for predictors of continuous DVs are simple linear regression coef. regardless of the estimator (WLSMV or ML), shouldn't they be identical? For 2 paths, I get .20 in WLSMV vs .14 in MLR (both p<.001); and -.13 (p<.01) in WLSMV vs -.05 (p<.05) in MLR (and smaller differences on other paths).

Also, for a predictor of the dichotomous variable, MLR gives an OR of 2.26 and est./SE of 4.92, while WLSMV gives an OR of 2.55 (using exp(Estimate*1.7)) with est./SE of 2.98. Should they be this far apart?

Thanks for your help.
 Linda K. Muthen posted on Tuesday, October 17, 2006 - 7:46 am
They should be the same. You would need to send me your inputs, data, outputs and license number to support@statmodel.com for me to see why they are not.

Odds ratios cannot be computed for probit regression coefficients.
 Ramzi Mabsout posted on Wednesday, October 15, 2008 - 4:39 am
Hi

Since version 5, I see that WLSMV can be used with TYPE=TWOLEVEL. Are the loadings from a CFA with categorical variables and no covariates probit coefficients?

Why can I not conduct a multiple-group analysis with a two-level categorical CFA and WLSMV? Is my only alternative to use numerical integration in that case?

Thank you very much.
 Linda K. Muthen posted on Wednesday, October 15, 2008 - 10:06 am
Your only option in this case is numerical integration.
 Ramzi Mabsout posted on Wednesday, October 15, 2008 - 10:42 am
I also cannot conduct the analysis with integration: I am told to use KNOWNCLASS and MIXTURE. Why?
 Linda K. Muthen posted on Wednesday, October 15, 2008 - 11:27 am
When numerical integration is required, multiple-group analysis uses the KNOWNCLASS option and TYPE=MIXTURE.
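As an illustrative sketch (variable and class names are placeholders; the two-level and model-specific parts are omitted for brevity), the KNOWNCLASS setup looks like:

VARIABLE: NAMES ARE u1-u6 g;
          CATEGORICAL ARE u1-u6;
          CLASSES = cg (2);
          KNOWNCLASS = cg (g = 1 g = 2);   ! "groups" defined by the observed variable g
ANALYSIS: TYPE = MIXTURE;
          ALGORITHM = INTEGRATION;         ! ML/MLR with numerical integration
MODEL:    %OVERALL%
          f BY u1-u6;

Group-specific parameters would then be specified in class-specific sections such as %cg#2%.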
 Richard Rivera posted on Tuesday, June 16, 2009 - 8:13 pm
I am conducting multiple logistic regression on a binary outcome. I have missing data, so I am allowing the default to use missing data theory, and I also included INTEGRATION=MONTECARLO;.

I would like to get unbiased estimates of confidence intervals and I know that I can’t use bootstrap CI when I am using the montecarlo integration.

For logistic regression, there are two estimator options (ML and MLR). For both of these, I asked for confidence intervals in the output.

When I use ESTIMATOR = MLR I get the same point estimates as when I use ESTIMATOR = ML. So I assume that I get log odds (or odds ratios) with either ML estimator.

However, I get different standard errors. Which estimator should I use?
 Richard Rivera posted on Tuesday, June 16, 2009 - 8:25 pm
What I meant to ask:

When conducting multiple logistic regression with missing data, which estimation procedure would give me the least biased estimates of the standard errors (or confidence intervals)?

Thanks
 Paul Silvia posted on Wednesday, June 17, 2009 - 5:59 am
When ML and MLR diverge in their SE estimates, MLR is generally more trustworthy. Broadly, though, this is often a sign to explore residuals, distributions, and possible influential cases.
 Cecily Na posted on Monday, February 07, 2011 - 3:07 pm
Hi Professors,
I am new to Mplus. I used the syntax MODEL = BASIC; ESTIMATOR = ML to generate a covariance matrix in Mplus. It was not the same as the one produced in SPSS. What is the reason (assuming I treated all variables as continuous)?

Also, when can I use ML? Can I use it for ordered categorical variables?

Thanks!
 Linda K. Muthen posted on Monday, February 07, 2011 - 3:54 pm
It is likely that the sample sizes are not the same. If they are, you may be reading the data incorrectly and should send the problem along with your license number to support@statmodel.com.

Yes, ML can be used for ordered categorical data. See the ESTIMATOR option in the user's guide where there is a table that shows the cases when each estimator can be used.
 burak aydin posted on Tuesday, May 10, 2011 - 4:00 pm
Hi,
An article named "propensity score adjustment for multiple groups SEM" (Hoshino, Kurata & Shigemasu, 2006) uses a weighted M estimator, where the weights are propensity scores.
I wonder whether the WLS estimator does the same job?
Thanks.
 Bengt O. Muthen posted on Tuesday, May 10, 2011 - 5:23 pm
The Mplus WLS estimator is not based on propensity scores. M estimators are sometimes connected with GEE. The connection between GEE and WLSM is shown in

Muthén, B., du Toit, S.H.C. & Spisic, D. (1997). Robust inference using weighted least squares and quadratic estimating equations in latent variable modeling with categorical and continuous outcomes. Unpublished technical report.

which is on our web site under Papers, SEM.
 Bengt O. Muthen posted on Wednesday, May 11, 2011 - 3:33 pm
Perhaps this can be done using weighted ML, which we call quasi-ML in some of Asparouhov's writing on complex survey data analysis on our web site?
 burak aydin posted on Wednesday, May 11, 2011 - 4:06 pm
I did some further searching and figured out that a residual-based GLS estimator is what I need. I know Mplus has the traditional GLS estimator. Is there a way to modify the GLS estimator into a residual-based GLS estimator? (Yuan & Bentler, 1997, Mean and covariance structure analysis: theoretical and practical improvements)

Furthermore, I'd like to know whether there is an estimator that is robust to both non-normality and outliers.
Thanks.
 Bengt O. Muthen posted on Thursday, May 12, 2011 - 9:52 am
Don't know the answer to that. The Mplus GLS does not allow weights.

Outlier detection is available in Mplus - see the UG. MLR is in principle robust to model mis-specification, but how well that works with outliers I'm not sure of.
 Heike B. posted on Thursday, October 20, 2011 - 3:28 am
Dear Drs. Muthen,

I intend to build a manifest path model containing two exogenous variables and 5 endogenous variables. Three of them are mediators.

The observed variables are means of four-point Likert scales (two variables are actually single items). That's why I wanted to treat the data as ordinal.

My sample is small (230 cases), and the data are skewed and not normally distributed.

I tried to estimate the model using WLSMV; however, I would now like to add an interaction.

Besides, one endogenous variable ended up with eleven categories, so Mplus did not allow me to declare it as categorical.

Given all this -

1. which estimator would you recommend?

2. if an ML-based estimator is recommended, should I declare all my variables as continuous?

Many thanks in advance.
Heike
 Linda K. Muthen posted on Thursday, October 20, 2011 - 2:00 pm
If the original Likert variables have floor or ceiling effects, I would not recommend summing them.

I think you want an interaction between two observed variables. You can create that as the product of the two variables using the DEFINE command.

Both weighted least squares and maximum likelihood estimation can be used with categorical dependent variables.
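A minimal sketch of the DEFINE approach mentioned above (placeholder variable names; note that a variable created in DEFINE goes at the end of USEVARIABLES):

VARIABLE: NAMES ARE y x1 x2;
          USEVARIABLES ARE y x1 x2 x1x2;   ! variables created in DEFINE are listed last
DEFINE:   x1x2 = x1*x2;                    ! product of the two observed variables
MODEL:    y ON x1 x2 x1x2;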
 Miho Tanaka posted on Monday, February 13, 2012 - 11:01 am
Hi,

I have been working on an SEM for my dissertation. The primary outcome in my model is binary (whether the participant had a hepatitis B screening or not). The predictors are three latent variables measured by non-normally distributed continuous factor indicators. By default, Mplus uses the WLSMV estimator for both the structural and measurement parts. I would like to know what happens to the measurement model if I keep the default estimator (WLSMV), that is, when WLSMV is applied to non-normally distributed continuous factor indicators. For the CFA (the measurement part only), I might choose MLR rather than WLSMV. Is there any significant difference between these two estimators? I understand both estimators are robust to non-normality.

Thanks for your advice.
 Linda K. Muthen posted on Wednesday, February 15, 2012 - 10:25 am
WLSMV is not robust to non-normality of continuous variables. I would use MLR.
 Owis Eilayyan posted on Tuesday, March 20, 2012 - 5:08 pm
Hello,

I am doing a path analysis. I have 5 intermediate continuous variables and one dependent variable.

I am not sure which type of estimator I should use.

Thanks
Owis
 Bengt O. Muthen posted on Tuesday, March 20, 2012 - 6:32 pm
I would use ML or MLR.
 Owis Eilayyan posted on Tuesday, March 20, 2012 - 9:06 pm
Hi again,

Thanks for your response. I used MLR and I got this error message:

"*** FATAL ERROR
THIS MODEL CAN BE DONE ONLY WITH MONTECARLO INTEGRATION."

Is that because I have missing values?

Thanks
Owis
 Linda K. Muthen posted on Wednesday, March 21, 2012 - 7:08 am
Yes, you must have missing values on a mediator. Add INTEGRATION=MONTECARLO; to the ANALYSIS command.
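For example, the ANALYSIS command might then read (keeping the MLR estimator):

ANALYSIS: ESTIMATOR = MLR;
          INTEGRATION = MONTECARLO;   ! Monte Carlo numerical integration for the missing mediator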
 Owis Eilayyan posted on Wednesday, March 21, 2012 - 7:11 am
OK, if I remove the missing values, can I use the MLR or ML estimator? I don't want to use WLSMV.

Thanks
Owis
 Bengt O. Muthen posted on Wednesday, March 21, 2012 - 7:39 am
When you add Integration=MonteCarlo you are still doing ML/MLR, it's just that you specify a certain algorithm for doing it.

Your dependent variable must have been categorical or count, in which case missing on mediators leads to numerical integration with MonteCarlo when using the ML or MLR estimator.
 Owis Eilayyan posted on Wednesday, March 21, 2012 - 7:46 am
Actually my independent variables have these missing values.

Thanks a lot
Owis
 Owis Eilayyan posted on Wednesday, March 21, 2012 - 10:01 am
Hello again,

I used INTEGRATION=MONTECARLO and the ML/MLR estimator, but I did not get a chi-square value or RMSEA in the output. Is that normal? Also, I got different results (i.e., a different direction of relationships between variables) with ML/MLR versus WLSMV!
 Linda K. Muthen posted on Wednesday, March 21, 2012 - 11:00 am
When means, variances, and covariances are not sufficient statistics for model estimation, chi-square and related fit statistics are not available.

Please send the two outputs and your license number to support@statmodel.com.
 Owis Eilayyan posted on Wednesday, March 21, 2012 - 11:18 am
When I use WLSMV estimation, I get the fit statistics.

I am using my supervisor's program, and neither of us knows the license number. Where is it usually written?

Thanks
Owis
 Linda K. Muthen posted on Wednesday, March 21, 2012 - 1:08 pm
With WLSMV, the sufficient statistics for model estimation are thresholds and correlations.

You can log in to your account on the website and see it.
 Owis Eilayyan posted on Wednesday, March 21, 2012 - 1:17 pm
Sorry for bothering you,

But does that mean that with WLSMV I get a wrong result?

I got a good-fitting model with WLSMV!

Thanks
Owis
 Linda K. Muthen posted on Wednesday, March 21, 2012 - 3:53 pm
We don't make a habit of giving wrong results. WLSMV gives chi-square and related fit statistics.
 Owis Eilayyan posted on Wednesday, March 21, 2012 - 4:18 pm
One more question,
So with WLSMV we get chi-square and related fit statistics, while with ML/MLR we don't. Is that true?
Also, if I use ML or WLSMV, should I get similar results? That is what I understood from your video!

Thanks
Owis
 Bengt O. Muthen posted on Wednesday, March 21, 2012 - 4:19 pm
To understand the different aspects of testing model fit in this situation, see

Muthén, B. (1993). Goodness of fit with categorical and other non-normal variables. In K. A. Bollen, & J. S. Long (Eds.), Testing Structural Equation Models (pp. 205-243). Newbury Park, CA: Sage

which is paper #45 at

http://pages.gseis.ucla.edu/faculty/muthen/full_paper_list.htm

This chapter makes the distinction between testing the underlying structure (as WLSMV does) versus testing the model against the data (which isn't always feasible as presumably in your case).
 Bengt O. Muthen posted on Wednesday, March 21, 2012 - 4:24 pm
ML and WLSMV tend to give similar results when the missing data are MCAR (missing completely at random) or MAR as a function of covariates.
 Mauricio Garnier-Villarreal posted on Thursday, April 19, 2012 - 8:04 am
Hi

I am running a simulation study with categorical indicators using the BAYES estimator. I have heard that Mplus uses two methods for handling categorical variables: tetrachoric correlations and direct ML. In the specific case of the BAYES estimator, which method does Mplus use?

thank you
 Bengt O. Muthen posted on Thursday, April 19, 2012 - 10:57 am
Bayes does not use tetrachorics and does not use ML. But like ML, Bayes is a "full-information" estimator that uses all available data in an optimal way. It is equivalent to ML in its missing data handling. Bayes is an estimator in its own right. So Mplus offers 3 major estimators: WLSMV (which builds on tetrachorics/polychorics), ML, and Bayes.
 Owis Eilayyan posted on Monday, April 30, 2012 - 7:16 pm
Hello,

I would like to ask a technical question about Mplus. When I use the WLSMV estimator, I get chi-square, RMSEA, and CFI values automatically.
My question is: can I get chi-square, RMSEA, and CFI values with the ML estimator?

Thanks
Owis
 Linda K. Muthen posted on Tuesday, May 01, 2012 - 10:31 am
With maximum likelihood and categorical variables, means, variances, and covariances are not sufficient statistics for model estimation. Because of this, chi-square and related fit statistics are not available.
 Gabriel Nagy posted on Friday, March 01, 2013 - 11:29 am
Dear all,
I have some questions regarding the ODLL algorithm implemented in Mplus.
I’m running a large IRT model including many nonlinear parameter constraints (around 700). ML estimation on basis of the EM algorithm is no longer feasible and the constraints are not supported in the Bayes framework. I’ve tried out different algorithms and found out that ODLL (in combination with MLF) works well in reasonable time. Unfortunately, I was not able to find any documentation of the ODLL algorithm. I only found out that ODLL optimizes the observed data likelihood directly.

Is ODLL something like JML (Joint Maximum Likelihood)?

Is ODLL an iterative algorithm (Tech 8 doesn’t report an iteration history for ODLL)?

What is ODLL exactly doing? Are there any references about this algorithm that might be cited in a manuscript?

What about the performance of ODLL relative to other algorithms, such as EM? I suspect that there might be some reasons that the much slower EM algorithm is routinely used in the IRT framework.

Thank you for your help!
 Tihomir Asparouhov posted on Monday, March 04, 2013 - 11:00 am
ODLL stands for Observed data log-likelihood. The algorithm optimizes the log-likelihood using the Quasi-Newton method.

http://en.wikipedia.org/wiki/Quasi-Newton_method

You can look at Tech5 for the iterations.

Use the Mplus manual as a reference.

My experience is that in most cases (but definitely not all cases) the default EMA algorithm is faster. EMA actually contains ODLL within it, and ODLL is occasionally deployed as part of it.

My suggestion is to spend time simplifying your model constraints. There are 3 types of constraints, listed in order of complexity:

1) New parameters = function of model parameters

2) Dependent parameters = function of independent parameters

3) anything else

Try to use 1 and 2 as much as you can instead of 3. Model constraints can be written in many different ways and using the most optimal way can improve the estimation dramatically.
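As a small sketch of the three styles (placeholder parameter labels lam1-lam3 attached to factor loadings):

MODEL:
  f BY y1* (lam1)
       y2  (lam2)
       y3  (lam3);
  f@1;
MODEL CONSTRAINT:
  NEW(ratio);
  ratio = lam2 / lam1;    ! type 1: a new parameter defined as a function of model parameters
  lam3 = 2 * lam1;        ! type 2: a dependent parameter expressed as a function of other parameters
  ! type 3 (avoid when possible): an implicit equation such as 0 = lam1*lam2 - 1;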
 Anna posted on Sunday, June 02, 2013 - 11:21 pm
Hello,

I have a model with five observed variables, A, B, C, D, and E. E is categorical. The model proposes an indirect link, A->C->D->E, while B moderates the A->C path. There are missing values on A, B, C, D. Sample size is around 250.

I would like to know which estimator is more appropriate for testing this kind of model: categorical outcome, aims to test moderated mediation effect, has missing values.

I have tried WLSMV, MLR, and BAYES. The results from these three estimators are actually comparable, and the fit indices under WLSMV and the Bayesian PPC and PSR indicate good fit. I tend to favor Bayesian estimation because it handles missing data well and does not require normality. But I am not sure to what extent it is favored over the other two estimators in my situation. (I do not have specific priors in mind.)

Thank you very much for your help!
 Linda K. Muthen posted on Monday, June 03, 2013 - 10:55 am
Bayes and maximum likelihood handle missing data in the same way. I would choose them over WLSMV if there is a lot of missing data. You can use non-informative priors in Bayes.
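A minimal sketch of the Bayes setup with the default (diffuse) priors, using the posting's A -> C -> D -> E chain with B moderating the A -> C path (the interaction variable ab is a hypothetical name):

VARIABLE: NAMES ARE a b c d e;
          USEVARIABLES ARE a b c d e ab;   ! the DEFINE variable is listed last
          CATEGORICAL ARE e;
DEFINE:   ab = a*b;                        ! observed-variable interaction for the moderation
ANALYSIS: ESTIMATOR = BAYES;               ! default priors are diffuse (non-informative)
MODEL:    c ON a b ab;
          d ON c;
          e ON d;                          ! probit link for the binary e under Bayes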
 Anna posted on Monday, June 03, 2013 - 11:44 am
Dear Linda,

Thank you!

I would like to ask more about these estimators. Beside the difference in handling missing data, are there any other concerns in choosing among these methods?

1. Is WLSMV robust for models with interaction terms and non-normal distributions of indirect effects (e.g., the a*b term)? I read the Muthen, du Toit, and Spisic (1997) technical report, and I think that WLSMV often underestimates SEs when the sample is small and skewed.

2. I also wonder if I should correlate the IVs with the interaction term (and perhaps correlate the exogenous covariates) because WLSMV does not automatically do so in the sequential modeling.

3. For MLR, since bootstrapping is not allowed with numerical integration, will this be a big deal for estimation of indirect effects with nonnormal distribution?

Thanks!
 Linda K. Muthen posted on Wednesday, June 05, 2013 - 1:31 pm
1. You can use bootstrap with WLSMV.

2. The model is estimated conditioned on the exogenous variables. Their means, variances, and covariances should not be mentioned in the MODEL command. To obtain these values, do a TYPE=BASIC with no MODEL command.

3. If they have a non-normal distribution, this will not be taken into account.
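For point 2, a minimal sketch of the separate run (hypothetical file and variable names):

DATA:     FILE IS mydata.dat;
VARIABLE: NAMES ARE y m x1 x2 int;
ANALYSIS: TYPE = BASIC;     ! no MODEL command; prints sample means, variances, and covariances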
 db40 posted on Friday, August 29, 2014 - 6:59 am
Hi Linda,

When estimating a model using the Bayes estimator with outcome variables specified as binary, are the parameters linear, logit, or something else?
 Linda K. Muthen posted on Friday, August 29, 2014 - 8:59 am
Probit.
 db40 posted on Sunday, August 31, 2014 - 5:21 pm
Linda thanks for clearing that up.

Might I ask whether, once the probit coefficient is standardized, it gives estimates comparable to logistic?
 Linda K. Muthen posted on Sunday, August 31, 2014 - 6:28 pm
No. You should listen to our Topic 2 course video on the website where probit and logistic regression are discussed.
 Alyssa Thomas posted on Wednesday, September 24, 2014 - 8:30 pm
Hi Dr Muthen-
Sorry to have another question but I wanted to confirm my analysis method as I finally write up my results.

In my SEQ model my observed variables are continuous but my dependent/outcome is categorical (binary). I read that WLSMV is the preferred option for analysis of categorical outcomes and this has successfully provided me with the different measures of model fit.

However, I want to compare several different models. These models all have different variables, making a chi-square difference test not an option. Normally I would use AIC, as it takes model complexity into account, but AIC is not available under WLSMV.

I understand that AIC is available using ML (although fit indices would not be), but this option is not available to select when I set up my analysis. It seems that neither method provides all of the information I would normally use for model comparison.

Could you please advise how I might best compare models given the lack of AIC? I have more than one model that meets the criteria for a good fit. I could then consider r-squared and account for the number of variables in the model. Would that suffice? I just need to be able to justify my choice of model.

thanks in advance for your help
 Bengt O. Muthen posted on Thursday, September 25, 2014 - 4:27 pm
I don't know if by "these models all have different variables" you mean covariates or DVs. If the latter, AIC cannot be used because the different models have AIC in different metrics. If the former, for WLSMV you can use the largest set of covariates for all models and fix to zero the slopes of the covariates not included in certain models.
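A small sketch of the second strategy (placeholder names), where one model omits x3 by fixing its slope at zero while keeping the full covariate set:

MODEL: u ON x1 x2 x3@0;   ! x3 remains in the analysis, but its slope is fixed at zero in this model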
 Joost de Moor posted on Tuesday, May 05, 2015 - 6:12 am
Dear Professors,

I am running a SEM on a binary coded DV using the MLR estimator. Am I correct in assuming that if I do not specify that this is a categorical variable, MPLUS will treat it as continuous by default, providing me with OLS estimates?

Thanks a lot!
 Linda K. Muthen posted on Tuesday, May 05, 2015 - 8:31 am
Yes.
 Marcel Paulssen posted on Monday, June 22, 2015 - 3:40 pm
Dear All,

After a long absence I am back working with Mplus and have some probably fairly simple questions.

I have a model with one binary categorical DV and many continuous latent variables (IVs and DVs). The sample size is N = 127, and I have some missing values in the data (up to 3% on the different indicators).

Which estimator should I use? I remember that WLS is quite demanding with respect to sample size.

What is the Mplus syntax for such a model? Are there examples here or on the net?

Can anybody recommend articles/books on the topic?

Any help is highly appreciated!

Best

Marcel
 Linda K. Muthen posted on Monday, June 22, 2015 - 8:36 pm
See the FAQ on our website called Estimator choices with categorical outcomes.
 Marcel Paulssen posted on Tuesday, June 23, 2015 - 6:20 am
Thank you very much!
 Mahmoud A. Moussa posted on Sunday, December 06, 2015 - 11:15 am
Dear Dr. Muthen
I am testing a SEM with interaction effects.

the output said that:
THE ESTIMATED COVARIANCE MATRIX COULD NOT BE INVERTED.
COMPUTATION COULD NOT BE COMPLETED IN ITERATION 2.
CHANGE YOUR MODEL AND/OR STARTING VALUES.



THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO AN ERROR IN THE
COMPUTATION. CHANGE YOUR MODEL AND/OR STARTING VALUES.


the input was:
TITLE: this is an example of a SEM with
continuous factor indicators and an
interaction between two latent variables

DATA: FILE IS C:\Users\Moussa\Desktop\ex.dat;
VARIABLE: NAMES ARE a1-a13;
ANALYSIS: TYPE = RANDOM;
ALGORITHM = INTEGRATION;
ESTIMATOR = MLR;
MODEL: A BY a3-a6;
M BY a1 a2;
S BY a7-a9;
R BY a10 a11;
D BY a12 a13;
A ON R D S;
S ON M R;
R ON M;
RxD | R XWITH D;
A on RxD;

OUTPUT: TECH1 TECH8;
 Bengt O. Muthen posted on Sunday, December 06, 2015 - 5:57 pm
A general piece of advice is to build your model in small steps to see where things go wrong. To diagnose your particular analysis we need to see your input, output, and data - please send them to Support along with your license number.
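For example, a first step might be to fit only the measurement part, dropping TYPE=RANDOM and the XWITH statement from the input above:

ANALYSIS: ESTIMATOR = MLR;
MODEL:    A BY a3-a6;
          M BY a1 a2;
          S BY a7-a9;
          R BY a10 a11;
          D BY a12 a13;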
 Mahmoud A. Moussa posted on Wednesday, December 09, 2015 - 7:58 pm
Dear Dr. Linda
Can I do the interaction effects in SEM with the latent variables without the indices of it?
 Bengt O. Muthen posted on Thursday, December 10, 2015 - 2:44 pm
What do you mean by "without the indices of it"? If you mean that there are no fit indices when you use XWITH, that is true and is discussed in the Latent Variable Interaction FAQ.
 Luis Garrido posted on Friday, February 10, 2017 - 11:03 am
Dear All,

I am wondering if there is any particular reason why ULS is not implemented with continuous variables.

Thanks,

Luis
 Bengt O. Muthen posted on Friday, February 10, 2017 - 4:03 pm
ULS is scale dependent so analysis with continuous variables in different metrics is influenced not only by model fit but by how you scale your variables.
 samah Zakaria Ahmed posted on Tuesday, February 21, 2017 - 3:51 pm
What is the method for maximizing the likelihood in a latent class model?
Is it the EM algorithm, Newton-Raphson, or both together?
 Bengt O. Muthen posted on Wednesday, February 22, 2017 - 12:30 pm
EM, and if needed also NR and Fisher scoring.
 Filipa Alexandra da Costa Rico Cala posted on Wednesday, March 08, 2017 - 1:23 pm
Dear Linda,

For my PhD research, I am currently working on a study whose outcome variable is binary and whose independent variables are continuous. For evaluating model fit, I used the WLSMV estimator, because it is the default estimator in Mplus when we have binary or dichotomous outcomes. After I ran the model in Mplus, the CFI was 0.967, the TLI was 0.956, and the RMSEA was 0.034, but the WRMR was 1.031. The latter is not in accordance with the cut-offs for this index. Therefore, could you please tell me whether I can use the ML estimator to see if the model fit improves? If I use this estimator, what would be the justification for using it with categorical outcomes? How could I explain that I was using ML with categorical outcomes? Is there any literature you would suggest which says that we can use ML with categorical outcomes, and which I can use to justify my choice of this estimator with categorical outcomes that do not have a normal distribution? Many thanks in advance for your help,
 Bengt O. Muthen posted on Wednesday, March 08, 2017 - 6:13 pm
I would recommend that you ignore the WRMR.

See our FAQ:

Estimator choices with categorical outcomes
 Filipa Alexandra da Costa Rico Cala posted on Thursday, March 09, 2017 - 4:05 am
Thank you very much Dr. Muthen.
 Alice posted on Tuesday, October 03, 2017 - 5:49 am
I am running a SEM in which the dependent variable in the structural model is a dummy variable. The observed variables in the measurement model are categorical.

In the User's Guide, under "Mplus Output" (page 718 in chapter 18 in my version), it says that "For binary and ordered categorical observed dependent variables, the regression coefficients produced for BY and ON statements using a weighted least squares estimator such as WLSMV are probit regression coefficients."

I am confused about how the weighted least squares estimator can produce probit coefficients for the part related to the structural model. I would have assumed that probit coefficients require ML.

I looked into the technical appendices to see if that would answer my question. On page 18, it says that "There are three steps to the model estimation using wls. First, [...] when all variables in y are categorical, s is computed by a set of p probit regressions of each pair of y variables on all x variables." I wonder if the ML estimator is used in this first step - or how do you obtain the probit coefficients in this step?
 Bengt O. Muthen posted on Tuesday, October 03, 2017 - 11:40 am
A model with probit regressions does not require ML; it can also be estimated with the WLSMV and Bayes estimators.

But you are right that the first step in WLSMV does use ML estimation of a probit regression. The WLSMV step is when the model is fitted to those sample probits. The model has some probit relationships.

Perhaps Topic 2 of our short course videos and handouts is helpful to you. You can also read my 1983 and 1984 articles on this - see our website under Papers, Structural Equation Modeling (at the bottom).
 Alice posted on Wednesday, October 04, 2017 - 5:49 am
Thank you for the answer and referrals to the other helpful materials. I have read the papers and am still slightly confused.

Going back to the technical appendices, in step 1 it says, "s is computed by a set of p probit regressions on each pair of y variables on all x variables." The y and x stem from the measurement model. Am I correct that this step also applies to the structural model when the dependent variable in the structural model is a dummy variable?

In the WLSMV step, are the coefficients for the structural model re-estimated - such that the estimates from the first step are to be interpreted as a starting point for the third step?
 Bengt O. Muthen posted on Wednesday, October 04, 2017 - 3:12 pm
Q1: When you say "structural model" do you mean that your dummy DV is latent?

Q2: No, the estimates from the first step should be seen as sample statistics to which the model is fitted. The first step does not concern parameters of the structural model.
 Alice posted on Thursday, October 05, 2017 - 2:27 am
Thank you for your quick reply.

To clarify, say the SEM model is:

VARIABLE:
NAMES ARE y x1 x2 x3 x4;
CATEGORICAL ARE y x1 x2 x3;

MODEL:
f1 by x1* x2 x3;
y on f1 x4;
f1@1;

where:
- y is a dummy variable for whether employed (=1) or unemployed(=0).

I want to make sure that the structural model, "y on f1 x4;" is estimated as a probit - such that I get probit coefficients in that part and not the coefficients from (what economists call) a linear probability model, LPM.

Q1: The first question was whether the first step in the WLSMV also applies to the structural model in this example? I take it that the answer is "no" given your answer to Q2 is that the first step does not concern parameters of the structural model.

Q2: Then, in which step is the structural model in the example estimated when using the WLSMV estimator?

Q3: Finally, how do I obtain probit estimates for the structural model in the example - would I obtain this with the WLSMV estimator for the SEM model or does this require the ML estimator for the SEM model? This is basically what I am after.
 Bengt O. Muthen posted on Friday, October 06, 2017 - 6:05 pm
Yes, y on f1 x4; is estimated as a probit regression because y is declared as categorical.

Q1-Q2: Using your example, the first step of WLSMV is to do probit regressions of y, x1-x3 on x4 (without your model structure applied). This gives 4 probit slopes and 4*3/2=6 probit correlations. These 10 quantities are the sample statistics (I am not bothering to count the thresholds/intercepts). Your model has 5 parameters, 3 loadings and 2 slopes. Those 5 parameters are estimated in the last step of WLSMV by trying to make the model-implied 4 probit slopes and 6 probit correlations close to the corresponding sample statistics. This is the essence of Muthen (1983, 1984).

Q3: You can estimate this model also with ML and get the model estimates without first estimating sample statistics - just working with raw data. ML with link=probit will give very similar model estimates compared to WLSMV. If the multi-step WLSMV procedure still confuses you after our discussion and you reading my papers, perhaps you should stay with ML.
 Tibor Zin posted on Wednesday, October 03, 2018 - 4:45 am
Hello,

I would like to ask a question about the advantages of ML in comparison to the Bayesian estimator.

I am conducting a longitudinal path analysis, which includes three independent variables: change in X, change in Y, and Z1; and one dependent variable, Z2. I estimated this model using the ML estimator and everything went fine. Approximately half of the observations were missing at Time 2 (i.e., change in X, change in Y, and Z2). Thus, I estimated the variances of the variables in the model.

In an additional step, I included a dichotomous independent variable W, which was very skewed: 1200 vs. 200 observations. Afterward, I estimated the effects of the interactions between 1) X and W and 2) Y and W on Z2. This time, the results depended on the estimator. If I used ML, the interactions turned out to be non-significant, although the decomposed effects showed that X influenced Z2 when W was 0, not when W was 1. However, when I used the Bayesian estimator, the interaction turned out to be almost significant. I know that this analysis is very simple, but could you please help me decide which estimator is better in this case? I could not find convincing guidance on which estimator to choose. Is one estimator more suitable than the other?
 Bengt O. Muthen posted on Wednesday, October 03, 2018 - 5:39 pm
Interactions can have a non-symmetric sampling distribution for which non-symmetric CIs are preferred. Bayes gives this automatically but with ML you have to use bootstrapping. If you didn't use bootstrapping with ML, I would trust the Bayes results more.
 Natalie Riedel posted on Tuesday, March 12, 2019 - 9:14 am
Dear Drs. Muthén,

Path analyses and mplus are still new to me and I am asking for advice in this forum.

My sample size is > 1,500. Currently I am trying to set up a manifest path model involving
- a binary final dependent variable,
- a binary mediating variable (though I am not interested in the quantification of indirect effects here),
- two ordinal Likert-scaled variables (the sum of two 6-point items each, or just a single 6-point Likert item) predicting the final dependent binary variable as well as the binary mediating variable.
These ordinal variables are
o theoretically correlated with each other and
o predicted by other exogenous variables.
- These exogenous variables (covariates) are binary (some are dummy variables) or ordinal 6-point Likert-scaled.

Given the different scale levels and the non-normality of the ordinal variables, MLR (or ML with bootstrapping?) might be the easiest estimator. ORs are easier for my audience to grasp than coefficients from the probit function. However, the output only reveals information on the unexplained (residual) variance of the two ordinal variables (treated as continuous by the program).
There are no residual variances for the binary variables, and there are no overall model fit values (except the measures used for model comparisons; inclusion of covariates led to a slight reduction of AIC and sample size adjusted BIC).
 Natalie Riedel posted on Tuesday, March 12, 2019 - 9:15 am
For this reason, I re-ran the analyses using WLSMV in order to get a better impression. Now, for the probit results, I am happy with the R-squares displayed for the binary variables as well. However, depending on the "adjustment set", the inclusion of covariates (partly) destroyed model fit in terms of RMSEA, CFI/TLI, and SRMR, and the chi-square test of model fit easily turned from the desired non-significance into significance. Could this indicate "over-adjustment"?
My questions are:
1) How can I make sure that the MLR-model is ok? As reviewers, which model fit information would you expect using MLR?
2) From one posting on WLSMV, I gathered that the chi-square test of model fit is only relevant for comparisons between nested models (which is not my concern here). Is this true? Should I rely on RMSEA and CFI for binary outcomes, as recommended in Yu's 2002 dissertation?
3) Would you recommend building the model with the WLSMV estimator, and once RMSEA & CFI/TLI are ok, re-running the model with MLR?
4) In addition, for sensitivity analyses, I also dichotomized the ordinal variables (which, as written above, depend on other exogenous variables). Once they are declared as categorical variables, Mplus requires TYPE=MIXTURE and PARAMETERIZATION=RESCOV, but this does not work out for manifest path analysis.
Many thanks.
 Bengt O. Muthen posted on Wednesday, March 13, 2019 - 4:04 pm
There is one key consideration when you have a binary variable that acts as both DV and IV in the model (a "mediator"). When it is an IV, that is, when it predicts another variable, you can use either (1) the observed binary variable or (2) the underlying continuous latent response variable M*. The choice is substantive rather than statistical. With ML, only (1) can be done. With WLSMV, only (2) can be done. With Bayes, either one can be done. ML doesn't give an overall fit statistic, while WLSMV and Bayes do.

Regarding the poor fit when covariates are added, perhaps direct effects to later outcomes in the chain are not included.

ML fit can be judged by TECH10. WLSMV and Bayes overall fit is as in regular SEM model checking: testing your model (H0) against a completely unrestricted model (H1). H0 is indeed nested within H1.

I don't see why mixtures would be needed - for this we would need to see your full output.

Also, we ask that postings be limited to one window; for messages that need to be longer, please send to Support along with your license number.
 Jingjing Li posted on Friday, November 01, 2019 - 8:05 am
Hi, I have a mediation analysis. The outcome Y, mediator M, exposure X, and covariate Z are all binary.
I want to use the ML estimator because the logit scale is easier to interpret. I encounter problems with the indirect effect of X on Y. Here is my Mplus code:

VARIABLE:
NAMES = Y M X Z;
USEVARIABLES = Y M X Z;
CATEGORICAL = Y M X;

ANALYSIS:
TYPE = general;
ESTIMATOR = ML;

MODEL:
Y ON M Z;
M ON X Z;
X ON Z;

MODEL INDIRECT:
Y IND M X;

Then the indirect effects I got are all 0, like this:

Two-Tailed
Estimate S.E. Est./S.E. P-Value

Tot natural IE 0.000 0.000 -0.904 0.366
Pure natural DE 0.000 0.000 999.000 0.000
Total effect 0.000 0.000 -0.904 0.366

Could you please help me with this problem? Thanks!
 Bengt O. Muthen posted on Friday, November 01, 2019 - 5:21 pm
Don't include X on the Categorical list - it needs to be treated as a covariate.
 Jingjing Li posted on Friday, November 01, 2019 - 6:55 pm
Hi Dr. Muthen, thanks for your reply. I have a following up question.

1. If I treat X as a covariate and don't include it on the CATEGORICAL list, how should I deal with it when it is also a dependent variable in this statement: X ON Z?

Thanks!
 Bengt O. Muthen posted on Saturday, November 02, 2019 - 5:21 pm
I would delete X ON Z. If X is your "exposure variable", then Z is just a control variable (having X ON Z would be strange). Deleting X ON Z, X and Z are then allowed to covary freely, but you don't estimate their distribution in the modeling.
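Putting both suggestions together, a sketch of the revised input (same variable names as the posting above):

VARIABLE: NAMES = Y M X Z;
          USEVARIABLES = Y M X Z;
          CATEGORICAL = Y M;      ! X is a covariate, so it is not declared categorical
ANALYSIS: ESTIMATOR = ML;
MODEL:    Y ON M Z;
          M ON X Z;               ! no X ON Z statement; X and Z covary freely
MODEL INDIRECT:
          Y IND M X;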
 Jingjing Li posted on Sunday, November 03, 2019 - 8:58 am
Thanks for your recommendation.
I have two more questions.
1. When using the ML estimator, is the odds ratio of the indirect effect calculated as exp(indirect effect)?

2. I also tried using the WLSMV estimator for the same model, with X on the CATEGORICAL list and X ON Z included.
The Mplus output gives the indirect effect of X on Y. It is the product of the effect of X on M and the effect of M on Y. Is that correct?
Thanks!
 Bengt O. Muthen posted on Monday, November 04, 2019 - 9:29 am
1. No. A special odds ratio effect definition is used - see Chapter 8 of our RMA book or the Muthen (2011) paper on our website.

2. Yes, but those are not counterfactually-defined effects but regular effects using the continuous latent response variables X*, M*, and Y*. Again, I refer to our RMA book which explains the difference.