Fit indices
Mplus Discussion > Multilevel Data/Complex Sample >
 Anonymous posted on Monday, June 25, 2001 - 11:57 am
We are running a multi-level model (Version 2) in which we are testing the stability of a measurement model over two time periods. The model converges, and we have a non-significant chi-square value with 17 degrees of freedom. However, our RMSEA is 0.000, CFI is 1.0, and TLI is 1.021. Can you provide some insight into why we would be getting these results?
 bmuthen posted on Monday, June 25, 2001 - 4:42 pm
Your RMSEA, CFI and TLI values suggest a very good model fit and chi-square does not disagree with that.
 Tom Munk posted on Thursday, November 17, 2005 - 12:19 pm
I am testing a multilevel SEM. MLR provides CFI, TLI, RMSEA, SRMR(b), and SRMR(w). But it also provides a warning against using chi-square difference tests. Can all of these fit indices be used with the same standards as a single-level SEM?

A web search finds class notes from Newsome suggesting:
>.95 for CFI and TLI
<.08 for SRMR
<.06 for RMSEA
 Linda K. Muthen posted on Thursday, November 17, 2005 - 2:18 pm
The studies used to come up with cutoffs for fit measures have not been based on multilevel analysis, so they may not be appropriate for these models.
 Pancho Aguirre posted on Wednesday, November 23, 2005 - 10:44 am
Hello Linda and Bengt,


I'm wondering how we determine a good-fitting model in multilevel analysis. Looking at the output from Mplus User's Guide example 9.9, the tests of model fit are given as:

TESTS OF MODEL FIT
Loglikelihood
H0 Value -6752.350
Information Criteria
Number of Free Parameters 23
Akaike (AIC) 13550.700
Bayesian (BIC) 13663.578
Sample-Size Adjusted BIC 13590.529
(n* = (n + 2) / 24)

What are the cutoffs for these values? From what I understand, the more negative the loglikelihood gets, the better the model fits. But is there a statistical test for this value? Can we transform it to a chi-square distribution? If yes, can we conduct a chi-square difference test between an unconditional model (no predictor at level two) and the target model?

thanks in advance for your help,

Pancho
 bmuthen posted on Wednesday, November 23, 2005 - 6:35 pm
For general multilevel models, no overall fit index has been developed. The usual indices are based on covariance matrix fitting, which is not necessarily relevant when, as with random slope models, the variance varies across subjects. This is why you don't see fit indices in multilevel programs. Instead you should do what most statisticians do, namely consider a sequence of nested models and get LR chi-square tests as 2 times the loglikelihood difference.
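The nested-model comparison described here can be sketched as follows, with a made-up H1 loglikelihood and df difference (only the H0 value is borrowed from an example output earlier in the thread). Note that with the MLR estimator the raw difference additionally needs the scaling-correction adjustment described on the Mplus website.

```python
# LR chi-square from the loglikelihoods of two nested models, as
# described above. The H1 loglikelihood and the df difference are made
# up for illustration.
ll_h0 = -6752.350    # more restricted model (fewer free parameters)
ll_h1 = -6745.100    # hypothetical less restricted model
df_diff = 3          # hypothetical difference in free parameters

lr = 2 * (ll_h1 - ll_h0)   # 2 times the loglikelihood difference
critical_5pct = 7.815      # chi-square 5% critical value for 3 df
print(round(lr, 3), lr > critical_5pct)  # 14.5 True
```

Here the LR statistic exceeds the 5% critical value, so the restricted model would be rejected in favor of the less restricted one.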
 Kätlin Peets posted on Tuesday, April 24, 2007 - 12:38 pm
Just to make sure. I am being asked to report N for the chi-square (model fit index). Am I correct when I assume that in case of multilevel modeling, it is cluster size*number of individuals (number of observations in the output)?

Thank you!
 Linda K. Muthen posted on Tuesday, April 24, 2007 - 2:42 pm
In multilevel modeling, the number of observations reported is the N. N is the number of clusters only if the unit of analysis is the cluster.
 Kätlin Peets posted on Wednesday, April 25, 2007 - 5:34 am
Thank you.
As I am looking at whether individuals differentiate between different conditions, each individual forms a cluster. So, for the chi-square, I should report the number of clusters, and in my case, it is the number of individuals.
Did I understand you correctly?
 Linda K. Muthen posted on Wednesday, April 25, 2007 - 8:21 am
I don't understand where your clustering comes in if you have one individual per cluster.
 Kätlin Peets posted on Wednesday, April 25, 2007 - 9:29 am
We use multilevel modeling so that conditions within individuals form the within level (we are looking at variance between different conditions within individual) and individuals form the between level (examining variance between individuals across the conditions).
 Linda K. Muthen posted on Wednesday, April 25, 2007 - 9:36 am
N is the number of individuals and you have several members (conditions) per cluster.
 Kätlin Peets posted on Wednesday, April 25, 2007 - 10:47 am
Yes, that is the case. So, I will report the number of individuals (clusters) for the chi-square.
Thank you for your time.
 Hsien-Yuan Hsu posted on Wednesday, June 06, 2007 - 11:32 am
Dr. Muthen,

I got the results after running example 9.6 in the Mplus User's Guide. I got the Chi-Square Test of Model Fit (3.864) with 17 degrees of freedom.

Q1. How does Mplus calculate the df?

I got CFI, TLI, AIC, BIC, RMSEA, SRMR.

Q2. Are these fit indices for the overall model?
Q3. Why does Mplus provide SRMR for the between and within models separately? Could I get other fit indices for the between and within models?


Many thanks,
Hsien-Yuan Hsu
 Linda K. Muthen posted on Wednesday, June 06, 2007 - 5:20 pm
1. In this example, the sample statistics consist of 4 means for the y variables, 10 variances and covariances for the y variables on the within level, 8 covariances between the x and y variables, 10 variances and covariances for the y variables on the between level, 4 covariances between the w and y variables. This is a total of 36. There are 19 free parameters so there are 17 degrees of freedom.

2. Yes.

3. This is the only fit statistic that is provided for each part of the model.
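The degrees-of-freedom tally in point 1 can be reproduced as simple arithmetic:

```python
# Reproducing the degrees-of-freedom count given above for UG example 9.6:
# sample statistics minus free parameters.
sample_stats = (
    4      # means of the y variables
    + 10   # within-level variances/covariances among the 4 y's (4*5/2)
    + 8    # within-level covariances between the x's and the y's
    + 10   # between-level variances/covariances among the 4 y's
    + 4    # between-level covariances between w and the y's
)
free_parameters = 19
df = sample_stats - free_parameters
print(sample_stats, df)  # 36 17
```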
 Sanjoy Bhattacharjee posted on Saturday, September 29, 2007 - 10:47 am
Prof Muthen,

Suppose we have to choose between HLM-2 and HLM-3. Which test procedure should we use? Is there any model selection criterion for the HLM setup? We need to cite something similar to the Hausman test, the test we use to select between fixed-effect and random-effect models (within the panel data framework).

Could we do the test in MPlus?

Thanks and regards
Sanjoy
 Bengt O. Muthen posted on Sunday, September 30, 2007 - 10:35 am
Perhaps you can settle the issue of how important the level-3 clustering is by comparing two runs. First use Type = Complex Twolevel, where Complex deals with clustering on level 3 and Twolevel deals with clustering on level 2. Compare the SEs you get there with those of Type = Twolevel, which ignores the level-3 clustering.

Mplus does not do Hausman testing.

The choice between fixed and random effects is another, broader matter.
 Sanjoy Bhattacharjee posted on Sunday, September 30, 2007 - 2:32 pm
Thank you Professor. I can see the point you made.

Regards
Sanjoy
 Joyce Kwan  posted on Thursday, July 03, 2008 - 1:26 am
Dear Professors,

I would like to ask if the interpretation of fit indices such as CFI, TLI, and RMSEA for a multilevel model is the same as for a single-level model. I read above that it may not be appropriate to use the cutoffs developed for single-level models on multilevel models. So are there other rules of thumb for these fit indices with multilevel models? How do we use fit indices such as CFI, TLI, and RMSEA to evaluate model fit?

Besides, I have fit a single-level model and a multilevel model to the same data set. The resulting TLI and RMSEA showed a great drop in model fit, but the CFI remained more or less the same. Why would that be?

Thanks
 Linda K. Muthen posted on Thursday, July 03, 2008 - 4:17 pm
I do not know of a study where cutoffs have been studied for multilevel models. I would use those for single level models.

I can't explain your findings in comparing a single level and multilevel model.
 Elif Çoker posted on Wednesday, May 27, 2009 - 8:36 am
Hi,

My first question is: which formula is used to calculate the loglikelihood and the associated covariance matrices for multilevel path models in Mplus? Can you please give an exact reference?

And lastly, is there a new option to save the matrices in a regular, fully dimensioned matrix format rather than the mixed format that is saved out of order?

Thanks so much already,

Elif
 Linda K. Muthen posted on Thursday, May 28, 2009 - 10:34 am
See the following paper for random intercepts:

Muthén, B. (1990). Mean and covariance structure analysis of hierarchical data. Paper presented at the Psychometric Society meeting in Princeton, NJ, June 1990. UCLA Statistics Series 62.

You can download it from the following link where it is paper #32:

http://www.gseis.ucla.edu/faculty/muthen/full_paper_list.htm

See the following paper which is on our website for random slopes:

Muthén, B. & Asparouhov, T. (2008). Growth mixture modeling: Analysis with non-Gaussian random effects. In Fitzmaurice, G., Davidian, M., Verbeke, G. & Molenberghs, G. (eds.), Longitudinal Data Analysis, pp. 143-165. Boca Raton: Chapman & Hall/CRC Press.
 Sally Czaja posted on Thursday, December 03, 2009 - 11:43 am
Hello. I'm trying to find out whether a whole group model or one with two groups better fits my data.
Nested model syntax:
USEVARIABLES ARE female raceWb ageint1 acrimyn poverty neighpov;
CLASSES= c(2);
KNOWNCLASS = c(grp=0 grp=1);
WITHIN = female raceWb ageint1 poverty;
CLUSTER = census;
BETWEEN = neighpov;
CATEGORICAL = acrimyn;
ANALYSIS: TYPE= TWOLEVEL mixture;
Model: %WITHIN%
%OVERALL%
acrimyn ON female raceWb ageint1 poverty;
%BETWEEN%
%OVERALL%
acrimyn on neighpov;

In the comparison model, everything is the same as above except for the following model specification.
Model: %WITHIN%
%OVERALL%
acrimyn ON female raceWb ageint1 poverty;
%c#1%
acrimyn ON female raceWb ageint1 poverty;
%c#2%
acrimyn ON female raceWb ageint1 poverty;
%BETWEEN%
%OVERALL%
acrimyn on neighpov;
%c#1%
acrimyn on neighpov;
%c#2%
acrimyn on neighpov;

1) Is my modeling approach correct? 2) I'm using loglikelihood difference testing to compare the fit of the models. Is this correct? Are there any other ways of comparing model fit? 3) If the loglikelihood difference test is not significant, does that indicate that the nested model explains the data better than the comparison model? Thank you.
 Linda K. Muthen posted on Friday, December 04, 2009 - 9:24 am
This sounds correct. If the constrained model does not worsen model fit, then the parameters are equal across groups.
 Murphy T. posted on Wednesday, October 19, 2011 - 12:15 am
Dear professors,

I estimated a two-level model and get the following fit indices for my model:
RMSEA: 0.058
CFI: 0.967
TLI: 0.845
SRMR (within): 0.010
SRMR (between): 0.194

The RMSEA and CFI seem to look quite good (by conventional cutoff values), but the TLI and SRMR (between) seem to indicate poorer fit. What could be the reason for these discrepancies? Are you aware of cutoff values for these fit indices for multilevel models? Thank you very much!
 Linda K. Muthen posted on Wednesday, October 19, 2011 - 3:02 pm
Lack of model fit can be caused by many problems. I don't know of any cutoffs specific to multilevel models.
 Eva posted on Wednesday, September 26, 2012 - 5:51 am
Would anyone happen to know if by now some guidelines have been supported in evaluating cut-off values for fit indices in multilevel SEM?
 Linda K. Muthen posted on Wednesday, September 26, 2012 - 1:48 pm
You should post this on SEMNET or Multilevel net. They should know this.
 Karen Kegel posted on Monday, May 06, 2013 - 9:40 am
Dear Drs. Muthen,

I am testing SEM model fit for 4 sequential multiple-mediation models. However, the fit index results I get with Mplus are all the same, which is highly unexpected. One example of a model is:

UDO ON HS;
SC ON HS UDO;
PD ON HS UDO SC;

Another is:

SC ON UDO;
HS ON UDO SC;
PD ON UDO SC HS;

These are very different models, yet I get the same fit index results for both. Is there something I'm missing in my syntax that should be used to indicate the sequence of mediations each model proposes? Thanks!
 Linda K. Muthen posted on Monday, May 06, 2013 - 11:00 am
Please send the two outputs and your license number to support@statmodel.com.
 Aleksandra Holod posted on Thursday, November 14, 2013 - 3:51 pm
I am testing a path model and receiving fit indices that appear unrealistically high (RMSEA=0, TLI/CFI=1) in model output that includes an error message saying,
"THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE FIRST-ORDER DERIVATIVE PRODUCT MATRIX. THIS MAY BE DUE TO THE STARTING VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. THE CONDITION NUMBER IS 0.213D-16. PROBLEM INVOLVING PARAMETER 226."

I am using MLR estimation with survey weights and clustered standard errors to account for nested sampling design (children within schools).

I believe the error message is due to the fact that I have dichotomous covariates, which I allow to covary for the purposes of the FIML approach to missing data. My sample size is over 16,000 and I have no latent variables, so I believe this is not a model identification problem.

When I remove parameter 226, I get the same error message for another covariance between two dichotomous covariates. I have also experimented with setting numerous other covariate paths to zero, but the fit indices and error message remain the same (except for the parameter number).

So, should I assume that I have a model with excellent fit and ignore the error message? Or is there some other alternative?

Thank you,
Aleksandra
 Linda K. Muthen posted on Thursday, November 14, 2013 - 6:16 pm
Remove the WITH statements involving the dichotomous covariates. If the message disappears, you can put the statements back and ignore the message. It is triggered because the mean and variance of a dichotomous variable are not orthogonal.
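The non-orthogonality mentioned here is easy to see numerically: for a 0/1 variable the variance is a deterministic function of the mean, p(1 - p). A minimal check in Python, with a made-up sample:

```python
# The mean and variance of a dichotomous (0/1) variable are not
# orthogonal: the variance is p * (1 - p), a function of the mean p.
# The small sample below is made up.
xs = [1] * 3 + [0] * 7        # ten 0/1 observations with mean 0.3
n = len(xs)
mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n   # population variance
# The variance is fully determined by the mean:
assert abs(var - mean * (1 - mean)) < 1e-12
print(round(mean, 3), round(var, 3))  # 0.3 0.21
```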
 Aleksandra Holod posted on Friday, November 15, 2013 - 9:28 am
Hi Linda,

Yes, when I remove the WITH statement for the covariates, the error message goes away. Thank you for that suggestion!

However, when I remove that WITH statement, I still have perfect fit statistics (RMSEA=0, CFI/TLI=1). That seems implausible to me. Is it really possible for an empirical model to have perfect fit?

Could this be caused by shared method variance? The data for the independent and mediator variables were gathered via survey from a single respondent, i.e., the mother of each child. The dependent variables are direct assessments of children's literacy and numeracy skills.

Thank you for any insight you can provide.

Best,
Aleksandra
 Linda K. Muthen posted on Friday, November 15, 2013 - 11:50 am
Your model must have zero degrees of freedom to get those values.
 Aleksandra Holod posted on Friday, November 15, 2013 - 12:04 pm
Actually, the model has 2 degrees of freedom.
 Aleksandra Holod posted on Friday, November 15, 2013 - 12:15 pm
So, to clarify, my question is: How is it possible for a model with 2 degrees of freedom to have perfect fit statistics? Thank you!
 Linda K. Muthen posted on Friday, November 15, 2013 - 1:19 pm
Please send the output and your license number to support@statmodel.com.
 Yao Wen posted on Thursday, February 13, 2014 - 11:26 am
Hi Linda,

I ran a cross-classified model using Bayesian estimator. I found no model fit indices were reported in the output. I attached part of my syntax below.

ANALYSIS: TYPE = crossclassified random; ESTIMATOR = BAYES;
PROCESSORS = 2; CHAINS=2; BITERATIONS = (20000);
MODEL:
%WITHIN%
s1-s4 | lit_w by y1-y4 ;
lit_w on hisp;
y1-y4;
lit_w;
[lit_w@0];

%BETWEEN cl2%
s1-s4 ;

%BETWEEN tid%
s1-s4 ;

OUTPUT: TECH1 TECH8 TECH4 TECH10 STANDARDIZED SVALUES;

I received warning messages below.

*** WARNING in OUTPUT command
STANDARDIZED (STD, STDY, STDYX) options are not available for TYPE=RANDOM.
Request for STANDARDIZED (STD, STDY, STDYX) is ignored.
*** WARNING in OUTPUT command
TECH4 option is not available for TYPE=RANDOM.
Request for TECH4 is ignored.
*** WARNING in OUTPUT command
TECH10 option is only available with categorical or count outcomes.
Request for TECH10 is ignored.


Is there a way to obtain model fit indices in this case?

Thank you for your time!
 Bengt O. Muthen posted on Friday, February 14, 2014 - 11:19 am
That has not been developed yet.
 Ellen posted on Saturday, June 28, 2014 - 12:49 am
I was running a multilevel path analysis with a binary mediator, using the MLR estimator.
I also used Type = complex twolevel random. I have some questions about the model.

1. I was not getting the regular fit indices (chi-square, CFI, TLI, RMSEA); only AIC and BIC were reported.
I wonder if I can get chi-square and other fit indices for the fitted model.

2. I'd like to compute marginal effects of indirect effect.
The model is as follows.
Y on M X
M on X

M is binary, Y is a continuous variable.
Generally, when computing the marginal effect of a binary variable, we multiply the unstandardized coefficient by (1 - mean of the latent variable).
For the marginal effects of the indirect effect, do we have to use the general method or some other way?
 Bengt O. Muthen posted on Saturday, June 28, 2014 - 6:29 pm
1. These are not available with Type=Random because a random slope implies that the DV variance changes over observations, so there isn't a single covariance matrix to test.

2. This is a big and complex topic that is complicated by the binary mediator and the two-level model with Type=Random. My mediation papers on our website deal with the first issue, and our Topic 7 handout and video deal with the second issue.

I am not aware of the multiplication approach you mention.
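Point 1 can be illustrated with a small sketch: under a random slope model the model-implied variance of y given x changes with x, so there is no single covariance matrix to fit. The variance components below are made up:

```python
# With a random slope, y_ij = (b0 + u0j) + (b1 + u1j) * x_ij + e_ij, the
# model-implied variance of y given x is
#   V(y | x) = v00 + 2 * x * v01 + x**2 * v11 + v_e,
# which depends on x. All variance components below are made up.
v00 = 0.50   # random intercept variance
v11 = 0.20   # random slope variance
v01 = 0.05   # intercept-slope covariance
v_e = 1.00   # residual variance

def var_y(x):
    return v00 + 2 * x * v01 + x ** 2 * v11 + v_e

# The DV variance differs across observations with different x:
print(round(var_y(0.0), 3), round(var_y(2.0), 3))  # 1.5 2.5
```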
 Ansylla Payne posted on Tuesday, March 17, 2015 - 12:58 am
Hi Profs . Muthen

My fit indices for one of my models are as follows (multilevel, with the moderating variable at level 2 only; the interacting variable at level 1):

RMSEA (Root Mean Square Error Of Approximation)

RMSEA 0.125
CFI 0.825
TLI 0.703

Chi-Square Test of Model Fit for the Baseline Model

Value 1036.788
Degrees of Freedom 78
P-Value 0.0000

SRMR (Standardized Root Mean Square Residual)

Value for Within 0.196
Value for Between 0.000

1. Is there any empirical reference you can provide with respect to assessing fit of a multilevel model?
2. Is there anything I can do to improve this fit?
 Bengt O. Muthen posted on Tuesday, March 17, 2015 - 7:58 am
You may want to ask this general analysis question on SEMNET. You need to show the full input for the model. Also include the chi-square fit for the model.
 Ansylla Payne posted on Tuesday, March 17, 2015 - 5:53 pm
Ok Dr. Muthen,

Certainly, thank you.

Regards
 Qiao Hu posted on Wednesday, November 25, 2015 - 6:51 am
Are there any cutoffs for the PPP value of model fit in BSEM?
 Bengt O. Muthen posted on Wednesday, November 25, 2015 - 10:10 am
Not really, but see the papers on our website:

Asparouhov, T. & Muthén, B. (2010). Bayesian analysis of latent variable models using Mplus. Technical Report. Version 4.

Asparouhov, T. & Muthén, B. (2010). Bayesian analysis using Mplus: Technical implementation. Technical Report. Version 3.
 May Lee posted on Tuesday, November 15, 2016 - 10:55 am
Hi Professors,

I was running a level-1 model with nested data using TYPE=TWOLEVEL analysis (level 2 has only 21 clusters). The MODEL FIT INFORMATION is below:


Number of Free Parameters 22

Loglikelihood

H0 Value -269.424
H0 Scaling Correction Factor for MLR 1.5769
H1 Value -269.432
H1 Scaling Correction Factor for MLR 1.5769

Information Criteria

Akaike (AIC) 582.849
Bayesian (BIC) 661.184
Sample-Size Adjusted BIC 591.435
(n* = (n + 2) / 24)

Chi-Square Test of Model Fit

Value 0.000*
Degrees of Freedom 0
P-Value 1.0000
Scaling Correction Factor for MLR 1.0000

RMSEA (Root Mean Square Error Of Approximation)

Estimate 0.000

CFI/TLI

CFI 1.000
TLI 1.000

Chi-Square Test of Model Fit for the Baseline Model

Value 30.388
Degrees of Freedom 19
P-Value 0.0471

SRMR (Standardized Root Mean Square Residual)

Value for Within 0.004
Value for Between 0.000


My question is about the Chi-Square Test of Model Fit: value = 0.000, p-value = 1.0000. Does this mean the model is bad? How should I deal with it?

Thanks!
 Bengt O. Muthen posted on Tuesday, November 15, 2016 - 5:34 pm
When you have zero degrees of freedom the model is saturated/just-identified and a test of model fit is not available.
 May Lee posted on Tuesday, November 15, 2016 - 10:13 pm
Thank you Bengt.
 Rachel Perkins posted on Wednesday, January 18, 2017 - 12:43 am
Hello, I am testing a SEM with at least one categorical dependent variable. I have used the WLSMV estimator and my results are as follows: chi-square(376, N = 865) = 987.996, p < .01, CFI = 0.828, RMSEA = .043.
The CFI value indicates that my model does not fit the data well, but the RMSEA seems to indicate that it does. My model is complex (one latent variable and 23 observed variables), and I am wondering if the CFI is not the best indicator of model fit to use in this context. Also, my data are non-normal, and I am wondering if this could affect the fit statistics.
 Bengt O. Muthen posted on Wednesday, January 18, 2017 - 11:08 am
Look at modification indices to see if the model can be improved.
 Sophie Dan posted on Wednesday, April 26, 2017 - 7:24 am
Dear Dr. Muthen,

If I do a between-level EFA alone, the model fit cannot be accepted, but when I do a two-level EFA together with the within level, the model fit is acceptable. Can I just use the two-level EFA directly? Could the poor model fit when doing the between-level EFA separately be due to a limited number of clusters (for example, 13 variables but only 45 clusters)? And if the number of clusters is limited, is even the two-level (within + between) result untrustworthy?

Thanks!
 Bengt O. Muthen posted on Wednesday, April 26, 2017 - 2:06 pm
See my general answer.
 Min Zhang posted on Thursday, May 25, 2017 - 12:55 pm
Dear Dr Muthen,

I am running a path analysis model with ordinal categorical variables. This is also a multiple group analysis. I am worried about my model fit.

RMSEA (Root Mean Square Error Of Approximation)

Estimate 0.048
90 Percent C.I. 0.046 0.051
Probability RMSEA <= .05 0.828

CFI/TLI

CFI 0.898
TLI 0.849


1. I wonder why my CFI is so low. I understand that CFI is a ratio between the null model and the proposed model and that a low CFI may indicate high correlations between variables.

I used the option modindices but none of them fits my theory. Could you please suggest how I can improve this model?

2. I did not use latent factor modelling. This is merely a Path Analysis with ordinal categorical variables. Should I even be worried about the model fit? I think a difftest may be more reasonable to indicate explanatory power of specific variables.


Many thanks for your time.
Regards,
Min
 Bengt O. Muthen posted on Thursday, May 25, 2017 - 6:36 pm
This is a good set of questions for SEMNET.
 Kate Barford posted on Sunday, May 28, 2017 - 8:12 pm
Hi,

When I run a two-level model where I'm just predicting a random intercept, I get the standard fit indices (RMSEA, CFI, TLI, SRMR, etc.). But when I run a two-level model with a cross-level interaction, predicting a random intercept and a random slope, I don't get these fit indices. Reviewers want us to report fit indices; is there anything I can report for these analyses?

(Note: the predictors in all analyses are modelled as latent variables, as X BY X1 X2 X3 X4, so the models are two-level SEMs.)

Thanks.
 Linda K. Muthen posted on Monday, May 29, 2017 - 6:47 am
Chi-square and related fit statistics are available only when means, variances, and covariances are sufficient statistics for model estimation. This is not the case with TYPE=RANDOM.
 Nik Schulte posted on Tuesday, November 07, 2017 - 1:44 am
Dear Ms Muthen

What is the correct interpretation of the SRMR (between) and the SRMR (within) in the output of multilevel SEMs?

Many thanks in advance!
 Tihomir Asparouhov posted on Tuesday, November 07, 2017 - 5:20 pm
We use formula (128) in
http://statmodel.com/download/techappen.pdf
applied to the within level and the between level separately, as if they were two separate groups. For an illustration, run User's Guide example 9.6 with the additional output option OUTPUT: RESIDUAL. You will find in that output that residuals for the covariance parameters are produced separately for the within and the between level, and those (on the correlation scale, however) are the basis for the two SRMRs. The model-estimated within and between variance-covariance matrices are compared to those of the unconstrained two-level model. The two SRMRs allow you to evaluate model fit separately for the two levels.
 MS, Kim posted on Wednesday, December 13, 2017 - 8:20 am
Hello. I have a question about the number of parameters of the unrestricted model in MSEM.

For example, there are 2 observed variables (y1g, y2g) on the within level and 1 observed variable (z) on the between level.

usevar=y1g y2g z;
between=z;

model:

%between%
y2g on y1g z;
y1g on z;

%within%
y2g on y1g;


In this case,
the Number of Free Parameters is 22,
and the Degrees of Freedom (under the Chi-Square Test of Model Fit) is 0.
I can't see how the amount of observed information comes to 22.

How can I calculate the observed information (one factor in the degrees of freedom related to model identification) in MSEM?
 Bengt O. Muthen posted on Wednesday, December 13, 2017 - 2:02 pm
We need to see the full output - send to Support along with your license number.

Also, clarify your last question - I don't know what "observed information" refers to.
 Vivian Vignoles posted on Tuesday, April 17, 2018 - 2:38 am
Hello,

I am re-running in Mplus7 some multilevel models that I previously ran in Mplus6.

For some reason, values of SRMR(between) for these models are coming out higher in Mplus7 than in Mplus6.

All other fit indices are identical.

The only change I have made to the input is to grand-mean center two within-level predictors using DEFINE (Mplus 7) instead of the old CENTERING command (Mplus 6). But I have not changed which centering is used.

Can you explain why the SRMR(between) would be different across Mplus versions, and how their interpretation differs?

Many thanks in advance!
 Tihomir Asparouhov posted on Tuesday, April 17, 2018 - 9:38 pm
Please send your example to support@statmodel.com.
 Emily S Goering posted on Monday, December 03, 2018 - 12:54 pm
Hi,

I am running a twolevel complex model with a binary DV and integration type MONTECARLO.

I only get AIC, BIC, and loglikelihood fit information. Why am I not getting the other model fit indices?

Thanks, Emily
 Bengt O. Muthen posted on Monday, December 03, 2018 - 4:09 pm
Because you are not fitting a model to only the means, variances, and covariances of the variables. Raw data are used, which means no conventional SEM overall fit index is available. You could try WLSMV, which uses only such moments.
 Joanna Davies posted on Wednesday, December 11, 2019 - 5:05 am
In a CFA with 5 indicators (1 binary, 4 with 3 ordinal categories), n = 910, I get the following fit:
x2: 9.094
df: 5
p = 0.1054
RMSEA: 0.030
CFI: 1.000
TLI: 0.999
TECH4: correlations all between 0.441 and 0.950

I'm suspicious of the very good model fit. Do you think there is a problem with my model, or could it just be very good?

Thank you
 Bengt O. Muthen posted on Wednesday, December 11, 2019 - 3:44 pm
Your sample size is large but perhaps your sample correlations are small. Tech4 doesn't show the sample correlations but the model-estimated correlations.
 Joanna Davies posted on Thursday, December 12, 2019 - 1:47 am
ok, thank you.
The correlation matrix from SAMPSTAT gives correlations between .423 and .880.
The 4 ordinal vars have the highest correlations, .773-.880. The remaining binary var has lower correlations, .423-.460.
Could it be the binary var that is causing a problem?
Or could it just be good model fit?
 Bengt O. Muthen posted on Thursday, December 12, 2019 - 5:51 am
Those sample correlations seem large enough from a power perspective in order to have a chance to reject the model. So it seems that the model truly does fit well. You can also check the bivariate fit statistics under TECH10.
 AMN posted on Wednesday, February 26, 2020 - 10:01 am
Hello,

I am running a multi-group linear growth curve model and I see that my CFI and TLI values match. I have not seen this before and was curious if this was indicative of a problem or if it happens. Below is my model fit output:

chi-square=9.724
df=6
p-value=0.1368
RMSEA=0.102
RMSEA 90% CI=[0, .214]
CFI=0.993
TLI=0.993
SRMR=0.037

Thanks!
 Bengt O. Muthen posted on Wednesday, February 26, 2020 - 4:37 pm
That's ok.
 Michael Halinski posted on Tuesday, July 14, 2020 - 6:46 am
Hello,

I'm receiving a parsing error in the following code (I've only included a segment of the code). What is a parsing error and how do I correct it? Thanks :-)

ERROR Error in parsing line: "Fr_SLS=(Fr1+Fr2+Fr3+Fr4)**2"


MODEL:
Fr BY Fr1* Fr2 Fr3 Fr4 (Fr1 – Fr4);
Coop BY Coop1* Coop2 Coop3 (Coop1 – Coop3);
Comp BY Comp1* Comp2 Comp3 (Comp1 – Comp3);
KnowledgeHide BY KH1* KH3 KH4 (KH1 KH3 KH4);
KnowledgeManp BY KMU3* KMO4 KMO7(KMU3 KMO4 KM07);
SDiff BY SDif1* SDif2 SDif3(StatFif1-SDif3);
Fr-SDiff@1;
Fr1-Fr4(Fr5-Fr8);
Coop1-Coop3(Coop4-Coop6);
Comp1-Comp3(Comp4-Comp6);
KH1 KH3 KH4 (KH5-KH7);
KMU3 KMO4 KM07 (KM8-KM10);
SDif1 SDif2 SDif3 (SDif4-SDif6);

MODELCONTRAINT:
NEW(Fr_REL, Fr_SLS, Fr_SEV);
Fr_SLS=(Fr1+Fr2+Fr3+Fr4)**2;
Fr_SEV=Fr5+Fr6+Fr7+Fr8;
Fr_REL=Fr_SLS/(Fr_SLS+Fr_SEV);
New(Fr_AVE, Fr_SSL);
Fr_SSL= Fr1**2+Fr2**2+Fr3**2+Fr4**2;
Fr_AVE=Fr_SSL/(Fr_SSL+Fr_SEV);
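As context for the MODEL CONSTRAINT section above: Fr_REL has the form of a composite (omega-type) reliability, (sum of loadings)^2 over that plus the sum of residual variances. A plain-arithmetic sketch with made-up loadings and residual variances:

```python
# Fr_REL above has the form of a composite (omega-type) reliability:
#   (sum of loadings)^2 / ((sum of loadings)^2 + sum of residual variances).
# The loadings and residual variances below are made up.
loadings = [0.70, 0.80, 0.60, 0.75]   # hypothetical Fr1-Fr4 loadings
residuals = [0.51, 0.36, 0.64, 0.44]  # hypothetical residual variances

sls = sum(loadings) ** 2   # corresponds to Fr_SLS in the input above
sev = sum(residuals)       # corresponds to Fr_SEV
omega = sls / (sls + sev)  # corresponds to Fr_REL
print(round(omega, 3))  # 0.806
```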
 Bengt O. Muthen posted on Tuesday, July 14, 2020 - 5:40 pm
We need to see your full output to diagnose this - send to Support along with your license number.
 Tunde Ogunfowora posted on Friday, October 30, 2020 - 5:46 pm
Hi,
I wish to show that my hypothesized model (X--> M --> Y) is a better fit to the data compared to the reverse model (Y --> M --> X). The reviewer wants us to test whether DIC (Bayesian multilevel) is better in the hypothesized model versus the reverse model. For reasons we cannot decipher, the number of parameters estimated in the hypothesized model is smaller (12) than in the reverse model (14). Mplus seems to estimate the means and variance of X in the reversed model but not in the hypothesized model. The results show that DIC is lower in the hypothesized model. However, if we estimate the mean/variance in the hypothesized model (ensuring the number of parameters are equal), this difference goes away.
Would you kindly help us understand why there are more parameters estimated in one model versus the other? The syntaxes are below:

Hypothesized Model:

%WITHIN%
AS_T2;

%BETWEEN%
Sp_Rmat on GEV_T1 (a);
AS_T2 on Sp_Rmat (b);
AS_T2 on GEV_T1 GPerf Gvoice_T1 ; ! GPerf Gvoice_T1 are controls
Sp_Rmat on GPerf Gvoice_T1 ;


Reversed model:


%WITHIN%
AS_T2;


%BETWEEN%

Sp_Rmat on AS_T2 (a);
GEV_T1 on Sp_Rmat (b);
GEV_T1 on AS_T2 GPerf Gvoice_T1;
Sp_Rmat on GPerf Gvoice_T1 ;


Thank you.
 Bengt O. Muthen posted on Saturday, October 31, 2020 - 1:52 pm
We needs to see the full output for both runs - send to Support along with your license number.