Message/Author 

Anonymous posted on Monday, June 25, 2001  11:57 am



We are running a multilevel model (Version 2) in which we are testing the stability of a measurement model over two time periods. The model converges, and we have a nonsignificant chi-square value with 17 degrees of freedom. However, our RMSEA is 0.000, CFI is 1.0, and TLI is 1.021. Can you provide some insight into why we would be getting these results? 

bmuthen posted on Monday, June 25, 2001  4:42 pm



Your RMSEA, CFI, and TLI values suggest a very good model fit, and the chi-square does not disagree with that. 
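A side note on results like TLI = 1.021: the TLI is not normed, and values above 1 occur whenever the fitted model's chi-square falls below its degrees of freedom (fitting better than expected by chance). A minimal sketch with made-up chi-square values, not the ones from the post above:

```python
def tli(chi2_b, df_b, chi2_m, df_m):
    """Tucker-Lewis index from baseline (b) and fitted-model (m)
    chi-square values and degrees of freedom."""
    ratio_b = chi2_b / df_b   # baseline chi-square/df ratio
    ratio_m = chi2_m / df_m   # fitted-model chi-square/df ratio
    return (ratio_b - ratio_m) / (ratio_b - 1.0)

# A well-fitting model: chi-square (12.0) below its df (17) pushes TLI past 1
print(round(tli(500.0, 20, 12.0, 17), 3))  # 1.012
```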

Tom Munk posted on Thursday, November 17, 2005  12:19 pm



I am testing a multilevel SEM. MLR provides CFI, TLI, RMSEA, SRMR (between), and SRMR (within). But it also provides a warning against using chi-square difference tests. Can all of these fit indices be used with the same standards as a single-level SEM? A web search finds class notes from Newsome suggesting: >.95 for CFI and TLI, <.08 for SRMR, <.06 for RMSEA. 


The studies used to come up with cutoffs for fit measures have not been based on multilevel analysis, so they may not be appropriate for these models. 


Hello Linda and Bengt, I'm wondering how we determine a good-fitting model in multilevel analysis, looking at the output from Mplus User's Guide example 9.9. Tests of model fit are given as:

TESTS OF MODEL FIT
Loglikelihood
  H0 Value                      -6752.350
Information Criteria
  Number of Free Parameters            23
  Akaike (AIC)                  13550.700
  Bayesian (BIC)                13663.578
  Sample-Size Adjusted BIC      13590.529
    (n* = (n + 2) / 24)

What are the cutoffs for these values? From what I understand, the more negative the loglikelihood gets, the better the model fits. But is there a statistical test for this value? Can we transform it to a chi-square distribution? If yes, can we conduct a chi-square difference test between an unconditional model (no predictor at level two) and the target model? Thanks in advance for your help, Pancho 

bmuthen posted on Wednesday, November 23, 2005  6:35 pm



For general multilevel models, no overall fit index has been developed. The usual indices are based on covariance matrix fitting, and this is not necessarily relevant when, as with random slope models, the variance varies across subjects. This is why you don't see fit indices in multilevel programs. Instead you should do what most statisticians do, namely consider a sequence of nested models and get LR chi-square tests by 2 times the loglikelihood difference. 
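The nested-model comparison described above can be sketched numerically: the LR chi-square is twice the loglikelihood difference, with df equal to the difference in the number of free parameters. The loglikelihood values below are made up for illustration; note that with the MLR estimator the difference must additionally be adjusted using the reported scaling correction factors.

```python
import math

def lr_chisquare(ll_general, k_general, ll_restricted, k_restricted):
    """Likelihood-ratio chi-square comparing two nested ML models."""
    stat = 2.0 * (ll_general - ll_restricted)  # 2 x loglikelihood difference
    df = k_general - k_restricted              # difference in free parameters
    return stat, df

# Hypothetical loglikelihoods: a restricted model (23 parameters) nested
# in a more general one (25 parameters)
stat, df = lr_chisquare(-6740.10, 25, -6752.35, 23)
print(round(stat, 2), df)             # 24.5 2
# For df = 2 the chi-square p-value has the closed form exp(-stat/2)
print(math.exp(-stat / 2.0) < 0.001)  # True: the restriction is rejected
```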


Just to make sure: I am being asked to report N for the chi-square (model fit index). Am I correct when I assume that in the case of multilevel modeling, it is cluster size * number of individuals (the number of observations in the output)? Thank you! 


In multilevel modeling, the number of observations reported is the N. N is the number of clusters only if the unit of analysis is the cluster. 


Thank you. As I am looking at whether individuals differentiate between different conditions, each individual forms a cluster. So, for the chi-square, I should report the number of clusters, and in my case, it is the number of individuals. Did I understand you correctly? 


I don't understand where your clustering comes in if you have one individual per cluster. 


We use multilevel modeling so that conditions within individuals form the within level (we are looking at variance between different conditions within individual) and individuals form the between level (examining variance between individuals across the conditions). 


N is the number of individuals and you have several members (conditions) per cluster. 


Yes, that is the case. So, I will report the number of individuals (clusters) for the chi-square. Thank you for your time. 


Dr. Muthen, I got the results after running example 9.6 in the Mplus User's Guide. I got the Chi-Square Test of Model Fit (3.864) with 17 df. Q1. How does Mplus calculate the df? I also got CFI, TLI, AIC, BIC, RMSEA, and SRMR. Q2. Are these fit indices for the overall model? Q3. Why does Mplus provide SRMR for the between and within models, respectively? Could I get other fit indices for the between and within models? Many thanks, Hsien-Yuan Hsu 


1. In this example, the sample statistics consist of 4 means for the y variables, 10 variances and covariances for the y variables on the within level, 8 covariances between the x and y variables, 10 variances and covariances for the y variables on the between level, and 4 covariances between the w and y variables. This is a total of 36. There are 19 free parameters, so there are 17 degrees of freedom.
2. Yes.
3. This is the only fit statistic that is provided for each part of the model. 
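The bookkeeping in point 1 is easy to reproduce: degrees of freedom are the number of sample statistics minus the number of free parameters. A sketch using the counts listed above:

```python
# Sample statistics for UG example 9.6, as enumerated in the reply above
sample_stats = {
    "y means": 4,
    "within-level y (co)variances": 10,
    "x-with-y covariances": 8,
    "between-level y (co)variances": 10,
    "w-with-y covariances": 4,
}
total_stats = sum(sample_stats.values())
free_parameters = 19
degrees_of_freedom = total_stats - free_parameters
print(total_stats, degrees_of_freedom)  # 36 17
```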


Prof. Muthen, suppose we have to choose between HLM2 and HLM3. Which test procedure should we use? Is there any model selection criterion for the HLM setup? We need to cite something similar to the Hausman test, the test we use to select between fixed-effect and random-effect models (within the panel data framework). Could we do the test in Mplus? Thanks and regards, Sanjoy 


Perhaps you can settle the issue of how important level-3 clustering is by comparing two runs. First use Type = Complex Twolevel, where Complex deals with clustering on level 3 and Twolevel deals with clustering on level 2. Compare the SEs you get there with those of Type = Twolevel, which ignores the level-3 clustering. Mplus does not do Hausman testing. The choice between fixed and random effects is another, broader matter. 


Thank you Professor. I can see the point you made. Regards Sanjoy 

Joyce Kwan posted on Thursday, July 03, 2008  1:26 am



Dear Professors, I would like to ask if the interpretation of fit indices such as CFI, TLI, and RMSEA for a multilevel model is the same as that for a single-level model. I read above that it may not be appropriate to use the cutoffs for fit measures developed for single-level models on multilevel models. So are there other rules of thumb for these fit indices in multilevel models? How do we use fit indices such as CFI, TLI, and RMSEA to evaluate model fit? Besides, I have fit a single-level model and a multilevel model to the same data set. The resulting TLI and RMSEA showed a great drop in model fit, but the CFI remained more or less the same. Why would this be? Thanks 


I do not know of a study where cutoffs have been studied for multilevel models. I would use those for single-level models. I can't explain your findings in comparing a single-level and a multilevel model. 

Elif Çoker posted on Wednesday, May 27, 2009  8:36 am



Hi, my first question is: which formula is used to calculate the loglikelihood and the corresponding covariance matrices for multilevel path models in Mplus? Can you please give an exact reference? And lastly, is there now an option to save the matrices in a regular, fully dimensioned matrix format rather than the mixed, unordered format? Thanks so much, Elif 


See the following paper for random intercepts:

Muthén, B. (1990). Mean and covariance structure analysis of hierarchical data. Paper presented at the Psychometric Society meeting in Princeton, NJ, June 1990. UCLA Statistics Series 62. You can download it from the following link, where it is paper #32: http://www.gseis.ucla.edu/faculty/muthen/full_paper_list.htm

See the following paper, which is on our website, for random slopes:

Muthén, B. & Asparouhov, T. (2008). Growth mixture modeling: Analysis with non-Gaussian random effects. In Fitzmaurice, G., Davidian, M., Verbeke, G. & Molenberghs, G. (eds.), Longitudinal Data Analysis, pp. 143-165. Boca Raton: Chapman & Hall/CRC Press. 

Sally Czaja posted on Thursday, December 03, 2009  11:43 am



Hello. I'm trying to find out whether a whole-group model or one with two groups better fits my data. Nested model syntax:

USEVARIABLES ARE female raceWb ageint1 acrimyn poverty neighpov;
CLASSES = c(2);
KNOWNCLASS = c(grp=0 grp=1);
WITHIN = female raceWb ageint1 poverty;
CLUSTER = census;
BETWEEN = neighpov;
CATEGORICAL = acrimyn;
ANALYSIS: TYPE = TWOLEVEL mixture;
Model:
%WITHIN%
%OVERALL%
acrimyn ON female raceWb ageint1 poverty;
%BETWEEN%
%OVERALL%
acrimyn on neighpov;

In the comparison model, everything is the same as above except for the following model specification:

Model:
%WITHIN%
%OVERALL%
acrimyn ON female raceWb ageint1 poverty;
%c#1%
acrimyn ON female raceWb ageint1 poverty;
%c#2%
acrimyn ON female raceWb ageint1 poverty;
%BETWEEN%
%OVERALL%
acrimyn on neighpov;
%c#1%
acrimyn on neighpov;
%c#2%
acrimyn on neighpov;

1) Is my modeling approach correct? 2) I'm using loglikelihood difference testing to compare the fit of the models. Is this correct? Are there any other ways of comparing model fit? 3) If the loglikelihood difference test is not significant, does that indicate that the nested model better explains the data than the comparison model? Thank you. 


This sounds correct. If the constrained model does not worsen model fit, then the parameters are equal across groups. 

Murphy T. posted on Wednesday, October 19, 2011  12:15 am



Dear professors, I estimated a two-level model and got the following fit indices for my model: RMSEA 0.058, CFI 0.967, TLI 0.845, SRMR (within) 0.010, SRMR (between) 0.194. The RMSEA and CFI seem to look quite good (by conventional cutoff values), but the TLI and SRMR (between) seem to indicate poorer fit. What could be the reason for these discrepancies? Are you aware of cutoff values for these fit indices for multilevel models? Thank you very much! 


Lack of model fit can be caused by many problems. I don't know of any cutoffs specific to multilevel models. 

Eva posted on Wednesday, September 26, 2012  5:51 am



Would anyone happen to know whether, by now, any guidelines have been established for cutoff values for fit indices in multilevel SEM? 


You should post this on SEMNET or the multilevel modeling list. They should know this. 


Dear Drs. Muthen, I am testing SEM model fit for 4 sequential, multiple-mediation models. However, the fit index results I get with Mplus are all the same, which is highly unanticipated. One example of a model is:

UDO ON HS;
SC ON HS UDO;
PD ON HS UDO SC;

Another is:

SC ON UDO;
HS ON UDO SC;
PD ON UDO SC HS;

These are very different models, yet I get the same fit index results for both. Is there something I'm missing in my syntax that should be used to indicate the sequence of mediations each model proposes? Thanks! 


Please send the two outputs and your license number to support@statmodel.com. 


I am testing a path model and receiving fit indices that appear unrealistically high (RMSEA = 0, TLI/CFI = 1) in model output that includes an error message saying, "THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE FIRST-ORDER DERIVATIVE PRODUCT MATRIX. THIS MAY BE DUE TO THE STARTING VALUES BUT MAY ALSO BE AN INDICATION OF MODEL NONIDENTIFICATION. THE CONDITION NUMBER IS 0.213D-16. PROBLEM INVOLVING PARAMETER 226." I am using MLR estimation with survey weights and clustered standard errors to account for the nested sampling design (children within schools). I believe the error message is due to the fact that I have dichotomous covariates, which I allow to covary for the purposes of the FIML approach to missing data. My sample size is over 16,000 and I have no latent variables, so I believe this is not a model identification problem. When I remove parameter 226, I get the same error message for another covariance between two dichotomous covariates. I have also experimented with setting numerous other covariate paths to zero, but the fit indices and error message remain the same (except for the parameter number). So, should I assume that I have a model with excellent fit and ignore the error message? Or is there some other alternative? Thank you, Aleksandra 


Remove the WITH statements involving the dichotomous covariates. If the message disappears, you can put the statements back and ignore the message. It is triggered because the mean and variance of a dichotomous variable are not orthogonal. 
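The "not orthogonal" remark in the reply above follows from the fact that, for a 0/1 variable, the variance is a deterministic function of the mean: Var = p(1 - p). A quick check with made-up data:

```python
# Hypothetical dichotomous covariate
data = [1, 0, 0, 1, 1, 0, 1, 1]
n = len(data)
p = sum(data) / n                          # sample mean (proportion of 1s)
var = sum((x - p) ** 2 for x in data) / n  # ML (divide-by-n) variance
print(p, var)                              # 0.625 0.234375
print(var == p * (1 - p))                  # True: the mean determines the variance
```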


Hi Linda, yes, when I remove the WITH statement for the covariates, the error message goes away. Thank you for that suggestion! However, when I remove that WITH statement, I still have perfect fit statistics (RMSEA = 0, CFI/TLI = 1). That seems implausible to me. Is it really possible for an empirical model to have perfect fit? Could this be caused by shared method variance? The data for the independent and mediator variables were gathered via survey from a single respondent, i.e., the mother of each child. The dependent variables are direct assessments of children's literacy and numeracy skills. Thank you for any insight you can provide. Best, Aleksandra 


Your model must have zero degrees of freedom to get those values. 


Actually, the model has 2 degrees of freedom. 


So, to clarify, my question is: How is it possible for a model with 2 degrees of freedom to have perfect fit statistics? Thank you! 


Please send the output and your license number to support@statmodel.com. 
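For thread readers puzzled by RMSEA = 0 with 2 degrees of freedom: RMSEA is floored at zero whenever the chi-square statistic does not exceed its degrees of freedom, so a model with positive df can legitimately show RMSEA = 0 (typically accompanied by CFI/TLI at or near 1). A sketch using one common formula; programs differ in details (e.g., N versus N - 1 in the denominator), and the numbers are illustrative:

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation; one common variant."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

print(rmsea(1.3, 2, 16000))            # 0.0 -- chi-square below df
print(round(rmsea(40.0, 17, 500), 3))  # 0.052 -- positive once chi2 > df
```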

Yao Wen posted on Thursday, February 13, 2014  11:26 am



Hi Linda, I ran a cross-classified model using the Bayesian estimator. I found that no model fit indices were reported in the output. I attached part of my syntax below.

ANALYSIS:
TYPE = CROSSCLASSIFIED RANDOM;
ESTIMATOR = BAYES;
PROCESSORS = 2;
CHAINS = 2;
BITERATIONS = (20000);
MODEL:
%WITHIN%
s1-s4 | lit_w by y1-y4;
lit_w on hisp;
y1-y4;
lit_w;
[lit_w@0];
%BETWEEN cl2%
s1-s4;
%BETWEEN tid%
s1-s4;
OUTPUT: TECH1 TECH8 TECH4 TECH10 STANDARDIZED SVALUES;

I received the warning messages below.

*** WARNING in OUTPUT command: STANDARDIZED (STD, STDY, STDYX) options are not available for TYPE=RANDOM. Request for STANDARDIZED (STD, STDY, STDYX) is ignored.
*** WARNING in OUTPUT command: TECH4 option is not available for TYPE=RANDOM. Request for TECH4 is ignored.
*** WARNING in OUTPUT command: TECH10 option is only available with categorical or count outcomes. Request for TECH10 is ignored.

Is there a way to obtain model fit indices in this case? Thank you for your time! 


That has not been developed yet. 

Ellen posted on Saturday, June 28, 2014  12:49 am



I was running a multilevel path analysis with a binary variable (the mediator), using the MLR estimator. I also used Type = Complex Twolevel Random. I have some questions about the model. 1. I am not getting the regular fit indices (chi-square, CFI, TLI, RMSEA); only AIC and BIC are reported. I wonder if I can get chi-square and the other fit indices for the fitted model. 2. I'd like to compute marginal effects for the indirect effect. The model is as follows:

Y ON M X;
M ON X;

M is binary; Y is a continuous variable. Generally, when computing the marginal effect of a binary variable, we multiply the unstandardized coefficient by (1 - mean of the latent variable). For the marginal effects of the indirect effect, do we have to use the general method or other ways? 


1. These are not available with Type = Random because a random slope implies that the DV variance changes over observations, so there isn't a single covariance matrix to test. 2. This is a big and complex topic that is complicated by the binary mediator and the two-level model with Type = Random. My mediation papers on our website deal with the first issue, and our Topic 7 handout and video deal with the second issue. I am not aware of the multiplication approach you mention. 


Hi Profs. Muthen, the fit indices for one of my models are as follows (multilevel, with the moderating variable only at level 2 and the interacting variable at level 1):

RMSEA (Root Mean Square Error Of Approximation)   0.125
CFI                                               0.825
TLI                                               0.703
Chi-Square Test of Model Fit for the Baseline Model
  Value                                        1036.788
  Degrees of Freedom                                 78
  P-Value                                        0.0000
SRMR (Standardized Root Mean Square Residual)
  Value for Within                                0.196
  Value for Between                               0.000

1. Is there any empirical reference you can provide with respect to assessing fit of a multilevel model? 2. Is there anything I can do to improve this fit? 


You may want to ask this general analysis question on SEMNET. You need to show the full input for the model. Also include the chi-square fit for the model. 


Ok Dr. Muthen, Certainly, thank you. Regards 

Qiao Hu posted on Wednesday, November 25, 2015  6:51 am



Are there any cutoffs for the PPP value (posterior predictive p-value) of model fit in BSEM? 


Not really, but see the papers on our website:

Asparouhov, T. & Muthén, B. (2010). Bayesian analysis of latent variable models using Mplus. Technical Report. Version 4.

Asparouhov, T. & Muthén, B. (2010). Bayesian analysis using Mplus: Technical implementation. Technical Report. Version 3. 

May Lee posted on Tuesday, November 15, 2016  10:55 am



Hi Professors, I was running a level-1 model with nested data using TYPE = TWOLEVEL analysis (level 2 only has 21 clusters). The MODEL FIT INFORMATION is below:

Number of Free Parameters                    22
Loglikelihood
  H0 Value                               -269.424
  H0 Scaling Correction Factor             1.5769 for MLR
  H1 Value                               -269.432
  H1 Scaling Correction Factor             1.5769 for MLR
Information Criteria
  Akaike (AIC)                            582.849
  Bayesian (BIC)                          661.184
  Sample-Size Adjusted BIC                591.435
    (n* = (n + 2) / 24)
Chi-Square Test of Model Fit
  Value                                     0.000*
  Degrees of Freedom                            0
  P-Value                                  1.0000
  Scaling Correction Factor                1.0000 for MLR
RMSEA Estimate                              0.000
CFI/TLI
  CFI                                       1.000
  TLI                                       1.000
Chi-Square Test of Model Fit for the Baseline Model
  Value                                    30.388
  Degrees of Freedom                           19
  P-Value                                  0.0471
SRMR
  Value for Within                          0.004
  Value for Between                         0.000

My question is about the Chi-Square Test of Model Fit: value = 0.000, p-value = 1.0000. Does this mean this model is so bad? How can I deal with it? Thanks! 


When you have zero degrees of freedom the model is saturated/justidentified and a test of model fit is not available. 

May Lee posted on Tuesday, November 15, 2016  10:13 pm



Thank you Bengt. 


Hello, I am testing an SEM with at least one categorical dependent variable. I have used the WLSMV estimator and my results are as follows: chi-square(376, N = 865) = 987.996, p < .01, CFI = 0.828, RMSEA = .043. The CFI value indicates that my model does not fit the data well, but the RMSEA seems to indicate that it does. My model is complex (one latent variable and 23 observed variables), and I am wondering if the CFI is not the best indicator of model fit to use in this context. Also, my data are non-normal, and I am wondering if this could affect the fit statistics. 


Look at modification indices to see if the model can be improved. 

Sophie Dan posted on Wednesday, April 26, 2017  7:24 am



Dear Dr. Muthen, if I do a between-level EFA alone, the model fit cannot be accepted, but when I do the two-level EFA with the within level included, the model fit is acceptable. Can I just use the two-level EFA directly? Could the poor model fit when doing the between-level EFA separately be due to a limited number of clusters (for example, with 13 variables but only 45 clusters)? If the number of clusters is limited, is even the two-level (within + between) result untrustworthy? Thanks! 


See my general answer. 

Min Zhang posted on Thursday, May 25, 2017  12:55 pm



Dear Dr. Muthen, I am running a path analysis model with ordinal categorical variables. This is also a multiple-group analysis. I am worried about my model fit:

RMSEA (Root Mean Square Error Of Approximation)
  Estimate                        0.048
  90 Percent C.I.           0.046  0.051
  Probability RMSEA <= .05        0.828
CFI/TLI
  CFI                             0.898
  TLI                             0.849

1. I wonder why my CFI is so low. I understand that CFI is a ratio between the null model and the proposed model and that a low CFI may indicate high correlation between variables. I used the MODINDICES option, but none of the suggested modifications fits my theory. Could you please suggest how I can improve this model? 2. I did not use latent factor modeling; this is merely a path analysis with ordinal categorical variables. Should I even be worried about the model fit? I think a DIFFTEST may be more reasonable to indicate the explanatory power of specific variables. Many thanks for your time. Regards, Min 


This is a good set of questions for SEMNET. 


Hi, when I run a two-level model where I'm just predicting a random intercept, I get the standard fit indices (RMSEA, CFI, TLI, SRMR, etc.). But when I run a two-level model with a cross-level interaction, so I am predicting a random intercept and a random slope, I don't get these fit indices. Reviewers want us to report fit indices; is there anything I can report for these analyses? (Note: the predictors in all analyses are modeled as latent variables, as in X BY X1 X2 X3 X4, so the models are two-level SEMs.) Thanks. 


Chisquare and related fit statistics are available only when means, variances, and covariances are sufficient statistics for model estimation. This is not the case with TYPE=RANDOM. 

Nik Schulte posted on Tuesday, November 07, 2017  1:44 am



Dear Ms. Muthen, what is the correct interpretation of the SRMR (between) and the SRMR (within) in the output of multilevel SEMs? Many thanks in advance! 


We use formula (128) in http://statmodel.com/download/techappen.pdf applied to the within level and the between level separately, as if they were two separate groups. For illustration, run User's Guide example 9.6 with the additional output option OUTPUT: RESIDUAL. You will find in that output that residuals for the covariance parameters are produced separately for the within and the between level, and those (on the correlation scale, however) are the basis for the two SRMRs. The model-estimated within and between variance-covariance matrices are compared to the unconstrained two-level model. The two SRMRs allow you to evaluate model fit separately for the two levels. 
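The per-level SRMR described above can be sketched as a root mean square of residual correlations, computed separately from each level's sample and model-implied matrices. This is a simplified stand-in, not the exact Mplus formula (which is formula 128 in the technical appendix); the matrices below are made up.

```python
import math

def srmr_level(sample_r, implied_r):
    """Root mean square residual over the unique off-diagonal elements
    of one level's correlation matrix (simplified variant)."""
    p = len(sample_r)
    total, count = 0.0, 0
    for i in range(p):
        for j in range(i):
            total += (sample_r[i][j] - implied_r[i][j]) ** 2
            count += 1
    return math.sqrt(total / count)

# Hypothetical within-level sample and model-implied correlations
sample  = [[1.00, 0.50, 0.40],
           [0.50, 1.00, 0.30],
           [0.40, 0.30, 1.00]]
implied = [[1.00, 0.48, 0.42],
           [0.48, 1.00, 0.31],
           [0.42, 0.31, 1.00]]
print(round(srmr_level(sample, implied), 4))  # 0.0173
```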

MS, Kim posted on Wednesday, December 13, 2017  7:49 am



Hello. I have a question about the number of parameters of the unrestricted model in MSEM. For example, there are 2 observed variables (y1g, y2g) on the within level and 1 observed variable (z) on the between level in a model including means. How can I calculate the observed information (one factor of the degrees of freedom related to model identification) in MSEM? 

MS, Kim posted on Wednesday, December 13, 2017  8:20 am



Hello. I have a question about the number of parameters of the unrestricted model in MSEM. For example, there are 2 observed variables (y1g, y2g) on the within level and 1 observed variable (z) on the between level.

usevar = y1g y2g z;
between = z;
model:
%between%
y2g on y1g z;
y1g on z;
%within%
y2g on y1g;

In this case, the Number of Free Parameters is 22, and the Degrees of Freedom (under the Chi-Square Test of Model Fit) is 0. I can't work out how the number of pieces of observed information is 22. How can I calculate the observed information (one factor of the degrees of freedom related to model identification) in MSEM? 


We need to see the full output; send it to Support along with your license number. Also, clarify your last question: I don't know what "observed information" refers to. 


Hello, I am rerunning in Mplus 7 some multilevel models that I previously ran in Mplus 6. For some reason, the values of SRMR (between) for these models are coming out higher in Mplus 7 than in Mplus 6. All other fit indices are identical. The only change I have made to the input is to grand-mean center two within-level predictors using DEFINE (Mplus 7) instead of the old CENTERING option (Mplus 6). But I have not changed what centering is used. Can you explain why the SRMR (between) would be different across Mplus versions, and how their interpretations differ? Many thanks in advance! 


Please send your example to support@statmodel.com. 


Hi, I am running a two-level complex model with a binary DV and integration type MONTECARLO. I only get AIC, BIC, and loglikelihood fit information. Why am I not receiving the other model fit indices? Thanks, Emily 


Because you are not fitting a model to only the means, variances, and covariances of the variables. Raw data are used, which means no conventional SEM overall fit index is available. You could try WLSMV, which uses only such moments. 
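The criteria that do remain available are simple functions of the loglikelihood. The sketch below reproduces the UG example 9.9 values quoted earlier in the thread; the sample size n = 1000 is not shown in that excerpt and is inferred here from the reported BIC, so treat it as an assumption.

```python
import math

def info_criteria(loglik, k, n):
    """AIC, BIC, and Mplus-style sample-size adjusted BIC, which
    replaces n with n* = (n + 2) / 24 in the BIC penalty."""
    aic  = -2.0 * loglik + 2.0 * k
    bic  = -2.0 * loglik + k * math.log(n)
    abic = -2.0 * loglik + k * math.log((n + 2.0) / 24.0)
    return aic, bic, abic

aic, bic, abic = info_criteria(-6752.350, 23, 1000)
print(round(aic, 3), round(bic, 3), round(abic, 3))
# 13550.7 13663.578 13590.529 -- matching the output quoted above
```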


In a CFA with 5 indicators (1 binary, 4 with 3 ordinal categories), n = 910, I get the following fit: chi-square 9.094, df 5, p = 0.1054, RMSEA 0.030, CFI 1.000, TLI 0.999. In the TECH4 correlation matrix, all values are between 0.441 and 0.950. I'm suspicious of the very good model fit. Do you think there is a problem with my model, or could it just be very good? Thank you 


Your sample size is large, but perhaps your sample correlations are small. TECH4 doesn't show the sample correlations but the model-estimated correlations. 


OK, thank you. The correlation matrix from SAMPSTAT gives correlations between .423 and .880. The 4 ordinal variables have the highest correlations, .773-.880. The remaining binary variable has lower correlations, .423-.460. Could it be the binary variable that is causing a problem? Or could it just be good model fit? 


Those sample correlations seem large enough from a power perspective in order to have a chance to reject the model. So it seems that the model truly does fit well. You can also check the bivariate fit statistics under TECH10. 

AMN posted on Wednesday, February 26, 2020  10:01 am



Hello, I am running a multigroup linear growth curve model and I see that my CFI and TLI values match. I have not seen this before and was curious whether this is indicative of a problem or whether it just happens. Below is my model fit output: chi-square = 9.724, df = 6, p-value = 0.1368, RMSEA = 0.102, RMSEA 90% CI = [0, .214], CFI = 0.993, TLI = 0.993, SRMR = 0.037. Thanks! 


That's ok. 


Hello, I'm receiving a parsing error in the following code (I've only included a segment of the code). What is a parsing error and how do I correct it? Thanks

ERROR: Error in parsing line: "Fr_SLS=(Fr1+Fr2+Fr3+Fr4)**2"

MODEL:
Fr BY Fr1* Fr2 Fr3 Fr4 (Fr1 – Fr4);
Coop BY Coop1* Coop2 Coop3 (Coop1 – Coop3);
Comp BY Comp1* Comp2 Comp3 (Comp1 – Comp3);
KnowledgeHide BY KH1* KH3 KH4 (KH1 KH3 KH4);
KnowledgeManp BY KMU3* KMO4 KMO7 (KMU3 KMO4 KM07);
SDiff BY SDif1* SDif2 SDif3 (StatFif1SDif3);
Fr-SDiff@1;
Fr1-Fr4 (Fr5-Fr8);
Coop1-Coop3 (Coop4-Coop6);
Comp1-Comp3 (Comp4-Comp6);
KH1 KH3 KH4 (KH5-KH7);
KMU3 KMO4 KM07 (KM8-KM10);
SDif1 SDif2 SDif3 (SDif4-SDif6);
MODELCONTRAINT:
NEW(Fr_REL, Fr_SLS, Fr_SEV);
Fr_SLS=(Fr1+Fr2+Fr3+Fr4)**2;
Fr_SEV=Fr5+Fr6+Fr7+Fr8;
Fr_REL=Fr_SLS/(Fr_SLS+Fr_SEV);
New(Fr_AVE, Fr_SSL);
Fr_SSL= Fr1**2+Fr2**2+Fr3**2+Fr4**2;
Fr_AVE=Fr_SSL/(Fr_SSL+Fr_SEV); 


We need to see your full output to diagnose this  send to Support along with your license number. 


Hi, I wish to show that my hypothesized model (X -> M -> Y) is a better fit to the data than the reverse model (Y -> M -> X). The reviewer wants us to test whether the DIC (Bayesian multilevel) is better in the hypothesized model versus the reverse model. For reasons we cannot decipher, the number of parameters estimated in the hypothesized model is smaller (12) than in the reverse model (14). Mplus seems to estimate the mean and variance of X in the reversed model but not in the hypothesized model. The results show that the DIC is lower in the hypothesized model. However, if we estimate the mean/variance in the hypothesized model (ensuring the numbers of parameters are equal), this difference goes away. Would you kindly help us understand why more parameters are estimated in one model than in the other? The syntaxes are below:

Hypothesized model:
%WITHIN%
AS_T2;
%BETWEEN%
Sp_Rmat ON GEV_T1 (a);
AS_T2 ON Sp_Rmat (b);
AS_T2 ON GEV_T1 GPerf Gvoice_T1; ! GPerf and Gvoice_T1 are controls
Sp_Rmat ON GPerf Gvoice_T1;

Reversed model:
%WITHIN%
AS_T2;
%BETWEEN%
Sp_Rmat ON AS_T2 (a);
GEV_T1 ON Sp_Rmat (b);
GEV_T1 ON AS_T2 GPerf Gvoice_T1;
Sp_Rmat ON GPerf Gvoice_T1;

Thank you. 


We need to see the full output for both runs; send them to Support along with your license number. 
