Message/Author 


Dear Linda, I have run a unit-level analysis and found that the results differ substantially when I apply ANALYSIS: TYPE = COMPLEX. I used the same specification twice, once with this command line and once without, and the model fit indices (especially RMSEA) from the two runs are strikingly different. With ANALYSIS: TYPE = COMPLEX; the fit indices were RMSEA = .032, CFI = .923, TLI = .914. Without the ANALYSIS command, they were RMSEA = .093, CFI = .897, TLI = .889. In my analysis I also used CLUSTER = location; STRATIFICATION = region;. I would like to know why this happens, and which run is more appropriate given the nature of my study. Thanks. Pat 


If you have nested data and the results differ when you use COMPLEX, then you should use COMPLEX. When you do not use COMPLEX, chi-square is inflated, giving a worse RMSEA than when you take the nested nature of your data into account. 
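To see the mechanism, here is a minimal Python sketch of the standard RMSEA point-estimate formula. The chi-square values below are hypothetical, chosen only to illustrate how an inflated chi-square translates into RMSEA values like the two reported above:

```python
import math

def rmsea(chi2, df, n):
    """Point estimate of RMSEA from the model chi-square:
    sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical numbers: same model, same df and N, but the naive
# (non-COMPLEX) run has an inflated chi-square because the
# clustering is ignored.
df, n = 100, 1000
chi2_naive = 965.0    # chi-square ignoring the nesting (inflated)
chi2_complex = 202.0  # design-corrected chi-square

print(round(rmsea(chi2_naive, df, n), 3))    # -> 0.093
print(round(rmsea(chi2_complex, df, n), 3))  # -> 0.032
```

The same model can thus move from poor to acceptable RMSEA purely because the design-corrected chi-square is smaller.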


Thanks for the advice. But when I use COMPLEX, I get Probability RMSEA <= .05 equal to 1.000, whereas when I do not use COMPLEX, probability RMSEA is 0.000. What does this mean? Is the result from the model using COMPLEX still appropriate? Thanks. 


This agrees with what is expected. Probability RMSEA <= .05 is the test of close fit, so a value near 1.000 supports close fit. With COMPLEX you should obtain a better RMSEA, which you do. You should use COMPLEX with your data. 

Luo Wenshu posted on Monday, December 05, 2011  12:22 am



Dear Linda, I am testing an SEM model with 13 latent variables and 2 observed covariates. Three variables have large ICCs (>.10) (cluster size about 15). I tried estimating a two-level measurement model, but the estimation did not converge, so I tried two alternatives.

One is to use composite scores to run a two-level path model. By specifying the same free paths across the two levels, I got this message: THE STANDARD ERRORS OF THE MODEL PARAMETER ESTIMATES MAY NOT BE TRUSTWORTHY FOR SOME PARAMETERS DUE TO A NON-POSITIVE DEFINITE FIRST-ORDER DERIVATIVE PRODUCT MATRIX. THE NON-IDENTIFICATION IS MOST LIKELY DUE TO HAVING MORE PARAMETERS THAN THE NUMBER OF CLUSTERS. However, I think it is difficult to reduce the number of parameters below the number of clusters, and I am not very interested in the model at the second level.

The other way is to use TYPE = COMPLEX to account for the dependence of the data within each cluster when estimating the parameters at the first level. Rather than using the composite scores, I specified both a measurement model and a structural model. The model fits well, but I got a warning similar to the one from the first analysis. Can I trust the results from these two analyses, especially the second? 

Li Lin posted on Monday, December 05, 2011  12:01 pm



Hi, I'm running a two-level SEM model with dichotomous outcomes. I would like to get the BLUP for u; for example, a BLUP is E(u) = GZ'V^(-1)(y - Xb). How can I get the BLUP in Mplus? Does Mplus give the variance components? 


Luo: You have non-independence of observations at the individual level. Only at the cluster level do you have independence of observations. The impact of having more parameters than clusters has not been well studied, but it may affect your results. The only way to know the impact is to do a Monte Carlo study similar to your own study and see the impact on parameter estimates, standard errors, and fit statistics. 
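As a rough illustration of what such a Monte Carlo study reveals, here is a stdlib-only Python sketch with entirely hypothetical values (ICC = .20, 50 clusters of 15). It shows how badly a naive i.i.d. standard error understates the true sampling variability of even a simple mean when the data are clustered:

```python
import math
import random
import statistics

random.seed(1)

def simulate_mean_and_naive_se(n_clusters=50, m=15, icc=0.20):
    """One replication of clustered data y_ij = u_j + e_ij.

    Returns the sample mean and the naive (i.i.d.) standard error,
    which ignores the clustering.
    """
    tau = math.sqrt(icc)        # between-cluster SD
    sigma = math.sqrt(1 - icc)  # within-cluster SD (total variance 1)
    y = []
    for _ in range(n_clusters):
        u = random.gauss(0, tau)
        y.extend(u + random.gauss(0, sigma) for _ in range(m))
    return statistics.fmean(y), statistics.stdev(y) / math.sqrt(len(y))

reps = [simulate_mean_and_naive_se() for _ in range(500)]
empirical_se = statistics.stdev(r[0] for r in reps)  # true sampling SD of the mean
avg_naive_se = statistics.fmean(r[1] for r in reps)

# With ICC = .20 and clusters of 15, the design effect is
# 1 + (15 - 1) * .20 = 3.8, so the naive SE should be too small
# by a factor of roughly sqrt(3.8), i.e. about 1.95.
print(empirical_se / avg_naive_se)
```

A study mimicking one's own design (same cluster count, cluster sizes, and model) would replace the simple mean with the full model, as in the Monte Carlo facilities Linda mentions.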


Li: Mplus does not provide BLUP. 

Luo Wenshu posted on Monday, December 05, 2011  7:32 pm



Hi Linda, Thank you for your reply. My data actually have three levels (students, classes, and schools). In Mplus, if I run SEM with TYPE = COMPLEX, I can only treat the data as having two levels (classes as the second-level units). Is it OK to ignore the school level for this analysis? By setting TYPE = COMPLEX, will the results of the SEM analysis be the same as if the data were treated as having three levels? Indeed, I found that the level 3 variance + level 2 variance in a three-level variance decomposition equals the level 2 variance in a two-level variance decomposition. 

Luo Wenshu posted on Tuesday, December 06, 2011  1:19 am



Hi Linda, When we set TYPE = COMPLEX to run SEM, we can obtain a latent correlation matrix if we include TECH4 in the OUTPUT command. Is this correlation matrix the pooled-within-groups latent correlation matrix or a correlation matrix based on the disaggregated data? Thank you very much. 


It sounds like the school variance is small, so it could perhaps be ignored. If you don't want to ignore it, you can use TYPE=COMPLEX TWOLEVEL. See the introduction to Chapter 9 and pages 500-501 in the user's guide. TECH4 is not the pooled-within matrix. See the SAMPLE option of the SAVEDATA command for the pooled-within matrix. 
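The identity Luo observed can be checked with simple arithmetic; in this hypothetical decomposition (all variances made up), collapsing schools and classes into a single between level simply adds the two upper-level variances together:

```python
# Hypothetical three-level variance decomposition (made-up values):
var_student = 0.80  # level 1 (within classes)
var_class   = 0.12  # level 2 (between classes, within schools)
var_school  = 0.08  # level 3 (between schools)

# A two-level decomposition that clusters on classes absorbs the
# school variance into the between-class variance:
var_between_2lev = var_class + var_school
icc_2lev = var_between_2lev / (var_student + var_class + var_school)

print(round(var_between_2lev, 2))  # -> 0.2
print(round(icc_2lev, 2))          # -> 0.2
```

So a small school variance simply shows up as part of the between-class variance when only two levels are modeled.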

Luo Wenshu posted on Tuesday, December 13, 2011  10:37 pm



Hi Linda, I am testing a complex mediation model with multiple independent variables, multiple mediators, and multiple outcomes. I can obtain the indirect effects and their significance tests in Mplus. I found in the Mplus output that the size of the indirect effect is the product of the path coefficient from X to M (a) and from M to Y (b), and that the standard error can be obtained using Sobel's formula. Could you please let me know how the significance test is conducted? Is it based on the ratio ab/SE, assuming a normal distribution? According to MacKinnon's articles, the product usually does not follow a normal distribution. When the sample size is large, does the distribution approach normality? I know we can use the bootstrap method to test significance; however, I use TYPE = COMPLEX, and the bootstrap cannot be run for this type of analysis. 


Note that we use delta method standard errors. There is a FAQ on the website that discusses the difference. Yes, when the sample size is large, the distribution does approach normality. Indirect effects are generally not that non-normal. 
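For reference, the delta-method (Sobel) computation behind that significance test can be sketched in a few lines of Python; the path estimates and standard errors below are hypothetical:

```python
import math

def sobel_test(a, se_a, b, se_b):
    """Delta-method (Sobel) standard error and z for the indirect
    effect a*b:  SE(ab) = sqrt(b^2 * se_a^2 + a^2 * se_b^2)."""
    ab = a * b
    se_ab = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return ab, se_ab, ab / se_ab

# Hypothetical path estimates and standard errors:
ab, se_ab, z = sobel_test(a=0.40, se_a=0.10, b=0.30, se_b=0.08)
print(round(ab, 3), round(se_ab, 3), round(z, 2))  # -> 0.12 0.044 2.74
```

The z ratio ab/SE is then compared against the standard normal distribution, which is the large-sample approximation discussed above.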


Hi Linda (or anyone else answering here), I have relatively complex, longitudinal data with students clustered in classrooms. Repeatedly I find effects (with low p-values) if I
- do a single-level analysis, or
- use a sandwich estimator with TYPE = COMPLEX.
But modelling the longitudinal model as two-level inflates p-values (or gives wide CIs with Bayesian estimation). Intraclass correlations vary from quite moderate (e.g., .01) to relatively high (e.g., .05 or higher), and N is anywhere between 400 and 1400. I am for the time being not so interested in the between level, but I think I need to account for the interdependence of the observations. The trouble is, COMPLEX and TWOLEVEL give me very different conclusions for the part I am interested in. Am I correct in believing that simulation studies indicate TWOLEVEL is more reliable than COMPLEX? A yes to this question would be unfortunate as far as my analyses are concerned... I hoped you might be able to comment on this. Best, Chris 


A growth model is what is called a disaggregated model. You should use TWOLEVEL, not COMPLEX, for disaggregated models. 
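A complementary way to see why single-level p-values are too optimistic here is the design effect. This Python sketch uses the ICC range Chris reports with an assumed classroom size of 20 (the cluster size is hypothetical):

```python
import math

def design_effect(avg_cluster_size, icc):
    """Approximate variance inflation from cluster sampling:
    DEFF = 1 + (m - 1) * ICC."""
    return 1 + (avg_cluster_size - 1) * icc

# Hypothetical classroom size of 20 students per class.
for icc in (0.01, 0.05):
    deff = design_effect(20, icc)
    # Naive standard errors are too small by roughly sqrt(DEFF),
    # so naive p-values are too small as well.
    print(icc, round(deff, 2), round(math.sqrt(deff), 2))
```

Even an ICC of .05 inflates sampling variance by nearly a factor of two with clusters of 20, which is why single-level and design-corrected conclusions can diverge.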

babs posted on Monday, April 23, 2012  6:46 am



Hello, I am new to Mplus, so please forgive my beginner questions. I am considering the following situation: 538 customers are nested within 195 sales representatives, who are nested within 12 companies. As the model type I chose TWOLEVEL COMPLEX; however, I am not sure whether I need a random slope in my case. My model looks like this so far:

CLUSTER = Company SalesREP;
ANALYSIS: TYPE = TWOLEVEL COMPLEX;
MODEL:
%WITHIN%
Cw BY y1-y2; ! customer loyalty
%BETWEEN%
Ab BY x1-x4; ! sales rep customer orientation
Bb BY z1-z4; ! sales rep job satisfaction
Cb BY y1-y2; ! customer loyalty
Cb ON Ab Bb;

But I would also like to include an effect Cw ON Ab Bb. How can I do this, and what would the syntax look like in this case? I am not sure how to transfer the user's guide examples for random slopes to my case. Furthermore, is the command CLUSTER = Company SalesREP; suitable if I want to take the third level (companies) into account? Thank you so much for your help! 


Please note that 12 companies are not enough for COMPLEX or TWOLEVEL. A minimum of 30-50 clusters is recommended. Instead of using COMPLEX, include 11 dummy variables in the analysis. You can't include Cw ON Ab Bb. The most you can do is create a customer loyalty factor using the between parts of y1 and y2, as you have done. 
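The dummy-variable approach is ordinary fixed-effects coding of the 12-level company identifier: 11 indicators with one company as the reference category, entered as covariates. A quick Python sketch (company labels hypothetical):

```python
# Hypothetical company labels: co01 .. co12, with co01 as the
# reference category.
companies = [f"co{i:02d}" for i in range(1, 13)]

def dummy_code(company):
    """Return the 11 dummy indicators for one observation
    (the reference company co01 gets all zeros)."""
    return [1 if company == c else 0 for c in companies[1:]]

print(dummy_code("co01"))       # reference: eleven zeros
print(sum(dummy_code("co05")))  # any other company: exactly one 1
```

These 11 columns absorb the company means directly in the model, which is why the CLUSTER correction for companies is no longer needed.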


Hello, I have a similar setting to the one Luo Wenshu posted on Monday, December 05, 2011, 12:22 am. I am using TYPE = COMPLEX and get the same error message, as I have more parameters than clusters (16 clusters), but I am not able to reduce the number of parameters. I just want to control for the second level, as my estimated intraclass correlations suggest that I have nested data, but I am really only interested in the level 1 structure. Linda suggested running a Monte Carlo study to see the impact on the parameter estimates, standard errors, and fit statistics. Can you please specify what I would have to do? Thank you for your help! 


You can look at Examples 12.6 and 12.7 in the user's guide. Typically, you don't have to worry about this. Note, however, that 16 clusters is a bit too small for TYPE=COMPLEX standard errors to be well estimated; at least 20 is the rule of thumb. You can also try Bayes, where a smaller number of clusters can be handled, although TYPE=COMPLEX is not available with Bayes, so you would have to do a two-level analysis instead. 
