
tommy lake posted on Monday, June 11, 2007  1:31 pm



Dear Linda, If I estimate a two-group SEM: X on Y; Y on Z. I estimate this model in three steps: (1) constrain all parameters to be equal across groups, (2) relax the constraints on the structural coefficients, (3) relax the constraints on both the coefficients and the factor loadings. Then I conduct a chi-square difference test, finding that the three models are significantly different from each other, and (3) fits best. My question is: if I report model (3), can I say the coefficient of X on Y is significantly different across the two groups? And can I say the indirect effect from Z to X is significantly different across the two groups? Thanks a lot! 


You mention factor loadings, so I assume that x, y, and z are latent variables. If so, you should first establish measurement invariance before testing structural parameters. How to do this is discussed at the end of the multiple group discussion in Chapter 13 of the Mplus User's Guide, available on the website. This is also shown in the Day 1 handout, along with how to test for structural parameter differences. 

tommy lake posted on Tuesday, June 12, 2007  11:54 am



Dear Linda, Sorry I did not make it clear enough. Let's assume X and Y are latent variables, and Z is not. I have established measurement invariance by constraining all parameters equal across the two groups. After a chi-square difference test, I found that the model fits best when the constraints on the coefficients and factor loadings are relaxed. I understand this means the coefficients and factor loadings, as a WHOLE, are significantly different across groups. My question is: based on such information, can I say one INDIVIDUAL coefficient, from Y to X, is also significantly different across the two groups? Can I say one individual indirect effect, from Z to X, is significantly different across the two groups? Thanks 


Measurement invariance is established by looking at measurement parameters: intercepts and factor loadings in most cases. If these are not the same for both groups, then you do not have measurement invariance. Only after establishing measurement invariance would one compare structural parameters: means, variances, covariances, and regression coefficients of the factors. One would not constrain both measurement and structural parameters equal at the same time to test measurement invariance. 

tommy lake posted on Wednesday, June 13, 2007  1:32 am



Dear Linda, Sorry for my confusion about the concepts. I reread Chapter 13 of the Mplus User's Guide and tried several model tests, yet I still have problems. My model is: f1 by y1 y2 y3; f2 by u4 u5 u6; f1 on f2 x1; f2 on x2; It is estimated in two groups (female and male). My purpose is to compare the coefficient from x2 to f2 across groups. As you suggested, I first test the measurement invariance of the two latent variables, f1 and f2. Then my questions are: 1) Should I test the two factors as in the above model, or test them separately (without ON statements)? I tried both but am not sure which one is correct. 2) I found measurement non-invariance for f1 and f2. Does that mean I have no way to compare structural parameters? Is it possible to fix this problem? 3) In other questions I found you said: "Chi-square difference testing can be used to test the significance of any parameter. You just run a model where the parameter is held equal across groups and another model where the parameter is free across groups." Can I use this method to test the coefficient from x2 to f2 across groups, even though there is measurement non-invariance? Thank you very much for your nice help! 


1. I would do this without ON. 2. If you don't have measurement invariance, it means that the factor is not the same for both groups so it does not make sense to compare the structural parameters. 3. You can do this but it would be hard to justify its meaning. 

tommy lake posted on Wednesday, June 13, 2007  1:10 pm



Dear Linda, Thanks a lot! I am much clearer about the process now. I will reformulate the factors to see if I can get measurement invariance. I have a follow-up question. In the above model, if we have measurement invariance, and if we know the coefficients from x2 to f2 and from f2 to f1 are significantly different across groups, can we say the indirect effect from x2 to f1 is also significantly different across groups? If not, how do we compare indirect effects across groups? 


No. You would need to test the indirect effect using MODEL CONSTRAINT. 
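For readers following along, a minimal sketch of that MODEL CONSTRAINT approach for the two-group model above (the group label "male" and the parameter labels are hypothetical, and measurement invariance is assumed to be imposed elsewhere in the input):

```
MODEL:      f2 ON x2 (a1);
            f1 ON f2 (b1);
MODEL male: f2 ON x2 (a2);
            f1 ON f2 (b2);
MODEL CONSTRAINT:
  NEW(ind_f ind_m diff);
  ind_f = a1*b1;         ! indirect effect of x2 on f1, group 1
  ind_m = a2*b2;         ! indirect effect of x2 on f1, group 2
  diff  = ind_f - ind_m; ! the z-test for diff tests the group difference
```

Parameters defined with NEW get an estimate, standard error, and z-test in the output, so the significance of diff is read directly from the results section.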

sunyforever posted on Wednesday, June 20, 2007  4:03 pm



Prof. Muthen, I have a question related to the above discussion. Suppose I have a model: f1 by y1 y2 y3; f2 by u4 u5 u6; f1 on f2 x1 x2; f2 on x1 x2; We can see x1 and x2 influence f1 both directly and indirectly through f2. My question is: can I compare the total effects of x1 and x2 on f1? Can I use the coefficients of the total effects to argue that one of x1 and x2 is more influential? Thanks 

sunyforever posted on Wednesday, June 20, 2007  4:13 pm



In addition, I have obtained all the indirect effects, and thus the total effects, with MODEL INDIRECT. I just don't know how to compare them. Should I look at their p-values, the structural coefficients, or the standardized coefficients? Thanks 


If the two x variables are on the same scale it would make sense to compare their total effects (without standardization). Model Indirect also gives standardized values so that total effects are expressed using unit x variance, making them comparable. 

sunyforever posted on Thursday, June 21, 2007  11:43 am



Prof. Muthen, Thanks for the quick answer. What if I estimate the above model in two groups (by gender), can I compare the total effects across groups? Assume I have measurement invariance and the x variables are on the same scale. Should I compare their unstandardized coefficients or standardized ones? And should I test whether they are significantly different? 


I would urge across-group comparisons to be made using unstandardized coefficients. Different groups may have different covariate variances, so standardized values can differ across groups even when the unstandardized ones do not. Unstandardized coefficients are more likely to be invariant. These are classic arguments in SEM. 


Prof. Muthen, If the two total effects are close in magnitude, do I need to test their difference? I know how to test the difference between direct effects (with a chi-square difference test), but I am not clear on how to handle indirect effects and total effects. In the above model, how can I test whether the total effects of x1 and x2 on f1 are significantly different? Could you drop me several lines of commands as an example? Thanks 


You can do this using Model Test, which is Wald chi-square testing. In MODEL, you give labels to the five slopes involved, e.g. y on m (p1); m on x1 (p2); m on x2 (p3); y on x1 (p4); y on x2 (p5); In MODEL TEST you use: total1=p1*p2+p4; total2=p1*p3+p5; total1=total2; 

sunyforever posted on Wednesday, June 27, 2007  10:27 am



Prof. Muthen, Thank you for the detailed instruction. I tried to run this model test in a two-group SEM, but always got the error message: "Unknown group name TEST specified in group-specific MODEL command." How can I resolve this problem? Also, can I use this method to test the difference of total effects across groups? Many thanks 


Please send your input, data, output, and license number to support@statmodel.com. 


I have a similar issue (wanting to compare parameters across groups). What I would like is to be able to test differences between parameters in much the same way that it can be done in a linear model. From the following: ... G1 acadach ON hrs2_int*.02 (p1); ... G2 acadach ON hrs2_int*.02 (p9); ... G3 acadach ON hrs2_int*.02 (p17); ... MODEL TEST: P1 = P9; P9 = P17; I get: Wald Test of Parameter Constraints: Value 2.234, Degrees of Freedom 2, P-Value 0.3273. Is this comparable to a multi-df F-test from a linear model? In order to test specific contrasts, do I have to run the model with JUST each contrast in the MODEL TEST statement? Thanks. 


No. Yes. 


MODEL TEST: P1 = P9; However, the above is equivalent to a 1-df contrast, no? 


Yes. 


Hi Linda and Bengt: I have tested for and achieved partial invariance in my measurement model using CFA for the latent variables alone. I have added my covariates and wish to test for structural invariance (factor means, variances, covariances, and regression coefficients). I have found very little in the literature about the best way to do this. I have queried SEMNET and reviewed their archives without success. Can you refer me to any references on how best to move forward? Are there recommendations, similar to those in the UG for measurement invariance, regarding order? Can I perform the analysis using chi-square difference testing? Thank you. Sue ALSO: when I connect to www.statmodel.com I am getting a message from Norton AV indicating that a virus was blocked, specifically trojan.asprox. This has been happening for several days. 


We discuss this extensively in "Topic 1" and "Topic 2" of the 8-part Mplus Short Course series. For a web video of Topics 1 and 2, see our home page under New Mplus Web Videos. This also provides handouts. The home page also has links to information on all of our 8 topics, with Topic 3 and Topic 4 coming up next week at Johns Hopkins University. 


Thank you Bengt. Sue 


Dear Mplus Team, given the situation that X impacts Y directly and indirectly through a mediator M, I have a question on comparing the total effects across groups (WLSMV estimation). I use the following statements to compare 2 groups at a time in my 3-group scenario: Group 1: Y on X (a1); M on X (a2); Y on M (a3); Group 2: Y on X (b1); M on X (b2); Y on M (b3); Group 3: Y on X; M on X; Y on M; MODEL TEST: 0 = (a1+a2*a3) - (b1+b2*b3); ! The following statements that Bengt gave on June 24, 2007 did not work for me: !total1=a1+a2*a3; !total2=b1+b2*b3; !total1=total2; ! This always resulted in the error message: "A parameter label or the constant 0 must appear on the left-hand side of a MODEL TEST statement." Is this approach correct? Secondly, is the Wald test applicable with WLSMV estimation, as I generally use the DIFFTEST option to compare models estimated with WLSMV? Thanks in advance, Majom 


Yes, this looks correct. Yes, the Wald test is applicable also with WLSMV estimation. 


Thanks a lot, Bengt. As I have quite a few parameter constraints to test, I wonder if there is an option to include several MODEL TEST statements at once. For example: MODEL TEST 1: x_group1 = x_group2; MODEL TEST 2: x_group1 = x_group3; MODEL TEST 3: x_group2 = x_group3; Is there any workaround? Or do I have to test each effect in a separate model estimation (in my case this would be more than 100 model tests)? Thanks in advance. 


There is no such option. 
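One workaround (not a substitute for a joint Wald test; the labels here are the hypothetical ones from the post above) is to define each pairwise difference as a NEW parameter in MODEL CONSTRAINT. Mplus then reports an estimate, standard error, and z-test for every difference in a single run:

```
MODEL CONSTRAINT:
  NEW(d12 d13 d23);
  d12 = x_group1 - x_group2;   ! each difference gets its own z-test
  d13 = x_group1 - x_group3;
  d23 = x_group2 - x_group3;
```

Note that these z-tests are separate univariate tests and are not adjusted for multiple comparisons.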

Dave posted on Tuesday, October 25, 2011  9:50 am



I have found a significant interaction in a two-step mediation model like X -> M1 -> M2 -> DV, with the interaction predicting M1. The interaction is between X and a second variable (IV1). Other points to note: X, M1, and M2 are latent variables; DV is a 0/1 binary variable. I am trying to figure out how to interpret the interaction and would appreciate any guidance you can provide. I have considered splitting the sample into groups (low/high) on the moderator and estimating the mediated model for each group. I tried this and the indirect effect appears to be different across the low and high groups. Does it make sense to use the multigroup analysis approach to test for a significant difference in the indirect effects? Reading the earlier posts, I believe I could do this using constraints. Also, does the presence of the significant interaction change the need to show invariance across groups prior to using the multigroup approach to test differences in the indirect effect? The model: Usevariables are X1 X2 X3 X4 X5 M11 M12 M13 M21 M22 M23 M24 M25 IV1 DV Control; Categorical is DV; Missing is .; ANALYSIS: TYPE IS Random; MODEL: int | F1 xwith IV1; F1 By X1 X2 X3 X4 X5; F2 By M11 M12 M13; F5 By M21 M22 M23 M24 M25; F2 on F1 IV1 int; DV on F1 F5 Control; F5 on F2; 


I think you are dichotomizing your IV1 covariate to get a 2-group analysis. That's fine if you don't think you lose too much information. The 2-group analysis highlights the usual assumption of all parameters being the same at these low and high values. The significant interaction concerns only part of what's assumed invariant; other parameters, such as residual variances, are assumed invariant as well. But why abandon the XWITH approach? Interpreting an interaction effect uses the same thinking as in regression (see e.g. the Aiken-West book). 

Xu, Man posted on Monday, March 19, 2012  4:10 pm



Could I just follow up on this thread? I have a two-group SEM model too, with the measurement model constrained to be equal across groups. The structural paths are the key targets of the difference testing. I tried MODEL TEST and specified the two sets of structural coefficients (say 2*n) to be equal, but I found the output only gave an overall Wald test for the full set of parameter constraints. Is there a way to get the Wald test for each parameter, please? Or do I have to manually create n models to test each constraint individually? I also tried setting the structural parameters to be equal across groups in the MODEL part and looked at modification indices, but it was not very obvious to me which paths need to be freed. Thanks! 


If modification indices don't guide you, you would need to do each test separately. 

Xu, Man posted on Tuesday, March 20, 2012  2:07 am



Thank you, Linda. Or maybe I am not very good at using the modification indices. I will watch the relevant Mplus teaching videos on measurement invariance (I believe it is the latter half of Topic 1) and see if I can clarify things. 

Xu, Man posted on Tuesday, March 20, 2012  6:11 am



Dear Linda, I have just tested each pair of the multiple-group structural parameters using MODEL TEST. I found that, although the previous overall Wald test of the structural paths was statistically significant, when the paths are tested one by one, none is significantly different across the two groups. I wonder if there is anything inconsistent between the two approaches, and which one is better. Thanks a lot for any thoughts and advice! The model uses the MLR estimator. Kate 


The overall test can have more power than the individual tests. This is not inconsistent. 

Xu, Man posted on Tuesday, March 20, 2012  10:07 am



Thanks! In this case, would it be safer to base model interpretation on the model with all paths freely estimated across groups? 


I would report the model with the parameters free and report that an overall test of all parameters equal was rejected. 

Yijie Wang posted on Monday, July 09, 2012  7:35 am



Hello, I plan to compare a path coefficient among four groups and want to know the overall difference among the four groups. But I ran into an error message with the following syntax: y on x (a1); model g2: y on x (a2); model g3: y on x (a3); model g4: y on x (a4); model test: 0 = a1-a2; 0 = a1-a3; 0 = a1-a4; 0 = a2-a3; 0 = a2-a4; 0 = a3-a4; Error message: WALD'S TEST COULD NOT BE COMPUTED BECAUSE OF A SINGULAR COVARIANCE MATRIX. Could you help me find the problems in the syntax? Thanks a lot! Yijie 


Please send your output to Support. 

brianne posted on Tuesday, November 13, 2012  7:45 am



Am I correctly understanding the multiple-group analysis to say that if I keep all of the paths free and then use the Wald test to test whether the “a” path is the same, then I can just interpret the moderator model below? GROUPING IS CRISK (1=HIGH RISK 0=LOW RISK); MODEL: EMOR36 on CDIST14 PROGRAM CESDBL; INTR24 on CDIST14; EMOR36 on INTR24; CDIST14 CESDBL INTR24; MODEL HIGH RISK (dichotomized): EMOR36 on CDIST14 PROGRAM CESDBL; INTR24 on CDIST14 (P1); EMOR36 on INTR24; CDIST14 CESDBL INTR24; MODEL LOW RISK (dichotomized): EMOR36 on CDIST14 PROGRAM CESDBL; INTR24 on CDIST14 (P2); EMOR36 on INTR24; CDIST14 CESDBL INTR24; MODEL TEST: 0 = (P1-P2); ANALYSIS: ESTIMATOR IS MLR; OUTPUT: SAMPSTAT MODINDICES(ALL) STANDARDIZED; 


Yes. MODEL TEST tells you if the moderation is significant. 

Jenny L. posted on Monday, May 13, 2013  8:12 pm



Dear Professors, I have a set of two-wave longitudinal data, and I'd like to see whether the associations among several variables vary across time. Specifically, there are 6 paths I'd like to test. It appeared to me that both Wald tests and chi-square difference tests would be applicable, but could you tell me what the difference is? I tried both approaches but got different results. According to the Wald tests, only 1 of the 6 paths was different from T1 to T2. But when I did chi-square difference tests, it looked like 4 of the 6 paths had changed. Here's how I did the chi-square difference tests: I started by constraining all 6 paths to be equal across time and treated this as the baseline model. Then I freed one path at a time and compared each new model's chi-square value with the baseline model's, checking whether the difference was larger than 3.84 (the critical value for df=1). For 4 of the 6 models, the chi-square difference was significant, which seemed inconsistent with what the Wald tests suggested. Another question I had was: when I do Wald tests, should I start by testing all 6 paths at once as an omnibus test (and only move on to test specific paths once the overall test is significant)? Thank you in advance for your help. 


Wald tests and likelihood-ratio tests are expected to give similar results when they have the same df. One is not generally better than the other. It is unclear if you did the Wald tests one at a time, as you did for the LR chi-square. I would test all 6 paths at once. 
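Testing all six paths at once can be sketched as follows: give each T1/T2 pair its own alphanumeric labels in MODEL (the labels below are hypothetical) and list all six constraints in a single MODEL TEST, which produces one 6-df Wald test:

```
MODEL:
  SCC_T1 ON fdbck_T1 (a1);   ! a path at time 1
  SCC_T2 ON fdbck_T2 (b1);   ! the same path at time 2
  ! ...label the remaining five path pairs (a2)-(a6) and (b2)-(b6) the same way
MODEL TEST:
  0 = a1 - b1;
  ! ...one such statement per pair; the six together give one 6-df Wald test
```

This differs from the one-at-a-time approach, where each MODEL TEST run yields a separate 1-df test.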

Jenny L. posted on Monday, May 13, 2013  11:29 pm



Thank you for your prompt reply, Prof. Muthen. Yes, I was doing the Wald tests one at a time, so the results should be similar to those of the LR chi-square. However, while 2 of the paths showed similar results in the two tests, the other 4 were inconsistent. Here's the code I wrote: [Baseline model] model: SCC_T1 on fdbck_T1 SR_T1 auth_T1 (1); fdbck_T1 on bth_T1 dth_T1 pos_T1 auth_T1 (2-5); SR_T1 on bth_T1 dth_T1 fdbck_T1 int_T1 (6); SCC_T2 on fdbck_T2 SR_T2 auth_T2 (1); fdbck_T2 on bth_T2 dth_T2 pos_T2 auth_T2 (2-5); SR_T2 on bth_T2 dth_T2 fdbck_T2 int_T2 (6); [Model for comparison: the path of interest is fdbck on bth] model: SCC_T1 on fdbck_T1 SR_T1 auth_T1 (1); fdbck_T1 on bth_t1 dth_T1 pos_T1 auth_T1 (2-4); SR_T1 on bth_T1 dth_T1 fdbck_T1 int_T1 (6); SCC_T2 on fdbck_T2 SR_T2 auth_T2 (1); fdbck_T2 on bth_T2 dth_T2 pos_T2 auth_T2 (2-4); SR_T2 on bth_T2 dth_T2 fdbck_T2 int_T2 (6); The chi-square difference between the two models was 5.382, whereas the Wald test value was 2.041. All 4 paths that showed inconsistencies between the two tests involved the variable "fdbck." Could you kindly tell me where the code went wrong? Thank you for your advice. 


I don't see MODEL TEST. Send the relevant outputs and your license number to support@statmodel.com. 


Hello, I want to test whether the means are significantly different from one another between classes and have requested a Wald test. However, I'm receiving an error. Can you please help me? Thanks so much. Danyel MODEL ESTIMATION TERMINATED NORMALLY WALD'S TEST COULD NOT BE COMPUTED BECAUSE OF A SINGULAR COVARIANCE MATRIX. 


Please send the output and your license number to support@statmodel.com. 


Hi, I have a question regarding the covariance coefficients. I have a multi-group model, and I want to test whether the covariance between the two factors (f1 and f2) is equal across groups. When the covariance is freely estimated (at the strong factorial invariance step), group 2 shows a statistically significant covariance between f1 and f2, while groups 1 and 3 don't. BUT, when I constrain the covariance between f1 and f2 to be equal across groups, the chi-square difference test comes out nonsignificant, meaning there is no group difference in the covariance between f1 and f2. Can this happen, where I see differences in coefficient significance across groups when freely estimated, but no group difference when the coefficients are equated across groups? Thank you for your help! 


The z-tests that you obtain in the results section of the output compare the regression coefficient to zero. The equality test compares the regression coefficients to each other. A coefficient may be significantly different from zero but not significantly different from another coefficient. For example, on a number line:

0    b1    b3  b2

b3 and b2 may be different from zero but not from each other. 

Ari J Elliot posted on Wednesday, January 14, 2015  5:11 pm



Hello Drs. Muthen, I am conducting multigroup analyses in which I would like to test differences in path coefficients between two groups. I have established that factor loadings are invariant across groups, but intercepts are not. In the model I have set up, factor loadings and intercepts are constrained by default, and I then use the MODEL TEST command to compare specific parameters. Is it appropriate to compare path coefficients obtained using a multigroup model in which intercepts are constrained to be equal when they are in fact different? Instead, could one compare path coefficients when only the (invariant) factor loadings are constrained? Whenever I try to free the intercepts, the model is no longer identified. Wang's book on Mplus states that "equality restrictions have to be imposed on item intercepts in order to make the mean structure part of the model identifiable." This seems to imply that intercepts need to be constrained equal for the model to be identified. However, another SEM program (AMOS) appears able to provide estimates as well as parameter comparisons with only loadings constrained (as well as fully unconstrained, when identified). To summarize: when intercepts are not invariant, does it make more sense to compare path coefficients with intercepts constrained equal or not, and is the latter possible in general and in Mplus? Thank you for your help! 


Path coefficients can be compared with only loading (metric/weak) invariance. For factor mean comparisons, the additional intercept invariance is needed (scalar/strong invariance). You can free the intercepts if you fix the factor means to zero in all groups. Although the distortions may not be large, I wouldn't hold intercepts invariant if that is rejected by testing. The configural, metric, and scalar invariance models can be set up automatically using the MODEL = CONFIGURAL METRIC SCALAR option of the ANALYSIS command. You can also investigate partial invariance with respect to the intercepts. 
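A sketch of the metric-invariance setup described here (the group label g2 and item names are hypothetical): loadings held equal across groups, intercepts freed in the second group, and factor means fixed at zero in all groups for identification:

```
MODEL:
  f1 BY y1-y3;     ! loadings held equal across groups (the Mplus default)
  [f1@0];          ! factor mean fixed at zero in every group
MODEL g2:
  [y1-y3];         ! mentioning the intercepts here frees them in group 2
```

With the factor means fixed rather than estimated, freeing the intercepts no longer leaves the mean structure underidentified.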

Ari J Elliot posted on Wednesday, January 14, 2015  6:03 pm



Ok I see, the model was identified with the factor means fixed to zero. Thank you so much Dr. Muthen for your quick reply. 

Simon Schus posted on Wednesday, May 20, 2015  6:08 am



Hi there, I note that MODEL TEST cannot be combined with bootstrap. Can you advise what one could do to test some direct and indirect effects across multiple groups while retaining the bootstrapping procedure? Simon 


You could try to use MODEL CONSTRAINT to specify the indirect effects. 
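A sketch of how that might look for two groups (variable names, labels, and the group label g2 are hypothetical). Parameters defined with NEW in MODEL CONSTRAINT get bootstrap confidence intervals when CINTERVAL(BOOTSTRAP) is requested:

```
ANALYSIS:  BOOTSTRAP = 5000;
MODEL:     m ON x (a1);
           y ON m (b1);
MODEL g2:  m ON x (a2);
           y ON m (b2);
MODEL CONSTRAINT:
  NEW(ind1 ind2 diff);
  ind1 = a1*b1;
  ind2 = a2*b2;
  diff = ind1 - ind2;    ! group difference in the indirect effect
OUTPUT:    CINTERVAL(BOOTSTRAP);
```

The bootstrap CI for diff then serves the same purpose as the Wald test of the group difference, without requiring MODEL TEST.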

Jinxin ZHU posted on Friday, July 03, 2015  9:54 am



Dear Prof. Muthen, I found a DIF item in my analysis and I decided to keep it. To examine the effect of keeping the DIF item, I want to compare the results of the path analyses with and without the DIF item. Both analyses used a two-step approach employing plausible values and the Rasch model. 1. Would you please suggest whether there is any method I can use to test the differences between the two sets of path coefficients from the path analyses with and without the DIF item? 2. I have used the Wald test for coefficient comparison in multiple-group analysis in another study before. However, this time the analyses with and without the DIF item are two separate analyses, so the scales for the two analyses are different. Is the Wald test still applicable in this case? 3. Do you think it is appropriate to treat the analyses with and without the DIF item as a two-group comparison in one analysis? (Still, what I am concerned about is that the analyses with and without the DIF item are actually based on two different data sets.) Thank you so much. 

ehrbc1 posted on Tuesday, March 01, 2016  3:50 am



Hello, I am trying to compare the total effect of pride on other-focused wellbeing with the total effect of pride on self-focused wellbeing. 2 mediators are involved. I have included my syntax below. I am wondering why I am getting completely different answers depending on whether I use a summing technique for wellbeing (i.e., adding all the individual items together) versus an average score. The model standardised/unstandardised estimates are coming out exactly the same. I also find the exact same problem when I use the model test/Wald analysis. Thanks for your help, Elizabeth MODEL: COMMUNAL on PRIDE (cp); COMMUNAL on COMP (cc); AGENTIC on PRIDE (ap); AGENTIC on COMP; OTHERWB on PRIDE (op); OTHERWB on COMP; OTHERWB on COMMUNAL (oc); OTHERWB on AGENTIC (oa); SELFWB on PRIDE (sp); SELFWB on COMP; SELFWB on COMMUNAL (sc); SELFWB on AGENTIC (sa); MODEL CONSTRAINT: NEW (TOTAL1 TOTAL2 DIFFERENCE); TOTAL1 = ap*sa+cp*sc+sp; TOTAL2 = ap*oa+cp*oc+op; DIFFERENCE = TOTAL1-TOTAL2; 


The sum and the average are on different scales so you should not expect the same estimates. Only the standardized solutions are comparable, so effect sizes are the same. 

ehrbc1 posted on Tuesday, March 01, 2016  7:34 pm



Thanks for your response. So is there a way to run the model constraint option above on the standardised solutions so that the same results are produced irrespective of whether scales are averaged or summed? 


No, but you can standardize the effects in MODEL CONSTRAINT by multiplying and dividing by the appropriate SDs. But why not settle on one or the other, sum or average? It won't matter for the interpretations. 
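For a single-predictor path, the multiplying and dividing mentioned here can be sketched as follows (the variable and parameter labels are hypothetical; with several predictors, the model-implied variance of y picks up additional terms, including covariances among the predictors):

```
MODEL:
  y ON x (b);
  x (vx);            ! variance of x (x brought into the model)
  y (vres);          ! residual variance of y
MODEL CONSTRAINT:
  NEW(bstd);
  ! fully standardized slope: b * SD(x) / SD(y),
  ! where model-implied Var(y) = b^2 * Var(x) + residual variance
  bstd = b*SQRT(vx)/SQRT(b**2*vx + vres);
```

Because bstd removes the outcome's scale, it is the same whether the outcome is a sum or an average of the items.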

ehrbc1 posted on Wednesday, March 02, 2016  7:24 pm



Hello, Yes, I plan to just use one or the other; I stumbled across this problem when I was switching one study over to summed scores for consistency purposes. However, it does appear to be affecting the interpretation of the model constraint difference scores (total effects or mediation pathways). Not in all cases, but in some it does. Using the following syntax, if I use summed scores for wellbeing (WB), 0 lies within the CI (but it doesn't when I use the average). COMMUNAL on PRIDE (cp); COMMUNAL on COMP (cc); AGENTIC on PRIDE (ap); AGENTIC on COMP; OTHERWB on PRIDE (op); OTHERWB on COMP; OTHERWB on COMMUNAL (oc); OTHERWB on AGENTIC (oa); SELFWB on PRIDE (sp); SELFWB on COMP; SELFWB on COMMUNAL (sc); SELFWB on AGENTIC (sa); MODEL CONSTRAINT: NEW (MED1 MED2 DIFFERENCE); MED1 = cc*oc; MED2 = cc*sc; DIFFERENCE = MED2-MED1; Is this to be expected given that the difference score is based on unstandardised estimates, or am I making some type of error? The correlation between summed and average scores is perfect, however. I do note that the IV is average-scored and the outcome is summed. Thank you, Elizabeth 


Sounds like some sort of error. You can also request bootstrap CIs and see if you get different results. 

ehrbc1 posted on Thursday, March 03, 2016  9:31 pm



Thank you! My interpretations are now consistent across summed and average scores. I would now like to compare the difference in the total effects of pride on SELFWB and pride on OTHERWB, preferably using MODEL CONSTRAINT. So: MODEL CONSTRAINT: NEW (TOTAL1 TOTAL2 DIFFERENCE); TOTAL1 = sp+ap*sa+cp*sc; TOTAL2 = op+ap*oa+cp*oc; DIFFERENCE = TOTAL2-TOTAL1; I'm unsure of what ANALYSIS and OUTPUT commands to state for these total effect comparisons. Thanks. 


None unless you want bootstrapping. But I am not sure I understand your question. 

ehrbc1 posted on Saturday, March 05, 2016  7:53 pm



Hi Bengt, Unfortunately, I am still experiencing some issues depending on whether I sum or average variables. For example, using the constraint below, the unstandardised estimate divided by its standard error remains pretty much the same (a couple of decimals' difference) for each indirect effect, regardless of summing or averaging. It's the new difference score testing the difference between the indirect effects that has a different significance level and a different estimate-to-standard-error ratio when I switch from average to summed scores. MODEL CONSTRAINT: NEW (MED1 MED2 DIFFERENCE); MED1 = cc*oc; MED2 = cc*sc; DIFFERENCE = MED2-MED1; I should note that the outcome variables in the mediation pathways above (so OTHERWB and SELFWB) have different numbers of individual items (36 versus 9), although both are measured on a 1-7 Likert scale. Do you have any insight into what could be producing this discrepancy across summed and averaged scales? Thank you. 


Please send the outputs and your license number to support@statmodel.com. 

Pia H. posted on Wednesday, April 13, 2016  2:26 am



Hello, I think this has been written about before, but just to make sure I've got it right: I have a SEM with two groups and two latent factors with categorical indicators, one of which is regressed on the other. To find out if the regression coefficients are significantly different between the two groups, do I use one model where the regression between the factors is free and another model where it is held equal across groups, and compare the model fit using DIFFTEST? I'm not sure, but I thought I read that it is not possible to constrain an ON statement. Thank you very much, Pia 


ON statements can be constrained. 


Dear Drs. Muthen, I am doing a multiple-group comparison (two groups) for the structural paths. Below is a simplified model: X -> M1 -> M2 -> Y Model indirect: Y ind x; The results showed that in group 1, x was NOT significantly associated with M1, but in group 2, x was significantly associated with M1. A chi-square test showed that this path was significantly different between the two groups. The rest of the paths were significant but did not differ across groups. The bootstrapping results showed that the 95% CI of the indirect effect excluded zero for group 2. In this case, can I conclude that the mediating effect held in group 2, while no mediating effect was found in group 1? Or do I still have to test the difference of the indirect effects? I don't think I need to test the difference because in group 1, x was NOT significantly associated with M1, so the indirect effect was not significant anyway. However, I want to check with you. Also, as for reporting results, I read your conversation with Xu, Man on March 20, 2012; it seems that you suggest reporting the model with parameters free and reporting the path difference test. I wondered if that applies to my case too. My results look somewhat different when I constrain all paths to be equal. Thanks so much, Jing 
