Message/Author 

Anonymous posted on Wednesday, April 06, 2005 - 5:17 am



I am running a MIMIC/DIF model with 10 ordinal items loading on a single factor and one covariate. Model (1) is: F by ITEMS1-10; F on COV; Model (2) adds: ITEMS2-10 on COV; ITEM1 on COV@1; In (1) the estimate for F on COV is 0.4 (say), and this makes sense, as people high on the (dichotomous) COV are higher on the factor. In (2) the estimates for ITEMS2-10 on COV are all positive and sensible, but that for F on COV is now -0.8 (say). The switch is unrelated to which item is fixed to set the scale and occurs for a number of covariates. Assuming the model is specified correctly, the switch in sign seems odd and not obviously (or comfortably) interpretable (to me at least). I've not seen any mention of it in the MIMIC/DIF literature. Any thoughts? 

BMuthen posted on Friday, April 08, 2005 - 2:47 am



Model 2 is unusual because all of the items have a direct effect from the covariate. Also, the direct effect for item 1 is artificially fixed to one. I would instead, in Model 2, state: items1-10 ON cov@0; and look at modification indices to see which direct effects should be included in the model. 
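Written out as an Mplus input sketch, the suggested starting point looks like this (the variable names follow the post; this is a sketch of the advice, not a verbatim quote):

```
MODEL:
  f BY item1-item10;      ! single factor measured by the 10 ordinal items
  f ON cov;               ! structural effect of the covariate on the factor
  item1-item10 ON cov@0;  ! all direct effects fixed at zero to start
OUTPUT:
  MODINDICES;             ! flags which direct effects may need to be freed
```

Direct effects are then freed one at a time, guided by the modification indices.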

Anonymous posted on Saturday, April 09, 2005 - 7:18 pm



I'm probably having some difficulty working out exactly the model fitted by various authors. The model above was based on my reading of Christensen et al. 1999 (Psychological Medicine 29(2), 325-339), where the MIMIC model as described in the footnote to Fig. 1 seems to say that a simultaneous model is fitted with all paths from all COVs to all items free, except for the set of paths to a single item, which is fixed to 1 to set a scale (the direct effect you refer to above). One could (and probably should) simplify that via modification indices, but in principle you could still have all the items there. I had initially also read your paper Gallo et al. (1994) as doing the same thing, but on re-reading I think that's wrong and take it that Table 6 is the result of a whole series of MIMIC models as follows, where SINGLE_ITEM varies: F by ALL_ITEMS; F on MULTIPLE_COVARIATES; SINGLE_ITEM on SINGLE_COVARIATE; Am I misreading, and how do we sort out what seem to be different approaches? 

BMuthen posted on Sunday, April 10, 2005 - 2:40 am



The model with covariates influencing both factors and all of the items directly is not identified. Artificially fixing one of the direct effects to a nonzero value does not make sense to me. A better approach is to start with a model with no direct effects and free the direct effects that are needed. I think that is what was done in the Gallo et al paper. 

Anonymous posted on Sunday, April 10, 2005 - 4:49 pm



Thank you for your comments. Fixing particular direct effects for identification certainly makes for an awkward interpretation (Grayson et al. 2000, Journals of Gerontology 55B(5), P273-P282, who appear to use a similar model to Christensen et al., go through this extensively). Can I take it that your suggested approach comes down to having indirect effects to all items but limiting direct effects to a single item at a time? (And even if you could have more than one item, and avoid funny fixes for identification issues, that the resulting model would not be sensible?) Sorry to drag this out: I really do appreciate your help. 

BMuthen posted on Monday, April 11, 2005 - 5:49 am



My approach advocates adding one direct effect at a time. It is expected that the number of direct effects is small in comparison to the total number. A model with many direct effects would have so little measurement invariance as to be of little interest. 

Christina Ow posted on Saturday, November 25, 2006 - 10:09 pm



When running a MIMIC model to test for measurement invariance, it isn't clear to me why you should constrain a direct path from the covariate to the factor to 0 (i.e., use the statement "f1 on x@0"). My notes from the short courses say that this will "open up the matrix for modification indices", but I don't understand that (bad note taking that day). Could you explain? Does this just tell the program that the observed variable (covariate) is included in the model, so that when you ask for MIs you can see where direct effects might be included? 


If you have 6 y variables and 2 x variables, you would say: y1-y6 ON x1-x2@0; This allows you to obtain modification indices for the direct effects. 


Thanks so much for your help. I tried what you suggested, but I still have questions. I want to 1) test for factorial invariance and DIF for males vs. females, and 2) be able to explain why a statement like Y1-Y6 ON X1-X2@0 provides the modification indices I am looking for. I thought that by fixing the path from the covariate to the factor, it is like adding the covariate to the observed correlation matrix, which then incorporates the variable into the estimated model, giving you modification indices for all direct paths from the covariate. It appears that to get all the MIs, I need two statements: f1 on group@0 AND cq2 ... ON group@0. Can you explain why I need both statements? Is it appropriate to run both at the same time? My model is: f2 by cq2 cq7 cq11 cq18 cq22 cq13 cq26; f2 on group@0; cq2 cq7 cq11 cq18 cq22 cq13 cq26 on group@0; 


The statement y1-y6 ON x1-x2@0; simply opens a matrix so that you obtain modification indices for those regression coefficients. Otherwise, the matrix would not be open and you would not obtain modification indices. It is a technical, behind-the-scenes issue that has nothing to do with model estimation. We open as few matrices as possible for space and speed considerations. When you say f2 ON group@0; you are fixing the regression coefficient for f2 regressed on group to zero. If you are testing for measurement invariance, you want modification indices for direct effects. A significant direct effect shows differential item functioning (DIF). 
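Putting this exchange together, a sketch of the invariance-testing input using the variable names from the question above (here the structural path f2 ON group is left free so the group difference on the factor is estimated, while the direct effects are held at zero only to open the matrix for modification indices):

```
MODEL:
  f2 BY cq2 cq7 cq11 cq18 cq22 cq13 cq26;
  f2 ON group;                                   ! group difference on the factor
  cq2 cq7 cq11 cq18 cq22 cq13 cq26 ON group@0;   ! opens the matrix; effects fixed at 0
OUTPUT:
  MODINDICES;   ! a large MI for an item ON group points to DIF
```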

SSK posted on Tuesday, December 08, 2015 - 10:49 am



Hello, I have a complex MIMIC model with four categorical outcome variables with 5 categories regressed on three latent variables and a whole host of binary covariates. [Example of model: Categorical are Y1 Y2 Y3; Cluster is Village; .... U1 by a1-a10; U2 by b1-b10; U3 by c1-c10; U4 by d1-d10; U1-U3 on U4; Y1 Y2 Y3 on D x1-x15; x15 on U1-U3] I have a strong model with good fit statistics. But I am having some issues interpreting my results. I have gone through your handouts and videos and understand how to interpret it as you stated in the Topic 1 video, but my reviewers are seeking probabilities/marginal effects. Do I use the same formula in Chapter 14 for probit and logit probabilities? How do I handle the latent variable in this calculation? If there is any paper you recommend where coefficients have been expressed as probabilities in MIMIC or SEM, I would be very grateful. 


You can get some ideas from the example ending with slides 162-264 of the Topic 2 handout. 

SSK posted on Wednesday, December 09, 2015 - 10:13 am



Thank you, Bengt, for your reply. On slide 163 of Topic 2, you calculate item probabilities for probit regression. I'm a little confused about where to get the lambda estimate when a latent variable is informed by several observed variables. For logit coefficients, would I just simply interpret odds as in slide 34 (Topic 2 handout)? Also, you refer to Topic 2 slides 162-264, but I found that there were only 214 slides. This is the version I was looking at: https://www.statmodel.com/download/Topic%202v14.pdf Is this what you were referring to? Sorry if these are quite simplistic questions! Thank you 


If you don't have factor loadings through f BY ...; you have regression slopes through ... ON f; so that's the same thing. Yes, logit slopes are handled via odds. I meant to say 162-164. 
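As a numerical illustration of the two links discussed here, the sketch below computes P(u = 1 | f) for a binary item under both probit and logit, and the odds ratio exp(slope) for the logit case. The threshold and slope values are made up for illustration, not taken from the handout.

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def item_prob_probit(tau, lam, f):
    # P(u = 1 | f) for a binary item under the probit link:
    # linear predictor lambda*f minus the threshold tau
    return norm_cdf(lam * f - tau)

def item_prob_logit(tau, lam, f):
    # Same quantity under the logit link
    return 1.0 / (1.0 + math.exp(-(lam * f - tau)))

# Hypothetical values: threshold 0.5, loading/slope 0.8
tau, lam = 0.5, 0.8
for f in (-1.0, 0.0, 1.0):
    print(f, round(item_prob_probit(tau, lam, f), 3),
             round(item_prob_logit(tau, lam, f), 3))

# Under the logit link, exp(lambda) is the odds ratio for
# a one-unit increase in the factor
print(round(math.exp(lam), 3))
```

Both links give probabilities that rise with the factor value; the logit slope additionally has the direct odds-ratio reading mentioned in the reply.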

SSK posted on Friday, January 15, 2016 - 2:35 am



Hi there, if you are trying to get probabilities for a categorical latent outcome informed by another latent variable that is also categorical, is it correct to estimate the probabilities using the scale of that latent variable? So imagine: Y by a1-a5*; U by a6-a10*; [* categorical, scale 1-5] Y on U Z1 Z2 Z3; [Z = binary covariates] so: P(Y=1|x) = F(t1 - U*x1 + Z1*x2 ...), where U and Z1 are the unstandardised coefficients, = F(0.842 - (0.206*4) + (0.346*0) + ... The remaining levels of the equation would be: P(Y=2|x) = F(t2 - b1*x1 - b2*x2 ...) - F(t1 - b1*x1 - b2*x2 ...); P(Y=3|x) = F(t3 - b1*x1 - b2*x2 ...) - F(t2 - b1*x1 - b2*x2 ...) - F(t1 - b1*x1 - b2*x2 ...); P(Y=4|x) = F(t4 + b1*x1 - b2*x2 - b3*x3 ...) - F(t3 - b1*x1 - b2*x2 ...) - F(t2 - b1*x1 - b2*x2 ...) - F(t1 - b1*x1 - b2*x2 ...); P(Y=5|x) = F(t4 + b1*x1 + b2*x2 + b3*x3 ...). Am I on the right path? 
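For comparison, the standard ordered-probit formula subtracts only the single adjacent threshold term for each middle category (not a running chain of subtractions), with the bottom and top categories anchored at 0 and 1. This sketch checks that numerically; the thresholds are hypothetical, and the linear predictor reuses the slope values quoted in the post purely as an illustration.

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ordinal_probit_probs(thresholds, eta):
    # P(Y = k | x) under the probit link for an ordered outcome:
    # adjacent differences of F(threshold - linear predictor),
    # padded with 0 below the first cut and 1 above the last
    cdfs = [norm_cdf(t - eta) for t in thresholds]
    cuts = [0.0] + cdfs + [1.0]
    return [hi - lo for lo, hi in zip(cuts, cuts[1:])]

# Hypothetical thresholds: 4 cuts -> 5 categories
thresholds = [-1.0, 0.0, 0.8, 1.6]
# Linear predictor built from the slope*value products in the post
eta = 0.206 * 4 + 0.346 * 0
probs = ordinal_probit_probs(thresholds, eta)
print([round(p, 3) for p in probs])
```

By construction the five category probabilities are non-negative and sum to one, which is a quick sanity check on any hand-computed version of these equations.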

SSK posted on Friday, January 15, 2016 - 7:45 am



Sorry, one more question: to get probabilities for a moderator within a MIMIC model, what is the equation when dealing with latent variables? Say my model is: U1 by a1-a5; U2 by b1-b5; U3 by c1-c5; U4 by d1-d5; U1 on U2 U3 U4; Y1 on U1; where Y1-Y4 are categorical observed outcome variables with 5 levels. I want to interpret the effect of the latent variables on each other (U1 on U2 U3 U4) as probabilities, but if this is not possible, do I just interpret it as you would a normal regression (a 1-unit increase in U2 leads to a .5 SD increase in U1)? Bit confused!!! Greatly appreciate any help you could provide! 


First post: Your statements Y BY ...; and P(Y=1|x) are contradictory, because Y BY defines a continuous factor, not a categorical one. Categorical latent variables are latent classes and are not defined by BY. 


Second post: Use regular linear regression interpretations because you have continuous latent variables. 


Hello, I ran a MIMIC model using WLSMV to estimate DIF effects between certain background variables and depression items (coded from 0 to 3). If I understand correctly, under WLSMV estimation DIF coefficients are obtained using probit regression, and they should be interpreted as a change in z-score for a one-unit increase in the predictor. I would like to know whether there are any guidelines for interpreting the magnitude of standardized probit coefficients. For instance, can I say that standardized probit coefficients below 0.30 indicate a small effect? I hope you can help advise. Many thanks. 


I also tried to run the MIMIC model using MLR estimation in order to obtain odds ratios for DIF effects. My factor model consisted of 4 factors (N=3107). I used INTEGRATION = MONTECARLO(5000), but the model did not converge. The error message was: THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO A NONZERO DERIVATIVE OF THE OBSERVED-DATA LOGLIKELIHOOD. THE MCONVERGENCE CRITERION OF THE EM ALGORITHM IS NOT FULFILLED. CHECK YOUR STARTING VALUES OR INCREASE THE NUMBER OF MITERATIONS. ESTIMATES CANNOT BE TRUSTED. THE LOGLIKELIHOOD DERIVATIVE FOR PARAMETER 24 IS 0.20908945D+00. Copying the suggested starting values into my model command did not help the model to converge. Could you please advise on how I could get the model to converge? Is my syntax correct? Thanks very much.

Analysis:
Estimator = MLR;
INTEGRATION = montecarlo(5000);

Model:
Dep by bcesd03 bcesd06 bcesd09 bcesd10 bcesd14 bcesd17 bcesd18;
Pos by bcesd04 bcesd08 bcesd12 bcesd16;
Som by bcesd01 bcesd02 bcesd05 bcesd07 bcesd11 bcesd13 bcesd20;
Int by bcesd15 bcesd19;
Int@1;
Dep with Pos Som Int;
Pos with Som Int;
Som with Int;
Dep Pos Som Int on sex bage bmwtdr bmtotal bcraven Bcode;
BCESD14 ON BAGE;
BCESD17 ON SEX;
BCESD08 ON BAGE;
BCESD11 ON SEX;
BCESD04 ON SEX; 


Hello, I am estimating MIMIC models that specify dichotomous variables as the exogenous covariates, so that any detected differences are more clearly delineated. In my study, the exogenous covariates are not evenly balanced. I've consulted the MIMIC literature and the Mplus discussion board on this issue, neither of which yields a clear answer. Does a lack of balance on the exogenous covariate matter? For example, 73% of participants fall into one category and 60.8% of participants identify with the other category. Is there a citation that would support this uneven split for dichotomous, exogenous variables in MIMIC models? Thank you! 


I don't think you'll find a citation. An uneven split is OK, but I would expect that the less even the split on your binary covariates, the lower the power to detect differences. 
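The power point above can be seen in the standard error of a simple two-group mean difference: with total n fixed, the SE is smallest at a 50/50 split and grows as the split becomes uneven. This is only a back-of-the-envelope sketch, assuming equal within-group SDs and a hypothetical sample size.

```python
import math

def se_mean_difference(n, p, sd=1.0):
    # Standard error of a two-group mean difference when a fraction p
    # of the n observations falls in group 1 (equal within-group SDs)
    n1 = n * p
    n2 = n * (1.0 - p)
    return sd * math.sqrt(1.0 / n1 + 1.0 / n2)

n = 1000  # hypothetical total sample size
for p in (0.5, 0.7, 0.9):
    print(p, round(se_mean_difference(n, p), 4))
```

A larger SE at the same effect size means lower power, which is the intuition behind the reply.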


Hello, perhaps this is a very general question; please bear with me. I'm trying to detect DIF by two methods, and I'm getting different results with regard to which items show DIF. One method is MIMIC (in Mplus). The other method is IRT-based (the Cochran-Mantel-Haenszel chi-square) in a partial credit model, with software other than Mplus. Any ideas as to why I would get different results regarding which items show DIF? 


Greetings. Another question related to DIF interpretation. I'm comparing two models, one constrained and another unconstrained, on the items that I have identified as candidates for DIF. I'm comparing the difference in intercepts between the two models. Are you aware of any conventional threshold/literature for flagging a difference as "large"/"significant" using this approach? Thank you. 


You should do the MIMIC model using the partial credit model. The second question, about large differences, is suitable for SEMNET. 
