Anonymous posted on Wednesday, April 06, 2005 - 5:17 am
I am running a MIMIC/DIF model with 10 ordinal items loading on a single factor and one covariate.
Model (1) is: F by ITEMS1-10; F on COV. Model (2) adds: ITEMS2-10 on COV; ITEM1 on COV@1;
In (1) the estimate for F on COV is 0.4 (say) and this makes sense as people high on the (dichotomous) COV are higher on the factor.
In (2) the estimates for ITEMS2-10 on COV are all positive and sensible, but that for F on COV is now -0.8 (say).
The switch is unrelated to which item is fixed to set the scale and occurs for a number of covariates.
Assuming the model is specified correctly, the switch in sign seems odd and not obviously (or comfortably) interpretable (to me, at least). I've not seen any mention of it in the MIMIC/DIF literature. Any thoughts?
BMuthen posted on Friday, April 08, 2005 - 2:47 am
Model 2 is unusual because all of the items have a direct effect from the covariate. Also, the direct effect for item 1 is artificially fixed to one. In Model 2 I would instead state,
items1-10 ON cov @0;
and look at modification indices to see which direct effects should be included in the model.
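A quick way to screen the resulting modification indices: each MI approximates the chi-square (1 df) improvement from freeing that direct effect, so values above roughly 3.84 are significant at the 5% level. A minimal sketch in Python (the MI values and path labels are made up for illustration):

```python
from statistics import NormalDist

# Hypothetical modification indices for item-on-covariate direct effects
# (these are illustrative numbers, not from any actual Mplus output)
mod_indices = {"ITEM3 ON COV": 12.4, "ITEM7 ON COV": 5.1, "ITEM9 ON COV": 1.8}

# An MI is asymptotically chi-square with 1 df; its 5% critical value
# equals the squared two-sided 5% normal quantile: 1.96**2, about 3.84
critical = NormalDist().inv_cdf(0.975) ** 2

# Direct effects whose MI exceeds the critical value are candidates to free
flagged = [path for path, mi in mod_indices.items() if mi > critical]
```

In practice one would free the largest flagged effect first, re-estimate, and repeat, since MIs are computed one parameter at a time.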
Anonymous posted on Saturday, April 09, 2005 - 7:18 pm
I'm probably having some difficulty working out exactly which model various authors fitted. The model above was based on my reading of Christensen et al 1999 (Psychological Medicine 29(2), 325-339), where the MIMIC model described in the footnote to Fig. 1 seems to say that a simultaneous model is fitted with all paths from all COVs to all items free, except for the set of paths to a single item, which is fixed to 1 to set a scale (the direct effect you refer to above). One could (and probably should) simplify that via modification indices, but in principle you could still have all the items there.
I had initially also read your paper, Gallo et al (1994), as doing the same thing, but on re-reading I think that's wrong and take it that Table 6 is the result of a whole series of MIMIC models, as follows, where SINGLE_ITEM varies: F by ALL_ITEMS; F on MULTIPLE_COVARIATES; SINGLE_ITEM on SINGLE_COVARIATE;
Am I misreading, and how do we sort out what seem to be different approaches?
BMuthen posted on Sunday, April 10, 2005 - 2:40 am
The model with covariates influencing both factors and all of the items directly is not identified. Artificially fixing one of the direct effects to a non-zero value does not make sense to me. A better approach is to start with a model with no direct effects and free the direct effects that are needed. I think that is what was done in the Gallo et al paper.
Anonymous posted on Sunday, April 10, 2005 - 4:49 pm
Thank you for your comments. Fixing particular direct effects for identification certainly makes for an awkward interpretation (Grayson et al 2000 Journals of Gerontology 55B(5) P273-P282, who appear to use a similar model to Christensen et al, go through this extensively). Can I take it that your suggested approach comes down to having indirect effects to all items but limiting direct effects to a single item at a time? (And even if you could have more than one item; and avoid funny fixes for identification issues; that the resulting model would not be sensible?) Sorry to drag this out: I really do appreciate your help.
BMuthen posted on Monday, April 11, 2005 - 5:49 am
My approach advocates adding one direct effect at a time. It is expected that the number of direct effects is small in comparison to the total number of items. A model with many direct effects would have so little measurement invariance as to be of little interest.
Christina Ow posted on Saturday, November 25, 2006 - 10:09 pm
When running a MIMIC model to test for measurement invariance, it isn't clear to me why you should constrain a direct path from the covariate to the factor to 0 (i.e., use the statement "f1 on x@0"). My notes from the short courses say that this will "open up the matrix for modification indices", but I don't understand that (bad note-taking that day). Could you explain? Does this just tell the program that the observed variable (covariate) is included in the model, so that when you ask for MIs you can see where direct effects might be included?
Thanks so much for your help. I tried what you suggested, but I still have questions.
I want to 1) test for factorial invariance and DIF for males vs females, 2) be able to explain why a statement like Y1-Y6 ON X1-X2@0 provides the modification indices I am looking for.
I thought that by fixing the path from the covariate to the factors at zero, it is like adding the covariate to the observed correlation matrix, which then incorporates the variable into the estimated model, giving you modification indices for all direct paths from the covariate.
It appears that to get all the MIs, I need two statements: f1 on group@0 AND cq2...ON group@0.
Can you explain why I need both statements? Is it appropriate to run both at the same time?
The statement y1-y6 ON x1-x2@0; simply opens a matrix so that you obtain modification indices for those regression coefficients. Otherwise, the matrix would not be open and you would not obtain modification indices. It is a technical, behind-the-scenes issue that has nothing to do with model estimation. We open as few matrices as possible for space and speed considerations.
When you say f2 ON group@0; you are fixing the regression coefficient for f2 regressed on group to zero. If you are testing for measurement invariance, you want modification indices for direct effects. A significant direct effect shows differential item functioning (DIF).
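To see why a significant direct effect signals DIF, note that with categorical items the measurement model is a probit regression, so a nonzero direct effect shifts an item's endorsement probability between groups even at the same factor value. A small numerical sketch (the loading, threshold, and direct-effect values are hypothetical):

```python
from statistics import NormalDist

Phi = NormalDist().cdf  # standard normal CDF (probit link)

# Hypothetical parameters: factor loading, item threshold, and a direct
# effect of a binary grouping covariate on this one item
loading, threshold, direct = 0.8, 0.5, 0.4

def p_endorse(factor, group):
    # P(y = 1 | f, group) under a probit measurement model with a direct effect
    return Phi(loading * factor - threshold + direct * group)

# Same factor level, different groups: the gap in probability is pure DIF
p0 = p_endorse(0.0, 0)  # reference group (coded 0)
p1 = p_endorse(0.0, 1)  # focal group (coded 1)
```

With the direct effect fixed at zero the two probabilities would coincide, which is exactly the invariance being tested.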
SSK posted on Tuesday, December 08, 2015 - 10:49 am
Hello, I have a Complex MIMIC Model with four categorical outcome variables with 5 categories regressed on three latent variables and a whole host of binary covariates.
[Example of model: Categorical are Y1 Y2 Y3; Cluster is Village; .... U1 by a1-a10; U2 by b1-b10; U3 by c1-c10; U4 by d1-d10; U1-U3 on U4; Y1 Y2 Y3 on D x1-15; x1-5 on U1-U3]
I have a strong model with good fit statistics. But, I am having some issues interpreting my results. I have gone through your handouts and videos and understand how to interpret it as you stated in the Topic 1 Video but my reviewers are seeking probabilities/marginal effects.
Do I use the same formula in Chapter 14 for probit and logit probabilities? How do I handle the latent variable in this calculation?
If there is any paper you recommend where coefficients have been expressed as probabilities in MIMIC or SEM, I would be very grateful.
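One route to a marginal probability when a latent variable is among the predictors is to integrate the probit item probability over the factor distribution. A sketch under assumed values (the loading, covariate effect, and threshold are hypothetical; for the probit link the integral also has the closed form Phi((beta*x - tau)/sqrt(1 + loading**2)), which the numerical version should reproduce):

```python
import math
from statistics import NormalDist

Phi = NormalDist().cdf
phi = NormalDist().pdf

# Hypothetical values: loading of the item on the factor, covariate effect,
# threshold; the factor is assumed standard normal, f ~ N(0, 1)
loading, beta, tau = 0.7, 0.3, 0.2

def marginal_prob(x, n=2001, lim=6.0):
    # P(u = 1 | x) = integral of Phi(loading*f + beta*x - tau) * phi(f) df,
    # approximated on a grid over [-lim, lim] with the trapezoid rule
    step = 2 * lim / (n - 1)
    total = 0.0
    for i in range(n):
        f = -lim + i * step
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * Phi(loading * f + beta * x - tau) * phi(f)
    return total * step
```

Evaluating at two covariate values then gives the marginal effect on the probability scale, e.g. marginal_prob(1) - marginal_prob(0).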
If you are trying to get probabilities for a categorical latent outcome informed by another latent variable that is also categorical, is it correct to estimate the probabilities using the scale of that latent variable?
Y by a1-a5* U by a6-a10*
[*Categorical, scale 1-5]
Y on U Z1 Z2 Z3
[Z binary covariates]
where the coefficients for U and Z1 are unstandardised.
To get probabilities for a moderator within a MIMIC model, what is the equation when dealing with latent variables? Say my model is:
U1 by a1-a5; U2 by b1-b5; U3 by c1-c5; U4 by d1-d5;
U1 on U2 U3 U4; Y1 on U1;
where Y1-4* are categorical observed outcome variables with 5 levels.
I want to interpret the effects of the latent variables on each other (U1 on U2 U3 U4) as probabilities, but if this is not possible, do I just interpret them as I would a normal regression (e.g., a one-unit increase in U2 leads to a .5 increase in U1)?
Bit confused!!! Greatly appreciate any help you could provide!
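On the interpretation question above: an unstandardized latent-on-latent coefficient can be put on the SD metric by rescaling with the latent standard deviations, which is what STDYX standardization does. A sketch with hypothetical estimates (none of these numbers come from the posted model):

```python
import math

# Hypothetical unstandardized estimate and latent (co)variances from output
b = 0.5               # U1 ON U2, unstandardized
var_u2 = 1.21         # estimated variance of the predictor factor U2
resid_var_u1 = 0.64   # residual variance of the outcome factor U1

# Total variance of U1 implied by this one-predictor regression
var_u1 = b**2 * var_u2 + resid_var_u1

# STDYX-style standardization: multiply by SD(predictor), divide by SD(outcome)
b_std = b * math.sqrt(var_u2) / math.sqrt(var_u1)
```

The standardized value is then read as "a one-SD increase in U2 is associated with a b_std SD increase in U1."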
I ran a MIMIC model using WLSMV to estimate DIF effects between certain background variables and depression items (coded from 0 to 3). If I understand correctly, under WLSMV estimation the DIF coefficients are obtained via probit regression and should be interpreted as the change in z-score for a one-unit increase in the predictor. I would like to know whether there are any guidelines for interpreting the magnitude of standardized probit coefficients. For instance, can I say that standardized probit coefficients below 0.30 indicate a small effect?
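There is no universally agreed cutoff for standardized probit coefficients; one way to judge magnitude is to translate the z-score shift into a probability difference. A sketch with hypothetical threshold and DIF values:

```python
from statistics import NormalDist

Phi = NormalDist().cdf

# A probit DIF coefficient shifts the item's z-score; translating that
# shift into a probability difference gives a feel for its magnitude.
# Hypothetical values: item threshold and a standardized DIF effect of 0.30
tau, dif = 0.0, 0.30

p_ref = Phi(-tau)          # reference group, evaluated at the threshold
p_focal = Phi(dif - tau)   # focal group, one unit higher on the predictor
gap = p_focal - p_ref      # probability gap; largest near tau = 0
```

Because the probit curve flattens in the tails, the same 0.30 coefficient implies a smaller probability gap for very easy or very hard items, which is worth noting when labeling an effect "small."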
I also tried to run the MIMIC model using MLR estimation in order to obtain odds ratios for the DIF effects. My factor model consisted of 4 factors (N = 3107). I used INTEGRATION = MONTECARLO(5000), but the model did not converge. The error message was: THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO A NON-ZERO DERIVATIVE OF THE OBSERVED-DATA LOGLIKELIHOOD. THE MCONVERGENCE CRITERION OF THE EM ALGORITHM IS NOT FULFILLED. CHECK YOUR STARTING VALUES OR INCREASE THE NUMBER OF MITERATIONS. ESTIMATES CANNOT BE TRUSTED. THE LOGLIKELIHOOD DERIVATIVE FOR PARAMETER 24 IS -0.20908945D+00. Copying the suggested starting values into my model command did not help the model converge. Could you please advise on how I could get the model to converge? Is my syntax correct? Thanks very much.
Analysis:
Estimator = MLR;
INTEGRATION = MONTECARLO(5000);
Model:
Dep by bcesd03 bcesd06 bcesd09 bcesd10 bcesd14 bcesd17 bcesd18;
Pos by bcesd04 bcesd08 bcesd12 bcesd16;
Som by bcesd01 bcesd02 bcesd05 bcesd07 bcesd11 bcesd13 bcesd20;
Int by bcesd15 bcesd19;
Int@1;
Dep with Pos Som Int;
Pos with Som Int;
Som with Int;
Dep Pos Som Int on sex bage bmwtdr bmtotal bcraven Bcode;
BCESD14 ON BAGE;
BCESD17 ON SEX;
BCESD08 ON BAGE;
BCESD11 ON SEX;
BCESD04 ON SEX;
I am estimating MIMIC models that specify dichotomous variables as the exogenous covariates, so that any detected differences are more clearly delineated. In my study, the exogenous covariates are not evenly balanced. I've consulted the MIMIC literature and the Mplus discussion board on this issue, neither of which yields a clear answer. Does a lack of balance on an exogenous covariate matter? For example, on one covariate 73% of participants fall into a single category, and on another 60.8% of participants identify with one category. Is there a citation that would support such uneven splits for dichotomous exogenous variables in MIMIC models?
Perhaps this is a very general question; please bear with me. I'm trying to detect DIF with two methods and am getting different results regarding which items show DIF. One method is MIMIC (in Mplus). The other method is IRT-based (the Cochran-Mantel-Haenszel chi-square) in a partial credit model, using software other than Mplus. Any ideas as to why I would get different results regarding which items show DIF?