A problem met when fitting the Mixed Rasch Model
 Wenchao Ma posted on Wednesday, December 08, 2010 - 9:08 pm
Hi,
I want to analyze my data using the Mixed Rasch Model (Rost, 1990) with Mplus. Below is my syntax, based on ex. 7.27:
VARIABLE: NAMES = m1-m26;
USEVARIABLES ARE m1-m26;
CATEGORICAL = m1-m26;
CLASSES = c (2);
ANALYSIS: TYPE = MIXTURE;
ALGORITHM = INTEGRATION;
STARTS = 50 10;
MODEL: %OVERALL%
f by m1-m26@0.8(1);
[f@0];
%c#1%
f;
[m1$1-m26$1];
%c#2%
f;
[m1$1-m26$1];
OUTPUT: STAND TECH1 TECH7 TECH8;
Two questions:
1) The syntax "f by m1-m26@0.8 (1);" can only give equal loadings across the two latent classes, but I want to constrain the discriminations (in the IRT sense) to be equal across the two latent classes. How should I do this?
2) Are the person parameters (abilities) and item parameters (difficulties, in IRT terms) on the same scale across the two latent classes? That is, can they be compared to each other directly?
 Linda K. Muthen posted on Friday, December 10, 2010 - 10:04 am
1. A Rasch model has the factor loadings held equal and the factor variance fixed at one:

MODEL: %OVERALL%
f by m1-m26* (1);
f@1;
[f@0];

2. Yes.
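To make this concrete, here is a sketch of how the MODEL command from the first post could look with that change folded in (the VARIABLE and ANALYSIS commands stay as in the original input). This simply assembles the pieces already shown in this thread and is not a verified example; note that the class-specific "f;" statements from the original input are dropped because the factor variance is now fixed at one:

MODEL: %OVERALL%
f BY m1-m26* (1);    ! one loading, estimated but held equal for all items and both classes
f@1;                 ! factor variance fixed at one (Rasch)
[f@0];               ! factor mean fixed at zero
%c#1%
[m1$1-m26$1];        ! class 1 item difficulties, freely estimated
%c#2%
[m1$1-m26$1];        ! class 2 item difficulties, freely estimated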
 Wenchao Ma posted on Wednesday, December 29, 2010 - 4:38 pm
Thank you for your reply.
I have another question:
How can I constrain the mean of all item difficulties in each latent class to be zero in Mplus?
 Bengt O. Muthen posted on Wednesday, December 29, 2010 - 4:57 pm
You have to label the item difficulty parameters in the MODEL command, then use MODEL CONSTRAINT to express their average and set it equal to zero.
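For illustration, here is a minimal sketch of that labeling-plus-MODEL CONSTRAINT idea, written with only five items (m1-m5) to keep the sums short; with 26 items the labels and sums extend in the same way. Freeing the factor means once the mean difficulty is fixed at zero in each class is an identification choice assumed here, not something stated above:

MODEL: %OVERALL%
f BY m1-m5* (1);        ! equal loadings, estimated
f@1;                    ! factor variance fixed at one
%c#1%
[f*];                   ! class 1 factor mean estimated
[m1$1-m5$1] (d1-d5);    ! label the class 1 difficulties
%c#2%
[f*];                   ! class 2 factor mean estimated
[m1$1-m5$1] (e1-e5);    ! label the class 2 difficulties
MODEL CONSTRAINT:
! force the difficulties to sum (and hence average) to zero in each class
d5 = -(d1 + d2 + d3 + d4);
e5 = -(e1 + e2 + e3 + e4);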
 Jane Jin posted on Thursday, May 31, 2012 - 2:48 pm
Hi,

I obtained the DIF contrast (the difference in item difficulty parameters between two groups) in Winsteps, and Winsteps puts the scales of both (latent) groups on the same metric by setting "the mean item difficulty = 0" (a direct quote from Dr. Linacre in another thread).
In your second response in the current thread (see above), you mentioned that the item parameters can be directly compared because item parameters from the two latent classes are on the same scale (if I understand correctly). Could you please tell me what Mplus does to put the item parameters from the different known classes on the same scale?

Here is my input file:

model:
%overall%
f1@1;
f1 by item1 - item10* (1);

%cg#1%
[f1];
[item1$1-item10$1];
%cg#2%
[f1];
[item1$1-item10$1];

I directly compared the item parameters between groups and the results did not seem to be right. Could you please point out where it went wrong? I basically wanted to replicate, in Mplus, the multiple-group Rasch models (used to obtain the DIF contrast) done in Winsteps.

Thank you for your time.

Jane
 Bengt O. Muthen posted on Thursday, May 31, 2012 - 6:26 pm
The factor variance is the same across the latent classes and the loadings are too. This is why the difficulties are in the same metric across the classes.

In your setup you get a non-identified model because you are saying that both the thresholds (difficulties) and the factor means are class specific. You should fix the factor mean to zero in both classes.
 Jane Jin posted on Friday, June 01, 2012 - 3:19 am
Thank you for your quick response.
I estimated the factor means in both classes because the distributions of the latent factor (f) for the two groups were generated so that one group had a higher mean than the other. So how should I estimate the factor means for both classes? Will the estimated mean of the latent class variable (cg) take care of this?

Thank you!
 Jane Jin posted on Friday, June 01, 2012 - 3:28 am
Here is part of my results to illustrate my question above.

After fixing [f1@0] at the overall level, the 1000 generation yielded a lambda of 1.005; the threshold of the studied item was around 0.6 for one group versus 1.9 for the other. The generated DIF size is 0.3 and the two groups' latent means differ by one unit: 1.9 - 0.6 = 1.3 = 0.3 (the population DIF size) + 1.0 (the mean difference between groups).

If thresholds are directly comparable, how can I adjust this mean difference between groups?

Thanks!
 Bengt O. Muthen posted on Friday, June 01, 2012 - 8:40 am
It sounds like you have generated your data with different factor means in the two cg classes. Then you should allow the factor means to be different in your analysis. You can identify this factor mean difference when the thresholds are held equal across the two classes, fixing the factor mean to zero in one class and estimating the factor mean in the other class. So, saying

model:
%overall%
f1@1;
f1 by item1 - item10* (1);

%cg#1%
[f1@0];
[item1$1-item10$1] (t1-t10);
%cg#2%
[f1];
[item1$1-item10$1] (t1-t10);

This assumes that you generated the data with equal thresholds in the two classes. And that the factor variance was generated with the same value in the two classes.
 Jane Jin posted on Friday, June 01, 2012 - 11:52 am
Thank you very much for the response.

Here is a follow up question.
I ran the command you provided above. The factor means were estimated correctly. However, with the thresholds constrained equal across groups, the DIF contrast index between items cannot be estimated. It seems that I can relax the equality constraint for the studied item (because I know from the simulation which item is DIF-present). In reality, we will not know which items are DIF-present. That is why I estimated the thresholds freely for each group in the first place (Winsteps seems to estimate both groups' item parameters without equality constraints). I insisted on using Mplus to run a Rasch model because I would like to see the impact of different parameter estimation methods on certain parameter estimates (the default estimator in Winsteps is joint ML).

So, my question is how to correctly estimate the DIF contrast (the difference between item parameters) after adjusting for the group mean difference (i.e., the factor mean difference)? Is there a more direct way to do it than estimating the mean difference in another step (e.g., with equality constraints on the thresholds between groups)?

Thank you for your time and patience.

Jane
 Bengt O. Muthen posted on Friday, June 01, 2012 - 12:05 pm
A common approach is to allow the thresholds to differ across classes for one item at a time and to look at a chi-square test based on 2 times the loglikelihood difference. Such a model is still identified.
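As a sketch of what that could look like with, say, item1 as the studied item (building on the input above), the thresholds of the studied item are freed in both classes while the remaining items keep class-invariant thresholds:

MODEL: %overall%
f1@1;
f1 BY item1-item10* (1);
%cg#1%
[f1@0];
[item1$1];                    ! studied item: threshold free in class 1
[item2$1-item10$1] (t2-t10);  ! anchor items: thresholds equal across classes
%cg#2%
[f1];
[item1$1];                    ! studied item: threshold free in class 2
[item2$1-item10$1] (t2-t10);

The test is then 2 times the difference in loglikelihoods between this model and the fully constrained model (all ten items' thresholds equal across classes), referred to a chi-square distribution with 1 degree of freedom, since one extra threshold is freed.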
 Jane Jin posted on Saturday, June 02, 2012 - 6:08 am
Thank you, I think now it works.

Jane