

How do I obtain a Rasch model with an item discrimination parameter of 1.00 in the logit metric? I have used this syntax: MODEL: f1 BY v1-v12* (1); f1@1; and got an item discrimination parameter of 0.688 in the probit metric (corresponding to a factor loading of 0.567).


Use ESTIMATOR=ML or MLR and the following model: MODEL: f1 BY v1-v12@1;
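Putting that advice together, a minimal input sketch might look as follows (the data file name is a placeholder; the v1-v12 variable names are taken from the question):

```
TITLE:    Rasch model with item discrimination 1.00 in the logit metric;
DATA:     FILE = data.dat;       ! placeholder file name
VARIABLE: NAMES = v1-v12;
          CATEGORICAL = v1-v12;
ANALYSIS: ESTIMATOR = MLR;       ! ML/MLR gives the logit metric, not probit
MODEL:    f1 BY v1-v12@1;        ! all loadings fixed at 1 (discrimination 1.00)
          f1*;                   ! factor variance left free
```

With the loadings fixed at 1 and the estimator on the logit scale, each item's discrimination is 1.00 as requested; the factor variance is then the freely estimated parameter.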


Hi, Recently I read an article about an interesting approach for the measurement of change in pretest-posttest designs. It is called the linear logistic model of change (LLMC), and according to the article it is an extension of the Rasch model to measure both individual and group changes on a logit scale (Dimitrov, D. M., McGee, S. M., & Howard, B. C. (2002). Changes in students' science ability produced by multimedia learning environments: Application of the linear logistic model for change. School Science and Mathematics, 102(1), pp. 15-23). Is there an application within Mplus by which we can approach this model? Would you recommend any reading that might help me to understand the LLMC and its application? Thank you, Anna-Mari


Looking at the article, it seems that it is a Rasch model applied to two time points and two groups (tx/ctrl). This can be done in Mplus. A Rasch model is specified for a set of binary outcomes by holding the factor loadings equal across the factor indicators and fixing the factor variance at 1 (the factor mean is zero). Analysis of two groups and two time points can be done in line with Topic 2 of our courses (longitudinal factor analysis), using a multiple-group approach (tx and ctrl), holding the thresholds and loadings equal across time and group and letting the factor mean and variance be free at the second time point. Ability change due to treatment can then be evaluated. I am not aware of papers/readings that show the details, but perhaps you can contact the authors; I think the first author uses Mplus. Note that it is not necessary to use a Rasch model; a 2-parameter logistic or probit model is also possible.
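As a rough sketch (not from the original posts), the setup described above might be written as follows. The item names u1-u12/w1-w12, the grouping variable, and the data file are placeholders, and the across-group equalities of thresholds and loadings rely on the Mplus multiple-group defaults:

```
TITLE:    Rasch model, two time points, two groups (sketch);
DATA:     FILE = data.dat;              ! placeholder file name
VARIABLE: NAMES = u1-u12 w1-w12 group;  ! u = time 1, w = time 2 items
          CATEGORICAL = u1-u12 w1-w12;
          GROUPING = group (0 = ctrl 1 = tx);
ANALYSIS: ESTIMATOR = MLR;
MODEL:    f1 BY u1-u12* (1);            ! equal loadings (Rasch)
          f2 BY w1-w12* (1);            ! same loading across time
          [u1$1-u12$1] (t1-t12);        ! thresholds equal across time
          [w1$1-w12$1] (t1-t12);
          f1@1; [f1@0];                 ! time-1 metric fixed for identification
          f2*; [f2*];                   ! time-2 variance and mean free
          f1 WITH f2;                   ! factor covariance across time
```

The treatment effect on ability change would then be read from the group difference in the time-2 factor mean.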


Thank you for your response. I greatly appreciate it! 


Drs. Muthén, I would like to analyze two-time-point, two-group data the way you are suggesting in the post above (Bengt O. Muthén posted on Sunday, March 14, 2010 - 6:08 pm). The problem I have is that from pretest to posttest we changed the scale from a 0-to-2 scale to a 0-to-4 scale. Would you standardize the items prior to the analysis? Do you have any suggestions on how to deal with the changed measurement scheme? Thank you, Anna-Mari


I would not standardize. I don't think there is anything you can do to correct this problem. See the following paper for the reasons: The Metric Matters: The Sensitivity of Conclusions About Growth in Student Achievement to Choice of Metric. Educational Evaluation and Policy Analysis, Spring 1994, Vol. 16, No. 1, pp. 41-49.


Regarding the IRT discrimination parameter: What does the numeric value of the discrimination parameter tell us in terms of the basic logistic model? Although I intuitively understand that it is analogous to a factor loading (slope) and is a measure of how well the item discriminates those low vs. high on the trait, I have seen differing remarks on its range (some saying 0 to 1, others 0 to + infinity). For example, let's say alpha (discrimination) = 3.05. Does this then mean that for a 1-unit increase in the latent trait theta, the probability of the individual being coded as present for a binary behavior (or endorsing a binary item) increases by approximately 3x (i.e., 3x more likely)?


The discrimination coefficient is in the metric of the loading, that is, in the metric of a slope in a regular logistic regression, which is the same as the IRT model except that the IV is latent. Therefore, the range is minus infinity to plus infinity. Your one-unit explanation is correct. Following the lead of logistic regression, you can also exponentiate the discrimination and interpret it in odds-ratio terms. See logistic regression books and IRT books such as Reckase's. The 0-1 range may come from the classical test theory (not IRT) concept of item discrimination, which was the correlation between an item and the total test score.
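In symbols (a standard 2PL statement, not from the original post), the item response probability is

```latex
P(u = 1 \mid \theta) = \frac{1}{1 + e^{-a(\theta - b)}}
```

so the log-odds of endorsing the item are $a(\theta - b)$; a one-unit increase in $\theta$ raises the log-odds by $a$ and therefore multiplies the odds by $e^{a}$. With $a = 3.05$, the odds multiply by $e^{3.05} \approx 21.1$, not by 3.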


Thank you, Bengt. To correct my own example above, exponentiating a discrimination coefficient (logit) of 3.05 would give ~21.12; thus a 1-unit increase in theta would be associated with ~21 times greater odds of endorsing the '1' option of the binary item.


A quick follow-up to the exponentiation approach to odds. Since there is no analog to odds coefficients in probit, is it necessary to transform back into a logit metric before exponentiating to get the odds? I ask because I have run models using both limited- and full-information estimators.


Post 1: Sounds right. Post 2: Transforming to logit from probit is only approximate; I wouldn't do the odds interpretation with probit (so not with WLSMV). Note also that the odds interpretation doesn't seem prevalent in the IRT literature.
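For reference (a standard conversion, not from the original post), the approximate transformation uses the scaling constant $D \approx 1.7$:

```latex
a_{\text{logit}} \approx 1.7 \, a_{\text{probit}}
```

because the logistic curve with slope 1.7 closely tracks the standard normal CDF. The approximation error in this matching is part of why the odds interpretation is not recommended for probit estimates.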


Hello: I have been replicating some IRT 2PL models from Mplus in R using the ltm package. In general, the results mapped onto one another quite nicely. I noticed for two models that the discrimination and location parameters were inverted when using the ltm package (that is, - to + moving from Mplus to ltm). Digging a bit deeper, I determined this was because in the ltm package, in the case of the one-factor model, the optimization algorithm works under the constraint that the discrimination parameter of the first item is always positive (this can be manually changed). Naturally, my question/observation (seeing as Mplus must not have this constraint in the optimization routine) is: wouldn't a routine that allows the direction of the item discrimination to be freely estimated better approximate the sample estimate/direction of that parameter? Or does it not really matter (for interpretation of difficulty estimates, say) since the change in direction is really just a rotation of the latent trait?


If all loadings change sign it doesn't matter because then the factor is simply reversed. 
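This invariance can be seen directly in the 2PL log-odds (a standard identity, not in the original post): flipping the signs of the discrimination, the location, and the trait leaves the response probability unchanged,

```latex
(-a)\bigl((-\theta) - (-b)\bigr) = a(\theta - b)
```

which is why the ltm sign constraint and a freely signed Mplus solution describe the same model with the factor direction reversed.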


Thanks, Bengt. One other thing that puzzles me is that in one of the models I get an implausibly high discrimination parameter value in Mplus (222.69), whereas this discrimination parameter is 24.99 in the ltm package. What's interesting is that the ltm standard error is large (500.43) whereas the Mplus standard error is rather small (1.69). I am wondering what might trigger this. I have set all Mplus settings to be similar to the ltm routine, including the following to match it: INTEGRATION=GAUSSHERMITE(35); ITERATIONS=150; MITERATIONS=100; The LL, AIC, and BIC are all relatively close between the two runs; it's just that one parameter. The binary data are extremely sparse, but so far the models have held up in converging across different random start values for the most part. In the model concerned, there was a parameter that needed to be fixed to avoid singularity of the information matrix, so I am wondering if that may have something to do with it.


You should make sure that the LLs agree exactly. You can sharpen the convergence criterion in Mplus. And, I would delete ITERATIONS=150; MITERATIONS=100; and use the Mplus defaults instead to make sure it has converged properly. 
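One way to implement this advice (a sketch; the exact option names and defaults should be verified against the Mplus user's guide) is to keep the integration setting that matches ltm, drop the iteration limits, and tighten the convergence criterion:

```
ANALYSIS: ESTIMATOR = ML;
          INTEGRATION = GAUSSHERMITE(35);  ! as in the ltm comparison
          CONVERGENCE = 0.0000001;         ! sharper than the default
          ! ITERATIONS and MITERATIONS removed so the Mplus defaults apply
```

If the log-likelihoods then agree to several decimal places and the parameter still diverges, the discrepancy is more likely to lie in the model (e.g., the near-singular information matrix mentioned above) than in the optimization settings.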
