Anonymous posted on Sunday, December 19, 2004 - 3:11 pm
I have a question about combining IRT and SEM. Specifically, I have a dataset that is structured in a way that suggests I use IRT models for the dependent variable. So, I am pleased to see that you have such a model in Mplus 3.0 (User's Guide Example 5.5). Is it possible and advisable to use a modified version of the example as the dependent variable in a full SEM? That is, I want to develop Mplus syntax that combines Example 5.5 with an SEM in which 3 latent factors influence the factor developed by the modified Example 5.5. Is this possible?
bmuthen posted on Sunday, December 19, 2004 - 3:21 pm
Yes, this is possible. These are the kinds of model combinations that we hope the Mplus modeling framework stimulates.
Anonymous posted on Tuesday, December 21, 2004 - 5:52 am
I have had the pleasure of working with Rasch models before through winsteps. So, this leads me to two questions. First, how may I use the IRT model in example 5.5 to provide me with Rasch information rather than the two-parameter or the Samejima? Second, is there a guide as to how to interpret the output for the example in 5.5? Please, point in the correct direction with these two issues and I am ready to go with my study. Thank you.
To get a Rasch model, either fix all of the factor loadings to one or hold the factor loadings equal and fix the factor variance to one.
The content of the columns of the output is described in the Mplus User's Guide under the OUTPUT command. Given that Example 5.5 uses a two-parameter logistic model, the relationship between the a and b parameters of IRT and the Mplus parameters is as follows:
a = factor loading
b = threshold / factor loading
Some programs use a constant of approximately 1.7 to put the coefficient on a probit scale.
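The conversion above can be sketched in a few lines of code. This is only an illustration of the formulas in the reply, not Mplus output, and the item values are made up:

```python
import math

def mplus_to_irt(loading, threshold, scale=1.7):
    """Convert a Mplus 2PL factor loading and threshold (logit metric,
    factor mean 0 and variance 1) to IRT discrimination (a) and
    difficulty (b).  Dividing a by ~1.7 approximates the probit metric."""
    a_logit = loading            # a = factor loading
    b = threshold / loading      # b = threshold / factor loading
    a_probit = loading / scale   # approximate normal-metric discrimination
    return a_logit, b, a_probit

# hypothetical item: loading 1.2, threshold 0.6
a, b, a_normal = mplus_to_irt(1.2, 0.6)
```

For this hypothetical item, a = 1.2 and b = 0.5 in the logit metric.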
Anonymous posted on Tuesday, December 21, 2004 - 4:17 pm
I have yet another question about the interpretation of the IRT. The model I am running has ordered categorical responses and a sample size of over 2000. In my output, I got a message saying that the chi-square could not be computed because the frequency table for the latent class indicator model part is too large. What does this mean, and does it suggest doom for my model? Any assistance at all would be greatly appreciated. Thank you.
bmuthen posted on Tuesday, December 21, 2004 - 4:47 pm
Do not despair. In this case, even if we could fit the frequency table in the memory of the computer, you would not want to trust the chi-square value. With the huge number of cells in the table, you are bound to have many, many cells with small expected values, so the chi-square approximation is not at all valid. To judge the fit of the model against the data in such cases, we suggest instead working with the bivariate standardized residuals of Tech10, or working with adjacent, nested models and 2 times the log likelihood difference as a chi-square test of relaxing or restricting a model.
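The nested-model comparison mentioned here can be sketched as follows. The loglikelihood values below are made up for illustration; in practice they would come from the H0 loglikelihoods of two Mplus runs:

```python
def lr_test(loglik_restricted, loglik_general, df):
    """Likelihood-ratio chi-square for nested models:
    2 * (LL_general - LL_restricted), compared against a chi-square
    with df equal to the difference in free parameters."""
    chi2 = 2.0 * (loglik_general - loglik_restricted)
    # 5% critical values for small df (standard chi-square tables)
    critical = {1: 3.84, 2: 5.99, 3: 7.81, 4: 9.49}
    return chi2, chi2 > critical[df]

# hypothetical H0 loglikelihoods: restricted model (e.g. loadings held
# equal) vs. general model with 4 extra free parameters
chi2, reject = lr_test(-5120.3, -5112.8, df=4)
```

Here 2 x (-5112.8 - (-5120.3)) = 15.0, which exceeds the 5% critical value of 9.49 for 4 df, so the restriction would be rejected.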
Anonymous posted on Tuesday, December 21, 2004 - 5:13 pm
Great, I am breathing again. But I have a few more questions for clarity. How would I judge the fit of the model using the bivariate standardized residuals in Tech10? And I am unclear about how the second option works. For the second option, are you suggesting the following: 1) I develop alternative models, 2) I take 2 times the log likelihood difference [a chi-square test of relaxing or restricting the model] for each comparison of my main model with an alternative model, and 3) I compare the results of all of these chi-square tests?
bmuthen posted on Tuesday, December 21, 2004 - 5:20 pm
For option 1, the Tech10 bivariate residuals should be treated as z scores, so compare them to +-1.96 at the 5% level - you don't want very many of those. You are stating option 2 correctly. It is not as straightforward as model testing with continuous outcomes - in IRT modeling with many items or cells, a test against the raw data is seldom done.
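The residual check described in option 1 amounts to a simple screening rule. A sketch, with made-up residual values in place of actual Tech10 output:

```python
def flag_residuals(residuals, z_crit=1.96):
    """Treat Tech10 bivariate standardized residuals as z scores and
    flag those beyond +-1.96.  Also report the share flagged, since
    about 5% would be expected to exceed the cutoff by chance alone."""
    flagged = [r for r in residuals if abs(r) > z_crit]
    return flagged, len(flagged) / len(residuals)

# hypothetical standardized residuals for a handful of item pairs
res = [0.4, -1.2, 2.3, 0.9, -2.6, 1.1, 0.2, -0.7]
flagged, share = flag_residuals(res)
```

With these made-up values, two of eight residuals (2.3 and -2.6) would be flagged, a 25% rate, which is well above the roughly 5% expected by chance.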
Anonymous posted on Tuesday, December 21, 2004 - 5:56 pm
I have just one last question. So, with option 2, is the reduction or increase of the chi-square test of relaxing or restricting the model an indication of better fit?
Thank you so much for all of your help.
BMuthen posted on Tuesday, December 21, 2004 - 6:19 pm
Anonymous posted on Sunday, January 16, 2005 - 6:23 am
Just a quick question. I recently read the Glockner-Rist and Hoijtink article (2003) in Structural Equation Modeling that combines IRT and SEM using Mplus. I was wondering if these models can be or have been extended to rating scale data? If so, are there any citations that demonstrate these applications?
I have a question about the tech10 output. I ran a 2-factor CFA model with 36 categorical indicators in MPlus 3.12 and had a sample size of 1500. I also got the message "the chi-square could not be computed because the frequency table for the latent class indicator model part is too large". When I added "tech10" to my syntax file, I found the tech10 output was empty (without any warning message). In this case, what should I do to judge model fit?
Da C posted on Monday, November 23, 2009 - 5:12 pm
Is it possible to create a single plot that contains information curves for several individual items in Mplus? Or, is there any other (faster) way to extract the information plot data besides creating each individual item plot and saving the data one plot/one item at a time?
There is currently not a way to do this. It is on our to-do list.
Lan Huang posted on Wednesday, March 17, 2010 - 7:47 am
Dear Dr. Muthen,
I have 74 binary items and a sample size of 181. I tried to fit a unidimensional model to test the unidimensionality of the data. I got a message saying that the chi-square could not be computed because the frequency table for the latent class indicator model part is too large. Is there any way I can get absolute model fit statistics, like chi-square or RMSEA? I read the old posts on this page and used the Tech10 command, but I don't know what to do with the output, even after reading your answer: "In option 1, the Tech10 biv residuals should be treated as z scores, so comparing to +- 1.96 at the 5% level - you don't want very many of those." Sorry...
Also, I noticed that in the output of TECH10, at the end of "BIVARIATE MODEL FIT INFORMATION", there is an "Overall Bivariate Pearson Chi-Square 2110.833". Is this the chi-square I'm looking for?
Thanks very much for your time. I really appreciate it!
It sounds like you are using maximum likelihood estimation. The chi-square you are looking at is for the multiway frequency table of your binary items. This is often of little value with more than 8 items. With maximum likelihood and categorical items, chi-square and related fit statistics are not available because means, variances, and covariances are not sufficient statistics for model estimation.
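A quick back-of-the-envelope calculation shows why the multiway table is of so little value here: with n binary items the table has 2**n cells, so expected counts per cell are essentially zero for any realistic sample. A sketch using the sample sizes mentioned in this thread:

```python
# With n binary items, the full frequency table has 2**n cells.
# Expected count per cell (sample / cells) vanishes quickly as n grows.
for n_items, sample in [(8, 181), (36, 1500), (74, 181)]:
    cells = 2 ** n_items
    print(n_items, cells, sample / cells)
```

For the 74-item, n = 181 case above, the table has roughly 1.9 x 10^22 cells, so almost every cell must have an expected value near zero.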
Lan Huang posted on Wednesday, March 17, 2010 - 10:21 am
Yes, I'm using MLE. Under these circumstances, can you give me any advice on how to justify that my data are unidimensional? All I have is the H0 loglikelihood value, AIC, and BIC. Again, thank you so much!
I have several latent variables, all comprised of multiple items. I want to model how these latent variables relate to a single second-order latent factor.
Here is my question: Is there a way to or an advantage to integrating a 3-parameter IRT model with a CFA model with two levels, or could the same conclusion be drawn based on a standard two-level CFA using Mplus? Do you have a reference that discusses this topic?
Hi, I had decided upon a 2PL IRT model, and I now wish to include it in a larger SEM, e.g., f by y1-y4; f on X;
Once I include the covariates, I only get the IFA output, not IRT. Is it valid to transform the IFA output into IRT output using the formulas described in the IRT web note? And if so, does this affect the estimates for X? Thanks!
In addition to my question above, should adding a covariate X that predicts f change the values of the difficulties and discriminations in the IRT part of the model (when compared to the parameters in the 2PL model without the f on X statement)? Thanks!
Thank you for the help. Regarding the IFA-to-IRT transformations, I prefer to work with discrimination and difficulty parameters rather than thresholds and loadings - so if I do reparameterise the values, is the approach using discrimination = loading/1.7 (for the logit link, where I specify theta variance = 1, mean = 0) still valid once I have a covariate in the model?
Also, setting aside the transformation of IFA parameters to IRT parameters for a moment: if I compare the IFA parameters from a 2PL model (measurement model alone) with those from a 2PL model in which a covariate predicts the latent factor (measurement model within an SEM), does adding the covariate change my measurement model? I notice the values of the thresholds and loadings change between the 2PL and the 2PL within the SEM, even though the SEM only adds the covariate predicting f and nothing else. If I have developed a 2PL model that fits the data in terms of no residual covariances, and have assessed its reliability etc., wouldn't I want to keep this model exactly the same when it is part of the SEM? Otherwise, wouldn't the meaning of the latent variable be different in the SEM?
When you have a covariate in the model, it is the factor intercept that is zero, not the factor mean, and the factor residual variance that is one, not the factor variance. See the IRT FAQ. Toward the bottom of the page there is a link to the formulas you would need.
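One way to see what this implies for the conversion is to standardize theta using the model-implied factor mean and variance. The sketch below is only an illustration of that idea, not the FAQ's formulas verbatim (check it against the formulas linked from the Mplus IRT FAQ), and all numeric values are made up:

```python
import math

def irt_with_covariate(loading, threshold, gamma, x_mean, x_var, resid_var):
    """Hedged sketch: convert 2PL loading/threshold (logit metric) to
    a/b when the factor is regressed on a covariate x (f on x).
    Uses the model-implied factor moments:
        mean(f) = gamma * mean(x)
        var(f)  = gamma**2 * var(x) + residual variance
    and writes f = mean(f) + sd(f) * theta with theta ~ N(0, 1)."""
    f_mean = gamma * x_mean
    f_sd = math.sqrt(gamma ** 2 * x_var + resid_var)
    a = loading * f_sd                               # discrimination
    b = (threshold - loading * f_mean) / (loading * f_sd)  # difficulty
    return a, b

# hypothetical values: loading 1.2, threshold 0.6, regression slope 0.5,
# x with mean 0 and variance 1, factor residual variance 1
a, b = irt_with_covariate(1.2, 0.6, 0.5, 0.0, 1.0, 1.0)
```

Note that when gamma = 0 and the residual variance is one, this reduces to the no-covariate case above: a = loading and b = threshold/loading.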
Yes, values can change. Your model says there are no direct effects between the factor indicators and the covariates. This is a strong assumption.
Regarding the last comment, I ran a model where I allowed a direct relationship between the covariate and each item (i.e., f by y1-y4; f on X; y1-y4 on X;), but all of the relationships between the y's and X were highly non-significant. I think this would be sufficient evidence for setting the direct paths to zero?
Hi, regarding my posts above, just to clarify: when I have covariates in an IRT model and wish to calculate a's and b's rather than thresholds and loadings, if I use the formulas from the Technical Appendices, are the factor mean and variance mentioned in the formulas simply those output under 'Sample Statistics for Estimated Factor Scores'? Cheers, O