How to do 3-PL or 4-PL IRT model
Mplus Discussion > Categorical Data Modeling >
 Caoshang posted on Saturday, May 07, 2011 - 11:23 pm
Dear professor:
I want to fit a 3-parameter IRT model:
a = item discrimination
b = item difficulty
c = item "guessing"

4-parameter model:
add d = item "carelessness"
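For a concrete picture of these models, the 3-PL and 4-PL item response functions can be sketched numerically (a hypothetical Python illustration, not Mplus syntax; `irf_4pl` and its parameter names are made up here, with a = discrimination and b = difficulty per the usual IRT convention):

```python
import math

def irf_4pl(theta, a, b, c=0.0, d=1.0):
    """4-PL item response function.
    a = discrimination, b = difficulty,
    c = lower asymptote ("guessing"),
    d = upper asymptote (1 minus "carelessness").
    With c = 0 and d = 1 this reduces to the 2-PL model;
    with d = 1 only, it is the 3-PL model."""
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the curve sits exactly halfway between the asymptotes.
p = irf_4pl(theta=0.0, a=1.5, b=0.0, c=0.2, d=0.95)
print(round(p, 4))  # (0.2 + 0.95) / 2 = 0.575
```

Note how c raises the left tail (a low-ability examinee can still guess correctly) and d lowers the right tail (a high-ability examinee can still slip).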

Can you give me an example of, or advice on, how to write the code to set this up in Mplus? I can't find any useful information, especially worked examples, on the Internet.

Thank you for your patience.
 Linda K. Muthen posted on Sunday, May 08, 2011 - 6:22 am
Mplus handles only a and b. For this, see Example 5.5.
 Matthew E Foster posted on Thursday, April 09, 2015 - 12:25 pm
Since this post in 2011, has Mplus added features so that it can estimate a 3 parameter logistic model?

Thank you for your time.
Matt
 Bengt O. Muthen posted on Thursday, April 09, 2015 - 12:58 pm
It's in development.
 Ali posted on Wednesday, September 21, 2016 - 9:16 am
Hello
I am trying to fit a 3PL IRT model and a mixture 3PL model to my data. I read the Mplus IRT paper but was quite confused, so I have a few questions.
(1) For the parameterization: is the coefficient beta the same as lambda, and is tau_1 the tau in equation (20) (p. 5) of the Mplus IRT paper? If so, then a (item discrimination) = lambda, b (item difficulty) = tau_1/lambda, and c (guessing parameter) = tau_2.
(2) I am not sure my code for the mixture 3PL is right. I have 32 dichotomous items. In the code I fixed the factor mean at 0 and the variance at 1 in both latent classes, but the loadings and thresholds (i.e., tau_1 and tau_2) are freely estimated. The code is as follows:
USEVARIABLES ARE U1-U32;
CATEGORICAL = U1-U32(3pl);
CLASSES = c(2);
ANALYSIS: TYPE = MIXTURE;
ALGORITHM=INTEGRATION;
STARTS = 100 20;
PROCESSORS = 2;
MODEL: %OVERALL%
f BY U1-U32*;
f@0;
f@1;
%c#1%
[f@0];
f@1;
[u1$1-u32$1];
[u1$2-u32$2];
%c#2%
[f@0];
f@1;
[u1$1-u32$1];
[u1$2-u32$2];
 Bengt O. Muthen posted on Wednesday, September 21, 2016 - 11:20 am
(1) Yes, beta in (39) is the same as lambda in (20). The IRT translation to discrimination a is shown in (21) and to difficulty b in (22). The guessing (c) translation is shown in (40).
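As a rough numeric companion to those translations (a sketch assuming the logit parameterization with a standardized factor, mean 0 and variance 1; `mplus_to_irt` is a made-up helper, and the guessing translation (40) is omitted here — see the Mplus IRT note for the exact formulas):

```python
# Hypothetical illustration: translating an Mplus loading (lambda) and
# threshold (tau) into IRT discrimination (a) and difficulty (b),
# assuming a logit link and a factor standardized to mean 0, variance 1.
def mplus_to_irt(lam, tau):
    a = lam          # discrimination equals the loading on this scale
    b = tau / lam    # difficulty is the threshold divided by the loading
    return a, b

a, b = mplus_to_irt(lam=1.2, tau=0.6)
print(a, b)  # 1.2 0.5
```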

Input looks ok except you forgot to put a bracket around the first instance of f@0.

Also, you will likely need priors for the guessing parameters as shown in our IRT document.
 Ali posted on Thursday, September 22, 2016 - 10:58 am
Thanks!
As for setting a prior for the guessing parameters: should I use the same prior you set for tau_2, namely N(1,1)?
Suppose I have 32 items and 2 latent classes, and I set each tau_2's prior to N(1,1) for each item in each latent class. Does that mean I have to set 64 N(1,1) priors for tau_2?
 Bengt O. Muthen posted on Thursday, September 22, 2016 - 6:09 pm
Right. Use the list function. And try one class first.
 Ali posted on Saturday, September 24, 2016 - 4:13 am
I tried to set a prior for the guessing parameter in the second class, but it gave me the warning message "WARNING: THE BEST LOGLIKELIHOOD VALUE WAS NOT REPLICATED. THE SOLUTION MAY NOT BE TRUSTWORTHY DUE TO LOCAL MAXIMA. INCREASE THE NUMBER OF RANDOM STARTS." Could this suggest that the mixture 3PL doesn't fit my data, and that I should fit a mixture 2PL or mixture 1PL instead?

Here is my code:
MODEL: %OVERALL%
f BY U1-U32*;
[f@0];
f@1;
%c#1%
[f@0];
f@1;
[u1$1-u32$1];
[u1$2-u32$2];
%c#2%
[f@0];
f@1;
[u1$1-u32$1];
[u1$2-u32$2](p1-p32);

MODEL PRIORS:
p1-p32~N(1,1);
OUTPUT: TECH1 TECH8;
 Linda K. Muthen posted on Saturday, September 24, 2016 - 2:19 pm
You need to increase the number of random starts. You did not reach a global solution. Try STARTS = 200 50; or more. The second number should be approximately 1/4 of the first.
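The role of random starts can be illustrated in miniature (a generic Python sketch of multi-start optimization, not Mplus internals; `loglik` and `local_search` are invented toy functions): the optimizer is launched from many random starting values, and the best value found is trusted only if at least two starts reach it.

```python
import random

def loglik(x):
    # Toy "loglikelihood" with a global maximum at x = 2 (value 0)
    # and a local maximum at x = -2 (value -1).
    return -min((x - 2.0) ** 2, (x + 2.0) ** 2 + 1.0)

def local_search(x, step=1.0, tol=1e-6):
    # Crude hill climber: step left or right while it improves,
    # halving the step size when stuck.
    while step > tol:
        if loglik(x + step) > loglik(x):
            x += step
        elif loglik(x - step) > loglik(x):
            x -= step
        else:
            step /= 2.0
    return x

random.seed(0)
finals = sorted(loglik(local_search(random.uniform(-5, 5)))
                for _ in range(20))
best, runner_up = finals[-1], finals[-2]
# "Replicated" means at least two starts reached the same best value.
print(abs(best - runner_up) < 1e-4)
```

With too few starts, the best value may be hit only once (or not at all), which is exactly the situation the Mplus warning flags; adding starts raises the chance that the global maximum is found and confirmed.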
 Ali posted on Thursday, September 29, 2016 - 3:40 am
I used STARTS = 200 50, and Mplus ran for 24 hours. But it gave the same warning message: "WARNING: THE BEST LOGLIKELIHOOD VALUE WAS NOT REPLICATED. THE SOLUTION MAY NOT BE TRUSTWORTHY DUE TO LOCAL MAXIMA. INCREASE THE NUMBER OF RANDOM STARTS."
 Bengt O. Muthen posted on Thursday, September 29, 2016 - 8:44 am
I assume your 1-class 3PL model ran fine and showed significant guessing. If not, there is no use in going to more classes. If yes, your non-replication of the logL indicates that the likelihood is too flat which means that there is not enough information in the data to define a 2-class 3PL model.