How to fit a 3PL or 4PL IRT model
 Caoshang posted on Saturday, May 07, 2011 - 11:23 pm
Dear professor:
I want to fit the 3-parameter IRT model:
a = item discrimination
b = item difficulty
c = item "guessing"

4-parameter model:
add d= item 'carelessness'

Can you give me some examples or advice on how to write the code to set this up in Mplus? I can't find any useful information, especially typical examples, on the Internet.

Thank you for your patience.
 Linda K. Muthen posted on Sunday, May 08, 2011 - 6:22 am
Mplus handles only a and b. For this, see Example 5.5.
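A minimal input along the lines of Example 5.5, which estimates a and b only (a 2PL model), is sketched below; the file and item names are placeholders:

```
TITLE:    2PL IRT sketch
DATA:     FILE = data.dat;        ! placeholder file name
VARIABLE: NAMES = u1-u20;         ! placeholder item names
          CATEGORICAL = u1-u20;
ANALYSIS: ESTIMATOR = ML;         ! ML with the logit link gives the usual IRT metric
MODEL:    f BY u1-u20*;           ! free loadings: discriminations a
          f@1;                    ! factor variance fixed at 1 for identification
```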
 Matthew E Foster posted on Thursday, April 09, 2015 - 12:25 pm
Since this post in 2011, has Mplus added features so that it can estimate a 3 parameter logistic model?

Thank you for your time.
Matt
 Bengt O. Muthen posted on Thursday, April 09, 2015 - 12:58 pm
It's in development.
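3PL and 4PL estimation was added in a later release; the models are requested with the (3pl) or (4pl) setting on the CATEGORICAL list. A minimal single-class sketch, assuming a current version of Mplus and placeholder item names:

```
VARIABLE: NAMES = u1-u32;
          CATEGORICAL = u1-u32 (4pl);   ! use (3pl) for guessing only
ANALYSIS: ALGORITHM = INTEGRATION;
MODEL:    f BY u1-u32*;
          f@1;
```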
 Ali posted on Wednesday, September 21, 2016 - 9:16 am
Hello
I am trying to fit my data with the 3PL IRT model and the mixture 3PL model. I read the Mplus IRT paper, but I was quite confused, so I have a few questions.
(1) For the parameterization, is the coefficient beta the same as lambda, and is tau_1 the same as the tau in equation (20) (p. 5) of the Mplus IRT paper? If so, a (item discrimination) = lambda, b (item difficulty) = tau_1/lambda, and c (guessing parameter) = tau_2?
(2) For the code, I am not sure my mixture 3PL code is right. I have 32 dichotomous items. In the code, I fixed both latent classes to have a factor mean of 0 and variance of 1, but the loadings and thresholds (i.e., tau_1 and tau_2) were freely estimated. The code is as follows:
USEVARIABLES ARE U1-U32;
CATEGORICAL = U1-U32(3pl);
CLASSES = c(2);
ANALYSIS: TYPE = MIXTURE;
ALGORITHM=INTEGRATION;
STARTS = 100 20;
processor=2;
MODEL:%OVERALL%
f BY U1-U32*;
f@0;
f@1;
%c#1%
[f@0];
f@1;
[u1$1-u32$1];
[u1$2-u32$2];
%c#2%
[f@0];
f@1;
[u1$1-u32$1];
[u1$2-u32$2];
 Bengt O. Muthen posted on Wednesday, September 21, 2016 - 11:20 am
(1) Yes, beta in (39) is the same as lambda in (20). The IRT translation to discrimination a is shown in (21) and to difficulty b in (22). The guessing (c) translation is shown in (40).
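For reference, these pieces assemble the standard 3PL response function (a sketch of the target model; the exact mapping from tau_2 to c is the one given in (40)):

$$
P(u_j = 1 \mid \theta) \;=\; c_j + (1 - c_j)\,\frac{1}{1 + e^{-a_j(\theta - b_j)}},
\qquad a_j = \lambda_j, \quad b_j = \tau_{1j}/\lambda_j .
$$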

Input looks ok except you forgot to put a bracket around the first instance of f@0.

Also, you will likely need priors for the guessing parameters as shown in our IRT document.
 Ali posted on Thursday, September 22, 2016 - 10:58 am
Thanks!
As for setting a prior for a guessing parameter, should I set the prior the same way you set tau_2, from N(1,1)?
Suppose I have 32 items and 2 latent classes; I then set each tau_2 prior as N(1,1) for each item and each latent class. Does that mean I have to set 64 N(1,1) priors for tau_2?
 Bengt O. Muthen posted on Thursday, September 22, 2016 - 6:09 pm
Right. Use the list function. And try one class first.
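Putting the two suggestions together, a one-class 3PL input with the list function and the N(1,1) priors from the IRT document might look like this sketch:

```
VARIABLE:    USEVARIABLES = u1-u32;
             CATEGORICAL = u1-u32 (3pl);
ANALYSIS:    ALGORITHM = INTEGRATION;
MODEL:       f BY u1-u32*;
             [f@0];
             f@1;
             [u1$2-u32$2] (p1-p32);   ! the list function labels all 32 guessing thresholds
MODEL PRIORS:
             p1-p32 ~ N(1,1);
OUTPUT:      TECH1 TECH8;
```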
 Ali posted on Saturday, September 24, 2016 - 4:13 am
I tried to set a prior for the guessing parameter in the second class. But it gave me the warning message "WARNING: THE BEST LOGLIKELIHOOD VALUE WAS NOT REPLICATED. THE SOLUTION MAY NOT BE TRUSTWORTHY DUE TO LOCAL MAXIMA. INCREASE THE NUMBER OF RANDOM STARTS." Could this suggest that the mixture 3PL does not fit my data, and that I should fit a mixture 2PL or mixture 1PL instead?

Here is my code:
MODEL: %OVERALL%
f BY U1-U32*;
[f@0];
f@1;
%c#1%
[f@0];
f@1;
[u1$1-u32$1];
[u1$2-u32$2];
%c#2%
[f@0];
f@1;
[u1$1-u32$1];
[u1$2-u32$2](p1-p32);

MODEL PRIORS:
p1-p32~N(1,1);
OUTPUT: TECH1 TECH8;
 Linda K. Muthen posted on Saturday, September 24, 2016 - 2:19 pm
You need to increase the number of random starts. You did not reach a global solution. Try STARTS = 200 50; or more. The second number should be approximately 1/4 of the first.
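For example, keeping the second number at about a quarter of the first, a further increase would be:

```
ANALYSIS: TYPE = MIXTURE;
          ALGORITHM = INTEGRATION;
          STARTS = 400 100;   ! final-stage optimizations ~ 1/4 of initial starts
```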
 Ali posted on Thursday, September 29, 2016 - 3:40 am
I used STARTS = 200 50, and Mplus ran for 24 hours. But it gave the same warning message: "WARNING: THE BEST LOGLIKELIHOOD VALUE WAS NOT REPLICATED. THE SOLUTION MAY NOT BE TRUSTWORTHY DUE TO LOCAL MAXIMA. INCREASE THE NUMBER OF RANDOM STARTS."
 Bengt O. Muthen posted on Thursday, September 29, 2016 - 8:44 am
I assume your 1-class 3PL model ran fine and showed significant guessing. If not, there is no use in going to more classes. If yes, your non-replication of the logL indicates that the likelihood is too flat which means that there is not enough information in the data to define a 2-class 3PL model.
 Tammy Tolar posted on Wednesday, January 25, 2017 - 9:07 am
I am running a 3PL Mixture Model with priors on my laptop. I am currently in day 3 of running the model (200 starts, 50 optimizations, 3 processors; ~60,000 observations, a subset of a data file of ~350,000 observations). It is currently running the optimizations.

Assuming I get something reasonable out of this, I will have 3 more versions of the same model to run (and if I do not get something reasonable, even more models). To reduce the overall time to get the analyses done, I am trying to run another model on a remote server but cannot get it to work. To diagnose the problem, I tried running the exact same model that is currently running on my laptop (same input and data files). I get the following error:

*** ERROR
Categorical variable set with E7 contains less than 2 categories.

To me, it seems the data file is being misread. If I remove all the 3PL-related code and reduce it to a 2PL model, it reads the file fine (the reported category frequencies are accurate) and runs the model.

Is there something about running a 3PL model that requires access to certain types of computer resources (unlike other models) that may be causing this problem? The remote server has more restrictions on it because it is a shared network resource, so . . .? Or do you have any other suggestions for troubleshooting this problem?
 Linda K. Muthen posted on Thursday, January 26, 2017 - 6:46 am
It is likely the data are being misread. The most common reasons for this are blanks in the data set or that the number of variable names is not the same as the number of columns in the data set. If you can't see the problem, send your files and license number to support@statmodel.com.