Influence Diagnostics in Mplus
Mplus Discussion > Structural Equation Modeling
 Phil Wood posted on Thursday, February 15, 2007 - 7:03 am
Hi folks,
For the general education of my students, I did a small two-predictor regression in SAS and then in Mplus:
analysis: type = meanstructure;
model:
gpa on hsrank act;
hsrank with act;
[hsrank* act* gpa*];
output: residual tech1 standardized;
savedata: file = admissioninf.txt;
save is loglikelihood influence cooks;
I notice that Cook's D from Mplus doesn't match that reported by SAS (in fact, the two sets of values only correlate .86).
Any ideas as to why? Thanks!
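For reference, the classical OLS Cook's distance that SAS PROC REG reports can be sketched in Python. The data below are simulated stand-ins for hsrank, act, and gpa (illustrative only, not the original admissions data):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 3  # cases and model columns (intercept + 2 predictors)

# Simulated stand-ins for the variables in the post
hsrank = rng.normal(50, 10, n)
act = rng.normal(22, 4, n)
gpa = 1.0 + 0.02 * hsrank + 0.05 * act + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), hsrank, act])
beta = np.linalg.lstsq(X, gpa, rcond=None)[0]
resid = gpa - X @ beta
mse = resid @ resid / (n - p)  # unbiased error-variance estimate

# Leverages: diagonal of the hat matrix H = X (X'X)^-1 X'
h = np.diag(X @ np.linalg.solve(X.T @ X, X.T))

# Classical OLS Cook's distance for each case
cooks_d = resid**2 * h / (p * mse * (1 - h) ** 2)
```

One plausible source of the imperfect correlation is that Mplus computes a generalized Cook's distance from the ML parameter estimates rather than this regression-specific formula, but the thread itself does not settle the cause.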
 Linda K. Muthen posted on Thursday, February 15, 2007 - 9:05 am
I would need to see the SAS output and the Mplus input, data, and output to explain this. Please send these files to
 Bert Weijters posted on Friday, February 13, 2009 - 12:39 pm
I'm running alternative CFA models and want to identify individuals to whom specific models do/don't apply.
1. Is there a way to get Mahalanobis distances and loglikelihood contributions that are model specific (e.g., Mahalanobis distance using the model-implied covariance matrix)?

2. Where can I find more information on how the influence variable is computed exactly? The distribution of the influence scores is somewhat awkward (e.g., 8 highly negative values, then a major cluster of respondents close to but above zero, and then a group of positive values). Can I detect if a value is not valid because of model nonconvergence? I suspect the models may converge to different local optima (sample sizes vary around 50).
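On point 1, a model-based Mahalanobis distance simply substitutes the model-implied mean vector and covariance matrix for their sample counterparts. A minimal Python sketch, with a made-up "implied" covariance standing in for whatever a fitted CFA would produce:

```python
import numpy as np

def mahalanobis_sq(data, mu, sigma):
    """Squared Mahalanobis distance of each row of `data`
    from mean `mu` under covariance `sigma`."""
    diff = data - mu
    sigma_inv = np.linalg.inv(sigma)
    # Row-wise quadratic form diff_i' sigma_inv diff_i
    return np.einsum('ij,jk,ik->i', diff, sigma_inv, diff)

# Toy data; `sigma_implied` is a hypothetical model-implied matrix
rng = np.random.default_rng(1)
data = rng.multivariate_normal(np.zeros(3), np.eye(3), size=50)
mu_implied = np.zeros(3)
sigma_implied = np.array([[1.0, 0.5, 0.5],
                          [0.5, 1.0, 0.5],
                          [0.5, 0.5, 1.0]])
d2 = mahalanobis_sq(data, mu_implied, sigma_implied)
```

Comparing d2 computed under two competing models' implied moments is one way to ask which model a given case fits better, though this is an illustration rather than an Mplus feature.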
 Linda K. Muthen posted on Sunday, February 15, 2009 - 10:52 am
See pages 606-607 of the Mplus User's Guide and the references mentioned there.
 Bert Weijters posted on Monday, February 23, 2009 - 8:37 am
I presume that the INFLUENCE command reports the difference in the value of the optimized criterion when a case is deleted. The question I still have is: which criterion?

(I checked the reference by Cook & Weisberg (1982) (free download at …). I compared individual influence values to model fit results including/excluding the case I'm looking at, but I can't find the link. From what I see, it's not the change in LL, -2LL, or chi-square, unless I'm doing something wrong.)
 Bengt O. Muthen posted on Monday, February 23, 2009 - 11:14 am
The criterion is the loglikelihood.

You exclude case i and compute the estimates. Then you look at L1 - L2(i), where L1 is the usual ML loglikelihood based on all the data, while L2(i) is the loglikelihood based on all the data evaluated at the parameter estimates obtained with case i excluded.
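That recipe can be sketched for a toy univariate normal model (a minimal, hypothetical illustration of the L1 - L2(i) computation, not Mplus's internal code):

```python
import numpy as np

def loglik_normal(y, mu, sigma2):
    """Normal loglikelihood of sample y at mean mu, variance sigma2."""
    n = len(y)
    return -0.5 * n * np.log(2 * np.pi * sigma2) - ((y - mu) ** 2).sum() / (2 * sigma2)

rng = np.random.default_rng(2)
y = rng.normal(0, 1, 50)

# L1: full-data loglikelihood at the full-sample ML estimates
mu_full, s2_full = y.mean(), y.var()  # ML variance uses /n
L1 = loglik_normal(y, mu_full, s2_full)

influence = np.empty(len(y))
for i in range(len(y)):
    y_i = np.delete(y, i)
    mu_i, s2_i = y_i.mean(), y_i.var()  # ML estimates without case i
    # L2(i): full-data loglikelihood evaluated at the delete-one estimates
    L2_i = loglik_normal(y, mu_i, s2_i)
    influence[i] = L1 - L2_i
```

Because the full-sample estimates maximize the full-data loglikelihood, L1 - L2(i) is never negative: a large value flags a case whose removal moves the estimates far from the full-sample solution. This also matches Bert's observation that the influence values are not simply the change in LL or -2LL between two fitted models.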
 Phil Wood posted on Wednesday, August 03, 2011 - 11:36 am
Would it be possible to get influence diagnostics (such as Mahalanobis distance or influence) for Bayesian models? I was just wondering, given that they're an option for ML. Do you recommend that we calculate these in ML and then use that as a basis for excluding observations in Bayes?
 Linda K. Muthen posted on Wednesday, August 03, 2011 - 3:20 pm
These types of diagnostics are not currently available with Bayes. Looking at ML would be one alternative.