Message/Author
|
Phil Wood posted on Thursday, February 15, 2007 - 7:03 am
|
|
|
Hi folks, For the general education of my students, I ran a small two-predictor regression in SAS and then in Mplus:

analysis: type = meanstructure;
model: gpa on hsrank act;
  hsrank with act;
  [hsrank* act* gpa*];
output: residual tech1 standardized;
savedata: file = admissioninf.txt;
  save is mahalanobis loglikelihood influence cooks;

I notice that Cook's D from Mplus doesn't match the values reported by SAS (as a matter of fact, the two only correlate .86). Any ideas as to why? Thanks!
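For readers comparing the two programs, the classical OLS Cook's D (the formula SAS PROC REG reports) can be sketched as below. This is a minimal illustration on simulated data; the variable names (hsrank, act, gpa) are borrowed from the post, and the data are made up. Note that a likelihood-based influence measure, as produced under ML estimation, is defined differently from this residual/leverage formula, which is one possible source of an imperfect correlation between the two.

```python
import numpy as np

# Simulated stand-ins for the poster's variables (hypothetical data)
rng = np.random.default_rng(0)
n = 100
hsrank = rng.normal(50, 10, n)
act = rng.normal(22, 4, n)
gpa = 1.0 + 0.02 * hsrank + 0.05 * act + rng.normal(0, 0.4, n)

# Design matrix with intercept
X = np.column_stack([np.ones(n), hsrank, act])
beta = np.linalg.lstsq(X, gpa, rcond=None)[0]
resid = gpa - X @ beta

p = X.shape[1]                        # number of regression parameters
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)  # leverages (hat-matrix diagonal)
s2 = resid @ resid / (n - p)          # OLS error variance, denominator n - p

# Classical Cook's D for each observation
cooks_d = resid**2 * h / (p * s2 * (1 - h) ** 2)
```

An ML-based variant would replace the n - p denominator with n and work from the loglikelihood, so the two sets of values need not agree case by case.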
|
|
I would need to see the SAS output and the Mplus input, data, and output to explain this. Please send these files to support@statmodel.com.
|
|
I'm running alternative CFA models and want to identify individuals to whom specific models do or don't apply.

1. Is there a way to get Mahalanobis distances and loglikelihood contributions that are model specific (e.g., Mahalanobis distance computed using the model-implied covariance matrix)?

2. Where can I find more information on how the influence variable is computed exactly? The distribution of the influence scores is somewhat awkward (e.g., 8 highly negative values, then a major cluster of respondents close to but above zero, and then a group of positive values). Can I detect whether a value is invalid because of model nonconvergence? I suspect the models may converge to different local optima (sample sizes vary around 50).
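For question 1, the idea of a model-specific Mahalanobis distance can be sketched as follows. This is a hedged illustration on simulated data: `implied_cov` here is just a placeholder matrix, standing in for the model-implied covariance one would take from the fitted model (e.g., from the RESIDUAL output of the software used).

```python
import numpy as np

def mahalanobis_sq(data, mean, cov):
    """Squared Mahalanobis distance of each row of `data`
    from `mean`, under covariance matrix `cov`."""
    dev = data - mean
    cov_inv = np.linalg.inv(cov)
    # Quadratic form dev_i' * cov_inv * dev_i for every row i
    return np.einsum('ij,jk,ik->i', dev, cov_inv, dev)

# Hypothetical 3-indicator data set of 50 cases
rng = np.random.default_rng(1)
y = rng.multivariate_normal([0.0, 0.0, 0.0], np.eye(3), size=50)

sample_cov = np.cov(y, rowvar=False)   # unstructured sample covariance
implied_cov = np.eye(3)                # placeholder for Sigma(theta_hat)

d2_sample = mahalanobis_sq(y, y.mean(axis=0), sample_cov)
d2_implied = mahalanobis_sq(y, y.mean(axis=0), implied_cov)
```

Comparing the two sets of distances shows how far each case sits from the center under the unrestricted covariance versus under a particular model's implied covariance.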
|
|
See pages 606-607 of the Mplus User's Guide and the references mentioned there.
|
|
I presume that the INFLUENCE option reports the difference in the value of the optimized criterion when a case is deleted. The question I still have is: which criterion? I checked the reference by Cook & Weisberg (1982) (free download at http://conservancy.umn.edu/handle/37076) and compared individual influence values to model fit results including/excluding the case I'm looking at, but I can't find the link. From what I see it's not the change in LL, -2LL, or chi-square, unless I'm doing something wrong.
|
|
The criterion is the loglikelihood. You exclude case i and compute the estimates. Then you look at the difference L1 - L2(i), where L1 is the usual ML loglikelihood based on all the data and L2(i) is the loglikelihood for all the data evaluated at the parameter estimates obtained with case i excluded.
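The definition above can be sketched for a toy univariate normal model; the data and model here are made up purely for illustration. L1 is the full-data loglikelihood at the full-data ML estimates, and L2(i) re-evaluates the full data at the estimates obtained after deleting case i. Because the full-data estimates maximize the full-data loglikelihood, L1 - L2(i) is always nonnegative.

```python
import numpy as np

def loglik_normal(y, mu, sigma2):
    """Gaussian loglikelihood of the whole sample y at (mu, sigma2)."""
    n = len(y)
    return -0.5 * n * np.log(2 * np.pi * sigma2) - ((y - mu) ** 2).sum() / (2 * sigma2)

# Hypothetical sample
rng = np.random.default_rng(2)
y = rng.normal(0.0, 1.0, 60)

# L1: full data at the full-data ML estimates (ML variance uses denominator n)
mu_full, s2_full = y.mean(), y.var()
L1 = loglik_normal(y, mu_full, s2_full)

influence = np.empty(len(y))
for i in range(len(y)):
    yi = np.delete(y, i)
    mu_i, s2_i = yi.mean(), yi.var()      # re-estimate without case i
    # L2(i): the FULL data evaluated at the delete-one estimates
    L2_i = loglik_normal(y, mu_i, s2_i)
    influence[i] = L1 - L2_i
```

Large values flag cases whose deletion moves the parameter estimates enough to noticeably lower the full-data loglikelihood.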
|
Phil Wood posted on Wednesday, August 03, 2011 - 11:36 am
|
|
|
Would it be possible to get influence diagnostics (such as Mahalanobis or influence) for Bayesian models? I was just wondering, given that they're an option for ML. Do you recommend that we calculate these in ML and then use them as a basis for excluding observations in Bayes? Thanks!
|
|
These types of diagnostics are not currently available with Bayes. Looking at ML would be one alternative.
|