I have a 4-indicator measurement model that performed well in the "between" analysis but not in the "within" analysis. I either get a model that will not converge, or I get negative signs on some of the indicators. I have tried setting lambdas to 1 on different indicators, different start values for the indicators, and different combinations of correlated measurement errors among the indicators, but nothing seems to work. Any suggestions would be appreciated.
bmuthen posted on Saturday, September 23, 2000 - 2:34 am
Unless you have already tried this, a regular analysis of the pooled-within sample covariance matrix printed by Mplus is a good way to examine the within structure. Here, the usual procedures for finding a well-fitting model can be used. Since there are only 4 indicators, it is easy to look at the sample covariances and see whether negative loadings are to be expected. Check also that no variable has a considerably larger variance than the others.
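A minimal Mplus sketch of this two-run approach, with hypothetical file and variable names (mydata.dat, spw.dat, y1-y4, and cluster variable clus are placeholders): first print the pooled-within matrix with a TWOLEVEL BASIC run, then fit the single-level CFA to that matrix in a separate run.

```
! Run 1 (assumed names): print sample statistics,
! including the pooled-within covariance matrix S_PW
DATA:      FILE = mydata.dat;
VARIABLE:  NAMES = clus y1-y4;
           USEVARIABLES = y1-y4;
           CLUSTER = clus;
ANALYSIS:  TYPE = TWOLEVEL BASIC;
OUTPUT:    SAMPSTAT;

! Run 2 (separate input file): analyze the printed S_PW directly
DATA:      FILE = spw.dat;
           TYPE = COVARIANCE;
           NOBSERVATIONS = 450;   ! roughly N minus the number of clusters
VARIABLE:  NAMES = y1-y4;
MODEL:     f BY y1-y4;
```

If the fit here is poor, the loading signs and the relative variances can be inspected directly in the SAMPSTAT output before returning to the two-level model.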
Anonymous posted on Tuesday, May 25, 2004 - 7:12 am
I am trying a two-level CFA model with four latent variables, each measured by 3 or 4 indicators. As suggested by Muthén (1994), in the first step I am running a pooling single-level model, but several between-level indicators have lambdas greater than 1, and several are negative. Following the output's suggestion, I fixed the highest-loading indicator of each latent variable to 1, but the output shows: NO CONVERGENCE. NUMBER OF ITERATIONS EXCEEDED. So what is the problem, and how should I solve it?
In addition, the second step, checking the ICCs, goes well, and the third step, running the pooled-within model, also gives reasonable results. But the final step, the 2-level analysis, produces a FATAL ERROR: THERE IS NOT ENOUGH MEMORY SPACE TO RUN THE PROGRAM ON THE CURRENT INPUT FILE. THE ANALYSIS REQUIRES 4 DIMENSIONS OF INTEGRATION RESULTING IN A TOTAL OF 50625 INTEGRATION POINTS. THIS MAY BE THE CAUSE OF THE MEMORY SHORTAGE. YOU CAN TRY TO FREE UP SOME MEMORY BY CLOSING OTHER APPLICATIONS THAT ARE CURRENTLY RUNNING. ANOTHER SUGGESTION IS CLEANING UP YOUR HARD DRIVE BY DELETING UNNECESSARY FILES.
But my computer currently has about 20 GB of space available; is that still not enough, or should I correct something in my model?
The failure of your first step indicates that you should modify your model, perhaps using exploratory factor analysis. Given the fatal error message in your final step, it sounds like you have specified some of your indicators as categorical. The 4 steps in Muthén (1994) were intended for continuous outcomes. The 4-dimensional integration in the final step is memory demanding because the combination of many integration points and a large sample size creates a huge memory need. You can reduce this to, say, 10 integration points per dimension by using the option INTEGRATION = 10 (see the User's Guide). But first work on Step 1.
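For reference, a sketch of the INTEGRATION option in the ANALYSIS command. The default of 15 points per dimension gives 15^4 = 50625 points with 4 dimensions of integration (matching the error message above); 10 points per dimension would give 10^4 = 10000.

```
ANALYSIS:  TYPE = TWOLEVEL;
           ALGORITHM = INTEGRATION;
           INTEGRATION = 10;   ! 10 points per dimension instead of the default 15
```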
Anonymous posted on Wednesday, May 26, 2004 - 9:19 am
Thanks for your quick answer. Yes, I have several categorical indicators at the between level. But could I still follow the 4 steps simply by declaring those variables as categorical in the VARIABLE command?
And I tried an EFA of the pooling model, exploring 5-6 factors, but I got the following results:
THE INPUT SAMPLE CORRELATION MATRIX IS NOT POSITIVE DEFINITE. THE ESTIMATES GIVEN BELOW ARE STILL VALID. PROBLEM OCCURRED IN EXPLORATORY ANALYSIS WITH 5 FACTOR(S). THE PROGRAM HAD PROBLEMS MINIMIZING THE FUNCTION. TOO MANY FACTORS HAVE PROBABLY BEEN USED.
But based on theory (and because the 2-level analysis demands separate factors at both levels), I need at least five factors in the pooling model. Any suggestions?
bmuthen posted on Thursday, May 27, 2004 - 11:07 am
When you say "pooling model" I think you mean Step 1 analysis of the total covariance (correlation) matrix, referred to as S_T in my paper; the word "pooling" is a bit confusing since analysis of the pooled-within covariance matrix is Step 3.
If you hypothesize 4 within factors and 1 between factor, this does not mean that you would necessarily get 5 factors in Step 1; you may get fewer because within and between factors get somewhat confounded. The EFA error message suggests that fewer than 5 factors should be attempted. And even if theory says you should get 4 within factors, that does not mean your factor analysis will confirm it; less than perfect validity of the measurements may distort the intent of the theory. It is important to settle these Step 1 matters before continuing.
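A Step 1 EFA over a range of factor solutions can be requested in a single run. A minimal sketch with hypothetical file and variable names; the range 1-4 stops below the 5-factor solution that produced the error:

```
DATA:      FILE = mydata.dat;
VARIABLE:  NAMES = y1-y15;    ! placeholder item names
ANALYSIS:  TYPE = EFA 1 4;    ! extract 1 through 4 factors and compare their fit
```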
Anonymous posted on Friday, May 28, 2004 - 12:33 am
Yes, in step 1 I am doing S_T. And I got exactly the "confounding" problem you mentioned, not only across the two levels but also with the outcome indicators (the outcome variable is also a latent variable with 3 continuous indicators), so I have no idea what the use of doing step 1 is. I cannot get valid factors at both levels, and even the indicators of the independent variables are confounded with those of the outcomes. Does this mean that my model is problematic?
I tried just dropping some of the misspecifications revealed by step 1 and going on to step 4 (steps 2 and 3 work well), but step 4 does not work. I also tried S_B, and it seems that the problem comes from S_B. So should I also run an EFA at the between level?
Thank you very much for your time and help.
bmuthen posted on Saturday, May 29, 2004 - 10:53 am
Regarding the question in your first paragraph - yes, if the loadings of your outcome indicators don't behave well your model is most likely problematic. If you haven't already, I would go back and look at the EFA here.
Regarding your question in the last paragraph, EFA on the estimated Sigma(B) is useful if you have many clusters (say > 50) - see Muthen (1994).
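One way to set this up, sketched with assumed names: copy the estimated Sigma(B) printed in the sample-statistics output into its own file and submit it to a single-level EFA as a covariance matrix, using the number of clusters as the sample size.

```
DATA:      FILE = sigma_b.dat;   ! estimated between covariance matrix, copied from output
           TYPE = COVARIANCE;
           NOBSERVATIONS = 60;   ! set to the number of clusters (here assumed > 50)
VARIABLE:  NAMES = y1-y12;       ! placeholder item names
ANALYSIS:  TYPE = EFA 1 3;
```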
eric duku posted on Thursday, January 12, 2006 - 11:54 am
I am running a 2-level CFA with one latent factor and 26 items. I have successfully run the first 3 steps suggested by Muthén (1994). On the final 2-level model, I get the following error message:
THE LOGLIKELIHOOD DECREASED IN THE LAST EM ITERATION. CHANGE YOUR MODEL AND/OR STARTING VALUES.
THE MODEL ESTIMATION DID NOT TERMINATE NORMALLY DUE TO AN ERROR IN THE COMPUTATION. CHANGE YOUR MODEL AND/OR STARTING VALUES.
I used starting values from the between and within models, to no avail. My next option is to try an EFA to see what I get.
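Starting values can be supplied directly in the MODEL command with the * syntax. A sketch assuming items named y1-y26 and illustrative values standing in for the estimates from the separate within and between runs:

```
MODEL:
  %WITHIN%
  fw BY y1-y26*0.7;   ! free all loadings, starting each at (say) the within-model estimate
  fw@1;               ! identify the factor by fixing its variance instead
  %BETWEEN%
  fb BY y1-y26*0.5;
  fb@1;
```

Item-specific values (y1*0.71 y2*0.64 ...) can also be listed one by one instead of using a single value for the whole range.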
Hello! I am fitting a measurement model, and then an SEM, for a measure that has three latent factors (approximately 6 indicators each) and one second-order factor, modeled at both the within and between levels.
When I run just the measurement model, it looks good, but when I add covariates at level 1, some of the factor loadings at the between level become non-significant, the level 2 fit decreases, etc. Is this because at the between level we are modeling the random intercepts of each item as factor indicators, so that as level 1 multicollinearity is introduced, or more level 1 variance is explained, those random intercepts change? Or is this more likely a mistake in my syntax? In other words, is this a feature or a bug?
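For reference, a skeleton of the kind of model described, with hypothetical item names a1-c6 and level-1 covariates x1-x2. Note that the between-level factor indicators are the items' random intercepts, and the level-1 covariates enter only in the %WITHIN% part:

```
MODEL:
  %WITHIN%
  fw1 BY a1-a6;
  fw2 BY b1-b6;
  fw3 BY c1-c6;
  gw BY fw1-fw3;   ! second-order factor at the within level
  gw ON x1 x2;     ! level-1 covariates
  %BETWEEN%
  fb1 BY a1-a6;
  fb2 BY b1-b6;
  fb3 BY c1-c6;
  gb BY fb1-fb3;   ! second-order factor at the between level
```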