Fixed vs random coefficient?
 Fredrik Falkenström posted on Wednesday, May 30, 2018 - 8:56 am
Hi, I have a question. I first estimated a DSEM model with the regression of Y on X fixed at the within level; the estimated fixed effect of Y on X was -0.94. I then re-estimated the same model, but this time assigned a random slope to the regression of Y on X. I had thought that the mean of the random slope would be reasonably similar in value to the fixed effect from the previous model, but instead it was more than twice as large (-1.91). Is this possible, or do you think there must be an estimation error (e.g., poor convergence)?

Best regards,

Fredrik Falkenström
 Tihomir Asparouhov posted on Wednesday, May 30, 2018 - 2:02 pm
If the variable is a within-only variable (on the WITHIN= list), then yes, the mean of the random effect should be close to the fixed effect. If it is not, however, then no. In V8 you can resolve the issue by doing observed centering with the centering command; see http://statmodel.com/download/CentMedSlides.pdf
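
A minimal sketch of what observed centering with a random slope could look like in V8 (y, x, and id are placeholder names, not from your model):

VARIABLE:
CLUSTER = id;
DEFINE:
CENTER x (GROUPMEAN);   ! observed (cluster-mean) centering of x
ANALYSIS:
TYPE = TWOLEVEL RANDOM;
ESTIMATOR = BAYES;
MODEL:
%WITHIN%
s | y ON x;             ! random slope for the centered covariate
%BETWEEN%
y s;                    ! means/variances of the intercept and slope
y WITH s;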
 Fredrik Falkenström posted on Wednesday, May 30, 2018 - 9:40 pm
The variable has both within and between variances, so latent centering was used. The effect I'm studying is a lagged effect, so observed centering won't work due to Nickell's bias. What is the reason for the differing results when latent centering is used? If the issue is complex, perhaps you have some reading suggestions?

Best,

Fredrik
 Tihomir Asparouhov posted on Thursday, May 31, 2018 - 9:38 am
Latent centering is not available in Version 8 with random slopes for covariates. In Version 8.1 (coming out next week) we have switched to latent centering for random slopes as well. Only fixed slopes use latent centering in Version 8. Your random slope statement in V8 uses the hybrid method (uncentered). If you want to make a comparison in V8 between the random slope and fixed slope models, you will have to switch to observed centering for your random slope model ... or, better, wait until 8.1 is released.
 Fredrik Falkenström posted on Thursday, May 31, 2018 - 12:48 pm
I see, thanks, I had no idea latent centering didn't work when there was a random slope! I'll wait until version 8.1 is released then.

Fredrik
 Bengt O. Muthen posted on Thursday, May 31, 2018 - 3:31 pm
It's mentioned in the UG on pages 278-279. But in Version 8.1 we print a warning message to alert users that ML does this, mentioning the Bayes alternative with its full latent variable decomposition.
 Fredrik Falkenström posted on Wednesday, June 13, 2018 - 2:10 am
Hi again, I've now downloaded Version 8.1, but I still get a different estimate for the mean of the random slope in the random coefficient model than for the fixed effect in the fixed coefficient model. The mean of the random slope is almost 50% larger than the fixed effect estimate. The model I am using is fairly simple: N = 27 and T = 20.

Fixed coefficient model:
%WITHIN%
Y ON Y&1 X;

%BETWEEN%
Y; X;
Y with X;

Random coefficient model:
%WITHIN%
Y ON Y&1;
S1 | Y on X;

%BETWEEN%
Y; X; S1;
Y with X S1;
X with S1;
 Tihomir Asparouhov posted on Wednesday, June 13, 2018 - 9:59 am
Please send the example to support@statmodel.com

I have never seen anything like that. The two estimates are not supposed to be exactly identical, but 50% seems too high. At N = 27, though, that difference is most likely not significant and would disappear for larger samples.

You can switch to the RDSEM framework and compare the models: replace
Y ON Y&1;
with
Y^ ON Y^1;
When you do that, you can also compare the above two models to the regular multilevel models (with and without the random effect).

You can also look at S factor scores to see where the difference comes from.
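
For example, a sketch of the random coefficient model above in RDSEM form (same Y, X, and S1; LAGGED = Y(1) and ESTIMATOR = BAYES as before):

%WITHIN%
Y^ ON Y^1;      ! AR(1) moved to the within-level residual (RDSEM)
S1 | Y ON X;    ! random slope, as in the DSEM version

%BETWEEN%
Y; X; S1;
Y WITH X S1;
X WITH S1;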
 Jamie Griffith posted on Wednesday, June 27, 2018 - 11:19 am
Dear Mplus

I am conducting analyses of pain and urinary symptoms rated 0-10, collected every 2 weeks over 23 visits. I am following the analyses that you demonstrated in your webinar "Intensive Longitudinal Data Analysis Using Mplus".

I have four questions, which I hope you can speak to:

1) I understand that the estimation is Bayesian. Is it possible to conduct ML estimation for a frequentist analysis?

2) There were many clusters (individual subjects) flagged with the error:
WARNING: PROBLEMS OCCURRED IN SEVERAL ITERATIONS IN THE COMPUTATION OF THE STANDARDIZED ESTIMATES FOR SEVERAL
CLUSTERS. etc...

Would you recommend further diagnostics to ameliorate this warning, or is this normal behaviour in these analyses?

3) I understand that the priors are either Inverse-Wishart or Gaussian distributions.

Are these "informative" priors, and if so, do you have a recommendation of a sensitivity analysis?

4) Finally, I ran the analysis with the maximum number of iterations and no thinning. The PSR seems to stabilise at 1.014, which is the value at iteration 50000, but there are some iterations with a slightly lower value (e.g., 1.013 at iteration 48500).

Thanks so much for your insights.

Warm wishes

Jamie
 Tihomir Asparouhov posted on Wednesday, June 27, 2018 - 2:26 pm
1) Not at this point, but Bayes and ML are asymptotically equivalent, and even for small samples they usually yield estimates that are very close.

2) It is not unusual. You do not say what your sample size is, but if it is small, autoregressive parameters can have wide distributions which go out of bounds and result in such messages. You should also consider the issue of trends: if there are trends in the data you should model them, and possibly switch to RDSEM, i.e., change
Y ON Y&1;
to
Y ON time;
Y^ ON Y^1;
If you need more information on RDSEM, see
http://www.statmodel.com/download/RDSEM.pdf
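
As a sketch, the trend-plus-RDSEM specification could look like this (time is a placeholder for your time variable):

VARIABLE:
WITHIN = time;
MODEL:
%WITHIN%
Y ON time;      ! linear trend
Y^ ON Y^1;      ! lag-1 autoregression on the detrended residual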

3) The priors are uninformative and are reported in TECH8. We do not discourage you from doing a sensitivity analysis, but usually this is the last issue you want to take care of rather than the first. In most cases our default priors would be good enough.

4) That number of iterations is very high. Consider simplifying the model: if random effects have near-zero variance, convert them to fixed effects, and simplify the autoregressive structure. The fact that the number of iterations has to be so high before the PSR goes down indicates that the model could be somewhat poorly identified, which is why simplifying the model would be worthwhile.
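
For example (generic Y; a sketch of the kind of simplification meant here), a random autoregressive slope whose between-level variance is near zero can be replaced by a fixed coefficient:

! random AR(1):
! s | Y ON Y&1;
! fixed AR(1), when Var(s) is near zero:
Y ON Y&1;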
 Jamie Griffith posted on Thursday, June 28, 2018 - 4:11 pm
Dear Tihomir

Thanks for the responses. If it helps, the sample size is 370 and the total number of observations is 8210. I have sent the files to support@statmodel.com in case you want to look at other details.

There is already an autoregression built into the model (lag 1), but we will look into the RDSEM.

Regarding my question #4, I thought that the number of FBITERATIONS is fixed by the user. I had started with 5000 (the same as in the webinar). Although the model converged, there was a warning message: USE THE FBITERATIONS OPTION TO INCREASE THE NUMBER OF ITERATIONS BY A FACTOR OF AT LEAST TWO TO CHECK CONVERGENCE AND THAT THE PSR VALUE DOES NOT INCREASE.

Based on the message, I increased the FBITERATIONS: after 10000 iterations the PSR was 1.016; after 50000 it was 1.014. So it does not increase from 5000 to 10000, but I'm not sure if I'm missing any other problems with the model. I will definitely look into simplifying it.
 Jamie Griffith posted on Thursday, August 30, 2018 - 12:25 pm
Dear Tihomir

I have done the RDSEM as you suggested (see input below). Could I pose some questions to check the model? As a reminder, this is looking at autoregressions of pain and urinary symptoms with visits every 2 weeks, which is specified below using TINTERVAL.

1) Is it normal that the WITHIN and BETWEEN lines are empty?
2) I have added random slopes, sp and su, in addition to the autoregressions. Am I correct to understand that these are all estimated simultaneously, such that the overall trend statistically controls for the autoregressions and vice versa?

Thanks, Jamie Griffith

TITLE: M070 RDSEM pain and urinary symptoms with AR(1) and random slopes
VARIABLE:
WITHIN = ;
BETWEEN = ;
CLUSTER = pid;
LAGGED = pain(1) urin(1);
USEVARIABLES ARE
pid vnum pain urin;
MISSING ARE ALL (-999);
TINTERVAL = vnum(1);

ANALYSIS:
TYPE = TWOLEVEL RANDOM;
ESTIMATOR = BAYES;
BITERATIONS = (5000);
MODEL:
%WITHIN%
spp | pain^ ON pain^1;
suu | urin^ ON urin^1;
spu | pain^ ON urin^1;
sup | urin^ ON pain^1;

sp | pain;
su | urin;

%BETWEEN%
sp su spp suu spu sup pain urin WITH
sp su spp suu spu sup pain urin;
 Bengt O. Muthen posted on Friday, August 31, 2018 - 12:14 pm
1) Yes, because the variables vary on both levels.

2) Yes. But you mention "trend" and that suggests a need to regress on time like in growth modeling:

%Within%

trend | pain on time;

and the same for urin.

Here you also specify WITHIN = time; in the VARIABLE command.
 Jamie Griffith posted on Tuesday, September 04, 2018 - 1:14 pm
Dear Bengt

Thank you. So I added random slopes as you suggested:

sp | pain ON time;
su | urin ON time;

Our "timing" variable was visit number (vnum, spaced every two weeks). I used
DEFINE: time = vnum;
"time" is now a WITHIN variable and TINTERVAL was set to vnum (1).

The model runs and converges with no problem.

My only question is whether I have set this up correctly (i.e., using vnum for TINTERVAL and "time" - which is a copy of vnum - in the modelling).

I just want to ensure I haven't missed a detail... Thank you!

Cheers

Jamie Griffith

Selected input:
VARIABLE:
NAMES ARE pid vnum pain urin;
WITHIN = time;
BETWEEN =;
CLUSTER = pid;
LAGGED = pain(1) urin(1);
USEVARIABLES ARE
pid vnum pain urin time;
MISSING ARE ALL (-999);
TINTERVAL = vnum(1);

DEFINE:
time = vnum;

ANALYSIS:
TYPE = TWOLEVEL RANDOM;
ESTIMATOR = BAYES;
BITERATIONS = (5000);

MODEL:
%WITHIN%
spp | pain^ ON pain^1;
suu | urin^ ON urin^1;
spu | pain^ ON urin^1;
sup | urin^ ON pain^1;

sp | pain ON time;
su | urin ON time;

%BETWEEN%
sp su spp suu spu sup pain urin WITH
sp su spp suu spu sup pain urin;
 Bengt O. Muthen posted on Tuesday, September 04, 2018 - 2:42 pm
This looks correct, as long as the vnum values reflect the time passage between visits correctly (e.g., the difference between vnum 14 and 7 should reflect the same time difference as between 8 and 1).

Note also that the form of the trend can be checked by a cross-classified run in line with the video and handout for Short Course Topic 12, part 3 on our website.
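
A rough sketch of such a cross-classified run (an outline only; the Topic 12 handout is the authoritative reference, and details such as the lagged effects are omitted here):

VARIABLE:
CLUSTER = pid vnum;   ! subjects crossed with visit number
ANALYSIS:
TYPE = CROSSCLASSIFIED;
ESTIMATOR = BAYES;
MODEL:
%WITHIN%
pain urin;            ! within-cell residual variances
%BETWEEN pid%
pain urin;            ! subject-level effects
%BETWEEN vnum%
pain urin;            ! time-specific effects; plotting their factor scores against vnum shows the trend form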
 Jamie Griffith posted on Wednesday, September 05, 2018 - 2:31 pm
Dear Bengt

Thanks so much. Indeed vnum is spaced every two weeks, so each 1 unit is a fortnight.

I will look into the cross-classified analysis as well - Thanks for pointing me in this direction!

With much appreciation, best regards

Jamie
 Bengt O. Muthen posted on Thursday, September 06, 2018 - 3:55 pm
Hope it works out so you can send a paper on it.
 Jamie Griffith posted on Monday, September 10, 2018 - 11:45 am
Dear Bengt

I will be happy to send the paper when we are finished. I am still looking into the cross-classified approach, but for now the RDSEM approach seems to work very well.

I will keep you posted!

Thanks to you, Linda, Tihomir, and the rest of the Mplus team.

Cheers

Jamie