Standard errors of zero using R3STEP
Mplus Discussion > Latent Variable Mixture Modeling >
 Michelle Colder Carras posted on Thursday, August 14, 2014 - 5:53 am
I'm doing LCA with R3STEP using SUBPOPULATION, with some missing data, and finding that many standard errors are 0 in the latent class regression. The regression drops a total of 58 observations and uses 651. This problem occurs with 8 covariates (2 binary, 2 categorical, 4 standardized continuous) and 6 categorical latent class indicators.

All of the categorical covariates have standard errors of 0 in each of the 3 multinomial regressions no matter which class is used as the reference class. For the continuous covariates, this happens only sometimes. For example, when using class 4 as the reference class I get:
The columns in each row are Estimate, S.E., Est./S.E., and two-tailed p-value:

C#1 ON
DEPMST1 0.287 0.000 999.000 0.000
LONMST1 0.339 0.181 1.872 0.061

C#2 ON
DEPMST1 -0.460 0.137 -3.361 0.001
LONMST1 0.416 0.000 999.000 0.000

When I attempt to add 3 more continuous covariates, another 6 observations are dropped and even more SEs become 0. For example, with class 4 as the reference class, the SEs for the coefficients of both DEPMST1 and LONMST1 are now 0 in the C#1 regression, but the SE is still calculated for DEPMST1 in the C#2 regression.

Thank you for any insight/help you might have.

-Michelle Carras
 Linda K. Muthen posted on Thursday, August 14, 2014 - 9:04 am
It looks like the parameters are being fixed, not that the standard errors are zero. For further information, send the output and your license number to
 Sabrina Twilhaar posted on Monday, March 09, 2020 - 4:18 am
Based on literature indicating a non-linear relation between X and Y, I would like to include a quadratic term in the LPA model I'm analyzing with R3STEP (automatic). X is measured in full weeks (range: 24-31). When I start with a univariate model including only X, everything goes fine. When I include X + X^2, the SEs of the intercepts become zero (and Est./S.E. becomes ***** and 999). So something goes wrong here. Similarly, when I include X and X^2 in my full model (9 predictors in total, n = 1977), the SEs and p-values of all predictors become very small. This does not happen when X^2 is excluded from the model.

I use R3STEP with TYPE=IMPUTATION in the DATA command. I excluded X and X^2 from the USEVARIABLES list and included them as auxiliary variables during imputation. There are no missing values in X and X^2, and all correlations between predictors are <0.4.

I hope someone has an idea about what I'm doing wrong. Thanks a lot in advance!
 Sabrina Twilhaar posted on Monday, March 09, 2020 - 7:04 am
I might have found the solution, but I am not sure.

I think it might have to do with the fact that I used raw instead of orthogonal polynomials. Using orthogonal polynomials at least solved the problems I described above.

Do you think this makes sense?
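As a rough illustration of why orthogonal polynomials resolve the problem, here is a minimal numpy sketch (simulated data on the same 24-31 week scale, not the poster's actual data): QR-decomposing the raw design matrix [1, x, x^2] yields linear and quadratic columns that span the same space but are exactly uncorrelated, which is the same idea as R's poly(x, 2), up to scaling.

```python
import numpy as np

# Simulated predictor in full weeks on the 24-31 range from the post
# (illustrative only, not the poster's data).
rng = np.random.default_rng(1)
x = rng.integers(24, 32, size=200).astype(float)

# Raw design matrix [1, x, x^2]: the x and x^2 columns are nearly collinear.
raw = np.column_stack([np.ones_like(x), x, x**2])

# QR decomposition orthogonalizes the columns while preserving their span --
# analogous to R's poly(x, 2), up to scaling.
q, _ = np.linalg.qr(raw)
lin, quad = q[:, 1], q[:, 2]

# Because both columns are orthogonal to the constant column, they have
# mean zero, and their orthogonality makes the correlation essentially zero.
print(np.corrcoef(lin, quad)[0, 1])
```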
 Bengt O. Muthen posted on Monday, March 09, 2020 - 3:51 pm
Try subtracting the mean, Xbar, when creating X and X^2:

X - Xbar

(X - Xbar)*(X - Xbar)
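A quick numpy sketch of why this centering helps (simulated data on the 24-31 week scale described above, not the poster's data): because X sits far from zero, X and X^2 are almost perfectly collinear, which can destabilize the standard errors; centering removes most of that collinearity.

```python
import numpy as np

# Hypothetical predictor in full weeks, range 24-31, as in the post.
rng = np.random.default_rng(0)
x = rng.integers(24, 32, size=1977).astype(float)

# Raw quadratic term: nearly collinear with x because x is far from zero.
r_raw = np.corrcoef(x, x**2)[0, 1]

# Mean-centered terms, as suggested: (x - xbar) and (x - xbar)^2.
xc = x - x.mean()
r_centered = np.corrcoef(xc, xc**2)[0, 1]

print(f"corr(x, x^2) = {r_raw:.4f}")               # near 1
print(f"corr(x - xbar, (x - xbar)^2) = {r_centered:.4f}")  # much smaller
```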

Also, even if X and Y have a nonlinear relationship, this doesn't mean that X and C (the latent class variable) have such a relationship.
 Sabrina Twilhaar posted on Tuesday, March 10, 2020 - 4:13 am
Thank you very much. This seems a more straightforward option, and it gives basically the same results (in terms of SEs and p-values) as the orthogonal polynomials. As you correctly noted, unlike the relation between X and Y, the relation between X and C is best described by the linear term.
 Bengt O. Muthen posted on Tuesday, March 10, 2020 - 10:23 am