
Ben Kübler posted on Tuesday, May 06, 2008  9:57 am



Hi, does Mplus use a one-dimensional t-test or a two-dimensional one? 


If you are talking about the ratio of the parameter estimate to the standard error of the parameter estimate in column three of the results, this is a univariate z-test. 

Ben Kübler posted on Wednesday, May 07, 2008  4:23 am



Yes, I mean the third column of the results from the structural model ("SEM") with the command "Factor ON Factor", which shows the significance for the path. In LISREL a t-test is used, so we thought Mplus uses the same test? 


They call it a t-test in LISREL. It is really an approximate z-test, just like in Mplus. 

Ben Kübler posted on Tuesday, May 13, 2008  7:11 am



Thank you for this information. So do I have to use a t-value or z-value table? 


You would use a z-test table. 


I am interested in conducting a simple paired t-test (comparing time 1 to time 2 on the same variable) while using FIML to handle the missing data. However, I have not been able to find documentation on how to achieve this in Mplus. Is this possible in Mplus? Thank you. 


You can do this using MODEL TEST. MODEL: [y1] (p1); [y2] (p2); MODEL TEST: 0 = p1 - p2; 
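For readers following along, a complete input file for this kind of paired comparison might look as follows. This is only a sketch: the data file name, the variable names y1 and y2, and the missing-data code are all placeholders, not part of the original answer.

```
TITLE:     Paired comparison of two repeated measures via MODEL TEST;
DATA:      FILE = mydata.dat;          ! hypothetical file name
VARIABLE:  NAMES = y1 y2;
           MISSING = ALL (-999);       ! assumed missing-data code
ANALYSIS:  ESTIMATOR = ML;             ! ML with missing data gives FIML under MAR
MODEL:     [y1] (p1);                  ! mean of y1, labeled p1
           [y2] (p2);                  ! mean of y2, labeled p2
           y1 WITH y2;                 ! let the repeated measures covary
MODEL TEST: 0 = p1 - p2;               ! Wald test of equal means
```

The output section "Wald Test of Parameter Constraints" then gives the chi-square test of p1 = p2.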


Thank you very much. I just ran the data as you described. Can you provide some guidance on the values in the output that I should be using to determine whether the pairs are significantly different? Thanks again. 


In the results where fit statistics are given, you will find the results under Wald Test of Parameter Constraints. The null hypothesis is that the parameters are equal. If your p-value is greater than .05, you cannot reject this hypothesis. 

Jonathan posted on Thursday, November 07, 2013  12:44 pm



Hi there: I know that I can use MODEL TEST to run a Wald test, which is an omnibus test of whether coefficients are different from each other. So I could run MODEL TEST: 0 = a1 - a2; 0 = a2 - a3; 0 = a3 - a4; to see if any of the relationships I specified are different (please correct me on this if I am wrong). I would like to test whether *each* relationship is different from each other. So I would like to run something similar to this, but get separate output for each contrast that I specify. So in the output, I would hopefully have one test statistic for the a1 - a2 contrast; another for the a2 - a3 contrast; etc. Is this possible in Mplus? Thank you for your time. 


The test you show above tests all effects together. You would need to run them one at a time if you want separate tests. Alternatively, you could create new parameters in MODEL CONSTRAINT for each difference, and you will get a z-test for each one. 
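A sketch of the MODEL CONSTRAINT alternative, assuming a1, a2, and a3 are labels already attached to parameters in the MODEL command (as in the question above):

```
! a1, a2, a3 are assumed to label parameters defined in MODEL:
MODEL CONSTRAINT:
  NEW (d12 d23 d13);      ! one new parameter per pairwise difference
  d12 = a1 - a2;
  d23 = a2 - a3;
  d13 = a1 - a3;
```

Each NEW parameter appears in the output with its own estimate, standard error, and Est./S.E. ratio, which is the z-test for that contrast.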

Jon Stange posted on Saturday, December 19, 2015  8:32 pm



I have repeated-measures data across four time points, and I would like to conduct pairwise comparisons using FIML for missing data. I have been following the instructions above (from 24-Aug-2010) for the pairwise comparisons. However, it seems to only be using FIML to estimate missing data based on the maximum number of data points available for either of the variables in the pairwise comparison, rather than for the maximum number of data points in the whole data set (e.g., across times 1-4). So whereas I have 200 people in the full data set (who have completed time 1 measures), for any given pairwise comparison, even when using FIML for the pairwise comparison I wind up with perhaps 150 people in the comparison (e.g., if I’m using times 2 and 3, which have fewer cases with valid data). Since different cases are missing at each time point, this means that the different pairwise comparisons I’m running (e.g., T1-T2, T2-T3, T3-T4) aren’t actually on the same individuals. Is there a way to use FIML to estimate missing data for all 200 participants (i.e., using all available data for that variable across times 1-4, or for all variables in the data set)? Thank you very much for your help. 


You should not have only the two variables on your USEV list, but all variables. If this doesn't help, please send output to Support along with your license number. 
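To illustrate the advice, a sketch with all four time points on the USEVARIABLES list while comparing only times 2 and 3; the variable names and missing-data code are placeholders, and the WITH statements saturate the covariances so all variables contribute to the FIML likelihood:

```
VARIABLE:  NAMES = t1 t2 t3 t4;
           USEVARIABLES = t1 t2 t3 t4;  ! keep all four so FIML uses every case
           MISSING = ALL (-999);        ! assumed missing-data code
MODEL:     [t2] (p2);
           [t3] (p3);
           t1 WITH t2 t3 t4;            ! let all measures covary
           t2 WITH t3 t4;
           t3 WITH t4;
MODEL TEST: 0 = p2 - p3;
```

With all four variables in the analysis, each pairwise comparison draws on the same full set of cases.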

Jon Stange posted on Monday, January 11, 2016  1:29 pm



Thank you very much. I am just following up as I am planning to report these paired-samples tests in a research paper, and I would like to know what this type of statistical test is technically called, so that I can adequately describe it in the Analysis section. My understanding is that it is essentially a paired-samples t-test (since I am comparing one measure at two time points), but that it uses FIML. Since (per Linda Muthen's above comment on 8-24-2010) the p-value I report is under Wald Test of Parameter Constraints (the null hypothesis being that the parameters are equal), I am assuming that this means that this is a structural equation model of some sort. If you have any advice about how to briefly describe this analysis for repeated-measures tests, it would be helpful. Relatedly, if you happen to be aware of any papers that have used this method for repeated-measures tests, I would appreciate your directing me to them for reference, if you have them at hand. 


You need to analyze all outcomes in a single analysis to get the full benefit from using FIML. I would not stress the analogy with a paired t-test, but simply say that you are testing various detailed aspects of your estimated model. I don't know about papers, but I don't think reviewers used to SEM will have objections. 

Jon Stange posted on Wednesday, January 13, 2016  1:12 pm



Thank you. So to clarify possible wording to explain the model: The syntax below tests a structural equation model, and the “estimated model” tests whether the difference between the two variables differs significantly from zero, while using the other variables on the “use variables” list so that FIML can base the model on all available data points. Is that correct? USEVARIABLES ARE y1 y2 z1 z2 z3; MODEL: [y1] (p1); [y2] (p2); MODEL TEST: 0 = p1 - p2; 


Correct. I would change "and the estimated model tests whether..." to "and based on the estimated model one can test whether...". 

Anne Black posted on Monday, February 27, 2017  11:01 am



Hello, Regarding the comparisons above, how would one differentiate comparison of repeated measures from comparison of means from independent groups? 


For means from independent groups, the mean estimates are not correlated. Mplus takes that into account in MODEL TEST, in that the p1 and p2 estimates have zero estimated covariance. In other words, the MODEL TEST statement is the same. 
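A sketch of the independent-groups version using the GROUPING option; the variable names and the group coding (1/2) are assumptions, not part of the original answer:

```
VARIABLE:  NAMES = y group;
           GROUPING = group (1 = g1  2 = g2);  ! assumed group coding
MODEL:     [y] (p1);          ! mean of y in group g1
MODEL g2:  [y] (p2);          ! mean of y in group g2
MODEL TEST: 0 = p1 - p2;      ! same statement as the paired case
```

Because the two means come from different groups, their estimates have zero covariance, but the MODEL TEST statement is identical to the repeated-measures setup.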

EJ Horberg posted on Sunday, March 25, 2018  6:57 pm



Hello, I would like to do the same analysis as above (testing whether two means in my dataset are significantly different from each other), but with the addition of a covariate. What is the syntax for controlling for a third variable while testing the difference between two means? Thank you. 


With a covariate, you can directly test whether the intercepts are equal. But to test that the means are equal, you have to express the mean of the DV using model parameters and the mean of the covariate, like ymean = int + b*xmean; 
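One way this might be set up for two outcomes and one covariate; the names y1, y2, and x are placeholders, and mentioning [x] brings the covariate into the model so its mean can be labeled:

```
MODEL:     y1 ON x (b1);
           y2 ON x (b2);
           [y1] (i1);        ! intercept of y1
           [y2] (i2);        ! intercept of y2
           [x]  (mx);        ! mean of the covariate, now a model parameter
MODEL CONSTRAINT:
  NEW (mdiff);
  mdiff = (i1 + b1*mx) - (i2 + b2*mx);   ! model-implied mean difference
```

This simplifies to mdiff = (i1 - i2) + (b1 - b2)*mx, and the output gives an estimate, standard error, and z-test for mdiff.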

EJH posted on Tuesday, March 27, 2018  8:16 pm



Thank you for your help. I wrote the commands below, hoping to test the difference between VarA and VarB while controlling for our covariate of Gender. The output showed that the Wald test was significant (p = .004). May I interpret that to mean that, when controlling for Gender, there is a significant difference between VarA and VarB? model: VarA VarB on Gender; [VarA] (p1); [VarB] (p2); model test: 0 = p1 - p2; 


Yes. 
