Showing all 9 results
Peer reviewed
Park, Sunyoung; Natasha Beretvas, S. – Journal of Experimental Education, 2021
When selecting a multilevel model to fit to a dataset, it is important not only to choose a model that matches the characteristics of the data's structure, but also to include the appropriate fixed- and random-effects parameters. For example, when researchers analyze clustered data (e.g., students nested within schools), the multilevel model can be…
Descriptors: Hierarchical Linear Modeling, Statistical Significance, Multivariate Analysis, Monte Carlo Methods
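The clustered-data setting described in the abstract above can be sketched in code. This is a minimal illustration, not the authors' models: it simulates students nested within schools and fits a random-intercept multilevel model with statsmodels (all variable names and parameter values here are invented for the example).

```python
# Illustrative sketch (not the authors' procedure): a two-level
# random-intercept model for students nested within schools.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_schools, n_students = 20, 30
school = np.repeat(np.arange(n_schools), n_students)

# Each school gets its own random intercept; ses is a student-level
# fixed-effect predictor with true slope 2.0.
school_effect = rng.normal(0.0, 1.0, n_schools)[school]
ses = rng.normal(size=school.size)
score = 50 + 2.0 * ses + school_effect + rng.normal(0.0, 3.0, school.size)
df = pd.DataFrame({"score": score, "ses": ses, "school": school})

# Fixed effect of ses, random intercept per school.
fit = smf.mixedlm("score ~ ses", df, groups=df["school"]).fit()
print(fit.summary())
```

The fitted fixed-effect slope for `ses` should land near the simulated value of 2.0, while the group variance term absorbs the between-school heterogeneity.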
Peer reviewed
Olejnik, Stephen; Mills, Jamie; Keselman, Harvey – Journal of Experimental Education, 2000
Evaluated the use of Mallows's C(p) and Wherry's adjusted R-squared (R. Wherry, 1931) statistics to select a final model from a pool of model solutions using computer-generated data. Neither statistic identified the underlying regression model any better than, and usually less well than, the stepwise selection method, which itself was poor for…
Descriptors: Computer Simulation, Models, Regression (Statistics), Selection
Peer reviewed
Meshbane, Alice; Morris, John D. – Journal of Experimental Education, 1995
A method for comparing the cross-validated classification accuracies of linear and quadratic classification rules is presented under varying data conditions for the "k"-group classification problem. Separate-group and total-group proportions of correct classifications can be compared for the two rules, as is illustrated. (Author/SLD)
Descriptors: Classification, Comparative Analysis, Discriminant Analysis, Equations (Mathematics)
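The comparison Meshbane and Morris describe can be illustrated with a short sketch. This is not their method, only an analogous setup: cross-validated accuracy of a linear versus a quadratic discriminant rule on synthetic k-group data, using scikit-learn (the data-generating choices below are assumptions made for the example).

```python
# Illustrative sketch (not the authors' procedure): comparing
# cross-validated classification accuracy of linear vs. quadratic
# discriminant rules on synthetic k-group data.
import numpy as np
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
k, n_per_group, p = 3, 50, 4  # groups, cases per group, predictors

# Groups differ in both location and covariance scale -- a condition
# under which the quadratic rule can gain an edge over the linear one.
X = np.vstack([
    rng.normal(loc=g, scale=1.0 + 0.5 * g, size=(n_per_group, p))
    for g in range(k)
])
y = np.repeat(np.arange(k), n_per_group)

lda_acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
qda_acc = cross_val_score(QuadraticDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"linear rule CV accuracy:    {lda_acc:.3f}")
print(f"quadratic rule CV accuracy: {qda_acc:.3f}")
```

Separate-group accuracies, as in the abstract, could be obtained the same way by scoring each rule within each class of `y`.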
Peer reviewed
Kromrey, Jeffrey D.; La Rocca, Michela A. – Journal of Experimental Education, 1995
The Type I error rates and statistical power of nine selected multiple comparison procedures were compared in a Monte Carlo study. The Peritz, Ryan, and Fisher-Hayter tests were the most powerful, and differences among them were consistently small. Choosing among these procedures might therefore be based on their computational complexity. (SLD)
Descriptors: Comparative Analysis, Computation, Monte Carlo Methods, Power (Statistics)
Peer reviewed
Leung, Shing On; Sachs, John – Journal of Experimental Education, 2005
Quite often in data reduction, it is more meaningful and economical to select a subset of variables instead of reducing the dimensionality of the variable space with principal components analysis. The authors present a neglected method for variable selection called the BI-method (R. P. Bhargava & T. Ishizuka, 1981). It is a direct, simple method…
Descriptors: Statistical Analysis, Statistical Data, Selection, Psychological Studies
Peer reviewed
Gierl, Mark J.; Henderson, Diane; Jodoin, Michael; Klinger, Don – Journal of Experimental Education, 2001
Examined the influence of item parameter estimation errors across three item selection methods using the two- and three-parameter logistic item response theory (IRT) models. Tests created with the maximum no-target and maximum-target item selection procedures consistently overestimated the test information function. Tests created using the theta…
Descriptors: Estimation (Mathematics), Item Response Theory, Selection, Test Construction
Peer reviewed
Marin-Martinez, Fulgencio; Sanchez-Meca, Julio – Journal of Experimental Education, 1998
Used Monte Carlo simulations to compare Type I error rates and the statistical power of three tests in detecting the effects of a dichotomous moderator variable in meta-analysis. The highest statistical power was shown by the Zhs test proposed by J. Hunter and F. Schmidt (1990). Discusses criteria for selecting among the three tests. (SLD)
Descriptors: Comparative Analysis, Criteria, Meta Analysis, Monte Carlo Methods
Peer reviewed
Krishnan, K. S.; Clelland, R. C. – Journal of Experimental Education, 1973
This study's main purpose was to determine whether standard predictors of college success might perform more satisfactorily than usual if a two-valued criterion based on dropouts was employed. (Author)
Descriptors: Admission (School), Admission Criteria, College Freshmen, Discriminant Analysis
Peer reviewed
Penfield, Douglas A. – Journal of Experimental Education, 1994
Type I error rate and power of the t test, the Wilcoxon-Mann-Whitney test, the van der Waerden normal scores test, and the Welch-Aspin-Satterthwaite (W) test are compared for two simulated independent random samples drawn from nonnormal distributions. Conditions under which the t test and the W test are the best choices are discussed. (SLD)
Descriptors: Monte Carlo Methods, Nonparametric Statistics, Power (Statistics), Sample Size
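The kind of Monte Carlo comparison in Penfield's study can be sketched in a few lines. This is not his simulation design, only a minimal analogue: estimating the empirical Type I error rates of the pooled t test and the Welch test when both samples come from the same skewed (exponential) distribution, so the null hypothesis of equal means holds.

```python
# Illustrative sketch (not Penfield's design): Monte Carlo estimate of
# Type I error for the pooled t test vs. the Welch test under a skewed
# distribution with equal population means.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n1, n2, alpha, reps = 10, 20, 0.05, 2000

rej_t, rej_w = 0, 0
for _ in range(reps):
    # Exponential samples: nonnormal, but identical means under H0.
    x = rng.exponential(scale=1.0, size=n1)
    y = rng.exponential(scale=1.0, size=n2)
    p_t = stats.ttest_ind(x, y, equal_var=True).pvalue   # pooled t
    p_w = stats.ttest_ind(x, y, equal_var=False).pvalue  # Welch
    rej_t += p_t < alpha
    rej_w += p_w < alpha

print(f"t test empirical Type I error:     {rej_t / reps:.3f}")
print(f"Welch test empirical Type I error: {rej_w / reps:.3f}")
```

Rejection rates near the nominal 0.05 indicate the test holds its Type I error under that distribution; power comparisons would repeat the loop with unequal population means.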