Park, Sunyoung; Natasha Beretvas, S. – Journal of Experimental Education, 2021
When selecting a multilevel model to fit to a dataset, it is important not only to choose a model that matches the structure of the data but also to include the appropriate fixed- and random-effects parameters. For example, when researchers analyze clustered data (e.g., students nested within schools), the multilevel model can be…
Descriptors: Hierarchical Linear Modeling, Statistical Significance, Multivariate Analysis, Monte Carlo Methods
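
A minimal sketch of the kind of model comparison described in this record, assuming simulated student-within-school data and statsmodels' MixedLM (neither taken from Park and Beretvas, 2021): two candidate multilevel models, with and without a random slope, are fit by maximum likelihood and compared on AIC.

    # Sketch: comparing candidate multilevel models on simulated clustered data.
    # The data-generating values and model names are illustrative only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_schools, n_students = 50, 30
    school = np.repeat(np.arange(n_schools), n_students)
    u0 = rng.normal(0, 1.0, n_schools)[school]      # school random intercepts
    u1 = rng.normal(0, 0.5, n_schools)[school]      # school random slopes
    x = rng.normal(size=school.size)
    y = 2.0 + (0.8 + u1) * x + u0 + rng.normal(size=school.size)
    df = pd.DataFrame({"y": y, "x": x, "school": school})

    # Candidate 1: random intercept only; candidate 2: random intercept and slope.
    m1 = smf.mixedlm("y ~ x", df, groups=df["school"]).fit(reml=False)
    m2 = smf.mixedlm("y ~ x", df, groups=df["school"], re_formula="~x").fit(reml=False)

    for name, m, k in [("intercept only", m1, 4), ("intercept + slope", m2, 6)]:
        aic = 2 * k - 2 * m.llf                     # AIC from the ML log-likelihood
        print(f"{name}: logLik={m.llf:.1f}, AIC={aic:.1f}")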

Olejnik, Stephen; Mills, Jamie; Keselman, Harvey – Journal of Experimental Education, 2000
Evaluated the use of Mallows' C(p) and Wherry's adjusted R squared (R. Wherry, 1931) statistics to select a final model from a pool of model solutions using computer-generated data. Neither statistic identified the underlying regression model any better than, and usually less well than, the stepwise selection method, which itself was poor for…
Descriptors: Computer Simulation, Models, Regression (Statistics), Selection
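
For reference, the two selection statistics can be computed directly. A small numpy sketch follows, with an invented data-generating model and all-subsets candidates (not the study's simulation design).

    # Sketch: Mallows' C(p) and adjusted R-squared for candidate regression models.
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(0)
    n = 100
    X = rng.normal(size=(n, 4))
    y = 1.0 + 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(size=n)  # true model: x1, x2

    def sse(cols):
        Z = np.column_stack([np.ones(n), X[:, cols]])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        return np.sum((y - Z @ beta) ** 2)

    s2_full = sse((0, 1, 2, 3)) / (n - 5)            # residual variance of full model
    sst = np.sum((y - y.mean()) ** 2)

    for k in range(1, 5):
        for cols in combinations(range(4), k):
            p = k + 1                                 # parameters incl. intercept
            cp = sse(cols) / s2_full - n + 2 * p      # Mallows' C(p)
            adj_r2 = 1 - (sse(cols) / (n - p)) / (sst / (n - 1))
            print(cols, round(cp, 2), round(adj_r2, 3))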

Meshbane, Alice; Morris, John D. – Journal of Experimental Education, 1995
A method for comparing the cross-validated classification accuracies of linear and quadratic classification rules is presented under varying data conditions for the "k"-group classification problem. Separate-group and total-group proportions of correct classifications can be compared for the two rules, as is illustrated. (Author/SLD)
Descriptors: Classification, Comparative Analysis, Discriminant Analysis, Equations (Mathematics)
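
A brief scikit-learn sketch of comparing cross-validated hit rates for linear and quadratic rules on made-up two-group data; it illustrates the quantities being compared, not the authors' significance test for the difference between the two rules.

    # Sketch: cross-validated classification accuracy of linear vs. quadratic rules.
    import numpy as np
    from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                               QuadraticDiscriminantAnalysis)
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    # Two groups with unequal covariance matrices, a condition favoring the quadratic rule.
    g1 = rng.multivariate_normal([0, 0], [[1, 0.2], [0.2, 1]], size=150)
    g2 = rng.multivariate_normal([1, 1], [[3, -0.8], [-0.8, 2]], size=150)
    X = np.vstack([g1, g2])
    y = np.repeat([0, 1], 150)

    for name, rule in [("linear", LinearDiscriminantAnalysis()),
                       ("quadratic", QuadraticDiscriminantAnalysis())]:
        acc = cross_val_score(rule, X, y, cv=10)      # 10-fold cross-validated hit rate
        print(f"{name}: mean hit rate = {acc.mean():.3f}")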

Kromrey, Jeffrey D.; La Rocca, Michela A. – Journal of Experimental Education, 1995
The Type I error rates and statistical power of nine selected multiple comparison procedures were compared in a Monte Carlo study. The Peritz, Ryan, and Fisher-Hayter tests were the most powerful, and differences among these procedures were consistently small. Choosing among these procedures might be based on their computational complexity. (SLD)
Descriptors: Comparative Analysis, Computation, Monte Carlo Methods, Power (Statistics)
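
A generic Monte Carlo harness for estimating familywise Type I error is sketched below; Bonferroni-adjusted pairwise t tests stand in for the Peritz, Ryan, and Fisher-Hayter procedures, which have no SciPy implementation.

    # Sketch: Monte Carlo estimate of familywise Type I error for a
    # multiple-comparison procedure (Bonferroni stand-in; illustrative settings).
    import numpy as np
    from itertools import combinations
    from scipy import stats

    rng = np.random.default_rng(3)
    k, n, alpha, reps = 4, 20, 0.05, 2000
    n_pairs = k * (k - 1) // 2
    false_rejections = 0

    for _ in range(reps):
        groups = [rng.normal(0, 1, n) for _ in range(k)]   # all population means equal
        pvals = [stats.ttest_ind(groups[i], groups[j]).pvalue
                 for i, j in combinations(range(k), 2)]
        if min(pvals) < alpha / n_pairs:                   # any false rejection?
            false_rejections += 1

    print("estimated familywise Type I error:", false_rejections / reps)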

Leung, Shing On; Sachs, John – Journal of Experimental Education, 2005
Quite often in data reduction, it is more meaningful and economical to select a subset of variables instead of reducing the dimensionality of the variable space with principal components analysis. The authors present a neglected method for variable selection called the BI-method (R. P. Bhargava & T. Ishizuka, 1981). It is a direct, simple method…
Descriptors: Statistical Analysis, Statistical Data, Selection, Psychological Studies
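
The BI-method itself is not reproduced here; the numpy sketch below only illustrates the general idea of keeping a subset of the original variables (chosen greedily by variance explained) rather than replacing them with principal components.

    # Sketch: greedy variable-subset selection by variance explained.
    # Illustration only; NOT the BI-method of Bhargava and Ishizuka (1981).
    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.normal(size=(200, 6))
    X[:, 3] = X[:, 0] + 0.1 * rng.normal(size=200)      # make some variables redundant
    X[:, 4] = X[:, 1] + 0.1 * rng.normal(size=200)
    Xc = X - X.mean(axis=0)

    def variance_explained(subset):
        Z = Xc[:, subset]
        proj = Z @ np.linalg.lstsq(Z, Xc, rcond=None)[0]  # project all variables onto subset
        return np.sum(proj ** 2) / np.sum(Xc ** 2)

    selected = []
    while len(selected) < 3:
        best = max((j for j in range(6) if j not in selected),
                   key=lambda j: variance_explained(selected + [j]))
        selected.append(best)
        print("selected:", selected,
              "variance explained:", round(variance_explained(selected), 3))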

Gierl, Mark J.; Henderson, Diane; Jodoin, Michael; Klinger, Don – Journal of Experimental Education, 2001
Examined the influence of item parameter estimation errors across three item selection methods using the two- and three-parameter logistic item response theory (IRT) model. Tests created with the maximum no target and maximum target item selection procedures consistently overestimated the test information function. Tests created using the theta…
Descriptors: Estimation (Mathematics), Item Response Theory, Selection, Test Construction
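
A numpy sketch of the underlying mechanics, with invented 3PL item parameters: items are selected by maximum information at a target theta using error-perturbed parameter estimates, and the estimated test information of the selected set is compared with its true value.

    # Sketch: 3PL item information and maximum-information item selection.
    # Item parameters and error magnitudes are made up for illustration.
    import numpy as np

    rng = np.random.default_rng(5)
    n_items = 100
    a = rng.lognormal(0, 0.3, n_items)          # discrimination
    b = rng.normal(0, 1, n_items)               # difficulty
    c = rng.uniform(0.1, 0.25, n_items)         # pseudo-guessing

    def info(theta, a, b, c):
        p = c + (1 - c) / (1 + np.exp(-a * (theta - b)))
        return a**2 * ((p - c) / (1 - c))**2 * (1 - p) / p   # 3PL item information

    theta0 = 0.0
    a_hat = a + rng.normal(0, 0.15, n_items)    # estimation error in parameters
    b_hat = b + rng.normal(0, 0.20, n_items)

    # Select 20 items by maximum *estimated* information, then compare the
    # estimated and true test information of the selected set.
    pick = np.argsort(info(theta0, a_hat, b_hat, c))[-20:]
    print("estimated test information:", info(theta0, a_hat[pick], b_hat[pick], c[pick]).sum())
    print("true test information:     ", info(theta0, a[pick], b[pick], c[pick]).sum())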

Marin-Martinez, Fulgencio; Sanchez-Meca, Julio – Journal of Experimental Education, 1998
Used Monte Carlo simulations to compare Type I error rates and the statistical power of three tests in detecting the effects of a dichotomous moderator variable in meta-analysis. The highest statistical power was shown by the Zhs test proposed by J. Hunter and F. Schmidt (1990). Discusses criteria for selecting among the three tests. (SLD)
Descriptors: Comparative Analysis, Criteria, Meta Analysis, Monte Carlo Methods
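
A Monte Carlo sketch of a between-groups heterogeneity (Q_B) test for a dichotomous moderator follows, with invented simulation settings; it shows the general form of such a comparison and does not reproduce the Zhs test.

    # Sketch: empirical Type I error of a Q_B test for a dichotomous moderator.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    reps, k_per_group, delta = 2000, 10, 0.5          # same true effect in both subgroups
    rejections = 0

    for _ in range(reps):
        n = rng.integers(20, 60, size=2 * k_per_group)     # per-arm sample size in each study
        v = 2 / n + delta**2 / (4 * n)                     # approx. variance of d
        d = rng.normal(delta, np.sqrt(v))                  # observed effect sizes
        w = 1 / v
        group = np.repeat([0, 1], k_per_group)
        dbar_all = np.sum(w * d) / np.sum(w)
        qb = sum(np.sum(w[group == g]) *
                 (np.sum(w[group == g] * d[group == g]) / np.sum(w[group == g]) - dbar_all) ** 2
                 for g in (0, 1))
        if qb > stats.chi2.ppf(0.95, df=1):                # 1 df for two subgroups
            rejections += 1

    print("empirical Type I error:", rejections / reps)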

Krishnan, K. S.; Clelland, R. C. – Journal of Experimental Education, 1973
This study's main purpose was to determine whether standard predictors of college success might perform more satisfactorily than usual if a two-valued criterion based on dropouts was employed. (Author)
Descriptors: Admission (School), Admission Criteria, College Freshmen, Discriminant Analysis

Penfield, Douglas A. – Journal of Experimental Education, 1994
Type I error rate and power for the t test, the Wilcoxon-Mann-Whitney test, the van der Waerden normal scores test, and the Welch-Aspin-Satterthwaite (W) test are compared for two simulated independent random samples from nonnormal distributions. Conditions under which the t test and W test are best to use are discussed. (SLD)
Descriptors: Monte Carlo Methods, Nonparametric Statistics, Power (Statistics), Sample Size
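
A SciPy sketch of the Type I error part of such a comparison, using exponential (skewed) null samples; the van der Waerden normal scores test is omitted because SciPy has no built-in version.

    # Sketch: Monte Carlo Type I error of Student t, Welch t, and Wilcoxon-Mann-Whitney
    # tests for two samples drawn from the same skewed distribution.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    reps, n1, n2, alpha = 5000, 10, 30, 0.05
    rej = {"t": 0, "Welch": 0, "WMW": 0}

    for _ in range(reps):
        a = rng.exponential(1.0, n1)                   # same null distribution for both
        b = rng.exponential(1.0, n2)
        rej["t"] += int(stats.ttest_ind(a, b).pvalue < alpha)
        rej["Welch"] += int(stats.ttest_ind(a, b, equal_var=False).pvalue < alpha)
        rej["WMW"] += int(stats.mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha)

    for name, count in rej.items():
        print(f"{name}: empirical Type I error = {count / reps:.3f}")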