Showing all 10 results
Peer reviewed
Kim, Seonghoon; Feldt, Leonard S. – Journal of Educational Measurement, 2008
This article extends the Bonett (2003a) approach to testing the equality of alpha coefficients from two independent samples to the case of m ≥ 2 independent samples. The extended Fisher-Bonett test and its competitor, the Hakstian-Whalen (1976) test, are illustrated with numerical examples of both hypothesis testing and power…
Descriptors: Tests, Comparative Analysis, Hypothesis Testing, Error of Measurement
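For readers who want a concrete sense of this kind of test, the sketch below computes Cronbach's alpha in each of m independent samples and forms a chi-square heterogeneity statistic on the log-transformed complements ln(1 - alpha_hat). It is a minimal illustration in the spirit of the Fisher-Bonett approach, not the exact Kim-Feldt statistic; in particular, the variance approximation 2k / [(k - 1)(n - 2)] and the precision-weighted pooling are assumptions made here for illustration.

"""Sketch: testing equality of Cronbach's alpha across m independent samples.

Illustrative only -- the variance approximation for ln(1 - alpha_hat) and the
chi-square pooling are assumptions, not the exact Kim-Feldt (2008) formulation.
"""
import numpy as np
from scipy.stats import chi2


def cronbach_alpha(data):
    """Cronbach's alpha for an (n examinees x k items) score matrix."""
    data = np.asarray(data, dtype=float)
    k = data.shape[1]
    item_vars = data.var(axis=0, ddof=1).sum()
    total_var = data.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars / total_var)


def alpha_equality_test(samples):
    """Chi-square test of H0: alpha_1 = ... = alpha_m.

    Uses y_j = ln(1 - alpha_hat_j) with the assumed approximate variance
    2k / ((k - 1)(n_j - 2)).
    """
    y, w = [], []
    for s in samples:
        n, k = np.asarray(s).shape
        a = cronbach_alpha(s)
        y.append(np.log(1.0 - a))
        w.append(((k - 1.0) * (n - 2.0)) / (2.0 * k))   # 1 / approximate variance
    y, w = np.array(y), np.array(w)
    y_bar = np.sum(w * y) / np.sum(w)                   # precision-weighted mean
    stat = np.sum(w * (y - y_bar) ** 2)                 # heterogeneity statistic
    df = len(samples) - 1
    return stat, chi2.sf(stat, df)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # three samples with the same underlying structure -> expect non-significance
    groups = [rng.normal(size=(120, 1)) + rng.normal(size=(120, 8))
              for _ in range(3)]
    print(alpha_equality_test(groups))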
Peer reviewed
Parshall, Cynthia G.; Miller, Timothy R. – Journal of Educational Measurement, 1995
Exact testing was evaluated as a method for conducting Mantel-Haenszel differential item functioning (DIF) analyses with relatively small samples. A series of computer simulations found that the asymptotic Mantel-Haenszel and the exact method yielded very similar results across sample size, levels of DIF, and data sets. (SLD)
Descriptors: Comparative Analysis, Computer Simulation, Identification, Item Bias
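For reference, here is a minimal sketch of the asymptotic Mantel-Haenszel DIF statistic that the exact method is compared against, assuming examinees are matched on total test score. The continuity correction, the simulated data, and all parameter values are illustrative; the exact (conditional) test evaluated in the study would replace the chi-square approximation with an exact conditional p-value.

"""Sketch: asymptotic Mantel-Haenszel DIF statistic for one studied item."""
import numpy as np
from scipy.stats import chi2


def mantel_haenszel_dif(responses, group, strata):
    """MH chi-square and common odds ratio for a dichotomous item.

    responses : 0/1 scores on the studied item
    group     : 0 = reference, 1 = focal
    strata    : matching variable (e.g., total test score)
    """
    num_or = den_or = 0.0
    sum_a = sum_ea = sum_var = 0.0
    for s in np.unique(strata):
        m = strata == s
        a = np.sum((group[m] == 0) & (responses[m] == 1))  # reference correct
        b = np.sum((group[m] == 0) & (responses[m] == 0))  # reference incorrect
        c = np.sum((group[m] == 1) & (responses[m] == 1))  # focal correct
        d = np.sum((group[m] == 1) & (responses[m] == 0))  # focal incorrect
        n = a + b + c + d
        if n < 2:
            continue
        num_or += a * d / n
        den_or += b * c / n
        sum_a += a
        sum_ea += (a + b) * (a + c) / n                    # E(a) under H0
        sum_var += (a + b) * (c + d) * (a + c) * (b + d) / (n ** 2 * (n - 1))
    chi_sq = (abs(sum_a - sum_ea) - 0.5) ** 2 / sum_var    # continuity-corrected
    return chi_sq, chi2.sf(chi_sq, 1), num_or / den_or


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 400
    group = rng.integers(0, 2, n)
    ability = rng.normal(size=n)
    total = rng.binomial(20, 1 / (1 + np.exp(-ability)))   # crude matching score
    item = rng.binomial(1, 1 / (1 + np.exp(-(ability - 0.5 * group))))  # DIF against focal group
    print(mantel_haenszel_dif(item, group, total))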
Peer reviewed
Tate, Richard L. – Journal of Educational Measurement, 1995
Robustness of the school-level item response theoretic (IRT) model to violations of distributional assumptions was studied in a computer simulation. In situations where school-level precision might be acceptable for real school comparisons, expected a posteriori estimates of school ability were robust over a range of violations and conditions.…
Descriptors: Comparative Analysis, Computer Simulation, Estimation (Mathematics), Item Response Theory
Peer reviewed
Swaminathan, Hariharan; Rogers, H. Jane – Journal of Educational Measurement, 1990
A logistic regression model for characterizing differential item functioning (DIF) between two groups is presented. A distinction is drawn between uniform and nonuniform DIF in terms of model parameters. A statistic for testing the hypotheses of no DIF is developed, and simulation studies compare it with the Mantel-Haenszel procedure. (Author/TJH)
Descriptors: Comparative Analysis, Computer Simulation, Equations (Mathematics), Estimation (Mathematics)
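The logistic regression DIF model described here is commonly written as logit P(u = 1) = b0 + b1*theta + b2*g + b3*(theta*g), with uniform DIF corresponding to b2 ≠ 0 (b3 = 0) and nonuniform DIF to b3 ≠ 0. The sketch below illustrates the associated likelihood-ratio tests using total score (or any ability proxy) for theta and statsmodels for estimation; the simulated data and effect sizes are illustrative assumptions.

"""Sketch: logistic regression DIF tests (uniform and nonuniform)."""
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2


def logistic_dif(item, theta, group):
    """Likelihood-ratio tests comparing nested logistic models.

    Model 1: intercept + theta
    Model 2: + group            (uniform DIF)
    Model 3: + theta * group    (nonuniform DIF)
    """
    X1 = sm.add_constant(np.column_stack([theta]))
    X2 = sm.add_constant(np.column_stack([theta, group]))
    X3 = sm.add_constant(np.column_stack([theta, group, theta * group]))
    ll = [sm.Logit(item, X).fit(disp=0).llf for X in (X1, X2, X3)]
    uniform = 2 * (ll[1] - ll[0])        # 1 df
    nonuniform = 2 * (ll[2] - ll[1])     # 1 df
    return {"uniform": (uniform, chi2.sf(uniform, 1)),
            "nonuniform": (nonuniform, chi2.sf(nonuniform, 1))}


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n = 500
    group = rng.integers(0, 2, n)
    theta = rng.normal(size=n)
    # item with uniform DIF: the focal group finds it 0.6 logits harder
    p = 1 / (1 + np.exp(-(1.2 * theta - 0.6 * group)))
    item = rng.binomial(1, p)
    print(logistic_dif(item, theta, group))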
Peer reviewed
Frary, Robert B. – Journal of Educational Measurement, 1985
Responses to a sample test were simulated for examinees under free-response and multiple-choice formats. Test score sets were correlated with randomly generated sets of unit-normal measures. The extent of superiority of free response tests was sufficiently small so that other considerations might justifiably dictate format choice. (Author/DWH)
Descriptors: Comparative Analysis, Computer Simulation, Essay Tests, Guessing (Tests)
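A toy version of this kind of simulation is sketched below, assuming a simple knowledge-or-random-guessing model for the multiple-choice format; the item count, number of options, and difficulty distribution are arbitrary illustration values, not the study's design.

"""Sketch: simulated free-response vs. multiple-choice scores against a unit-normal criterion."""
import numpy as np

rng = np.random.default_rng(3)
n_examinees, n_items, n_options = 1000, 40, 4

theta = rng.normal(size=(n_examinees, 1))                 # unit-normal ability/criterion
difficulty = rng.normal(size=(1, n_items))
p_know = 1 / (1 + np.exp(-(theta - difficulty)))          # probability the answer is known

knows = rng.random((n_examinees, n_items)) < p_know
fr_score = knows.sum(axis=1)                              # free response: must know it
guesses = rng.random((n_examinees, n_items)) < 1 / n_options
mc_score = (knows | guesses).sum(axis=1)                  # multiple choice: know it or guess it

print("r(free response, criterion): ", np.corrcoef(fr_score, theta.ravel())[0, 1])
print("r(multiple choice, criterion):", np.corrcoef(mc_score, theta.ravel())[0, 1])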
Peer reviewed
De Ayala, R. J.; And Others – Journal of Educational Measurement, 1990
F. M. Lord's flexilevel, computerized adaptive testing (CAT) procedure was compared to an item-response theory-based CAT procedure that uses Bayesian ability estimation with various standard errors of estimates used for terminating the test. Ability estimates of flexilevel CATs were as accurate as were those of Bayesian CATs. (TJH)
Descriptors: Ability Identification, Adaptive Testing, Bayesian Statistics, Comparative Analysis
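A compact sketch of a Bayesian adaptive testing loop with a standard-error termination rule, of the general kind compared with Lord's flexilevel procedure, is given below. The 2PL item bank, maximum-information item selection, EAP scoring on a quadrature grid, and the SE cutoff of 0.30 are all assumptions for illustration.

"""Sketch: Bayesian (EAP) CAT with a standard-error stopping rule."""
import numpy as np

rng = np.random.default_rng(4)
nodes = np.linspace(-4, 4, 81)                      # quadrature grid for the posterior
prior = np.exp(-0.5 * nodes ** 2)
prior /= prior.sum()


def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1 / (1 + np.exp(-a * (theta - b)))


def eap_cat(true_theta, a, b, se_cut=0.30, max_items=30):
    posterior = prior.copy()
    administered = np.zeros(len(a), dtype=bool)
    for _ in range(max_items):
        est = np.sum(nodes * posterior)
        p = p_correct(est, a, b)
        info = a ** 2 * p * (1 - p)                 # 2PL information at current estimate
        info[administered] = -np.inf
        j = int(np.argmax(info))                    # maximum-information selection
        administered[j] = True
        u = rng.random() < p_correct(true_theta, a[j], b[j])   # simulated response
        like = p_correct(nodes, a[j], b[j])
        posterior *= like if u else (1 - like)
        posterior /= posterior.sum()
        est = np.sum(nodes * posterior)             # EAP ability estimate
        se = np.sqrt(np.sum((nodes - est) ** 2 * posterior))   # posterior SD
        if se < se_cut:                             # SE-based termination
            break
    return est, se, int(administered.sum())


a_bank = rng.uniform(0.8, 2.0, 200)
b_bank = rng.normal(size=200)
print(eap_cat(true_theta=0.7, a=a_bank, b=b_bank))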
Peer reviewed
Plake, Barbara S.; Kane, Michael T. – Journal of Educational Measurement, 1991
Several methods for determining a passing score on an examination from individual raters' estimates of minimal pass levels were compared through simulation. The methods used differed in the weighting estimates for each item received in the aggregation process. Reasons why the simplest procedure is most preferred are discussed. (SLD)
Descriptors: Comparative Analysis, Computer Simulation, Cutting Scores, Estimation (Mathematics)
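The simplest aggregation referred to here, an unweighted average over raters of each rater's summed minimal pass levels, can be sketched as below alongside a generic weighted variant. The agreement-based weighting shown is only a placeholder for illustration, not one of the specific methods compared in the study.

"""Sketch: aggregating raters' minimal pass levels (MPLs) into a cut score."""
import numpy as np

# mpl[r, i] = rater r's estimated minimal pass level for item i (proportion correct)
rng = np.random.default_rng(5)
n_raters, n_items = 10, 50
mpl = np.clip(rng.normal(0.6, 0.1, size=(n_raters, n_items)), 0, 1)

# simplest procedure: each rater's cut is the sum of their MPLs; average over raters
cut_simple = mpl.sum(axis=1).mean()

# generic weighted variant (placeholder): weight items by inter-rater agreement
weights = 1 / (mpl.std(axis=0, ddof=1) + 1e-6)
weights = weights / weights.sum() * n_items          # rescale to the test length
cut_weighted = (mpl * weights).sum(axis=1).mean()

print(f"simple cut score:   {cut_simple:.1f} of {n_items} items")
print(f"weighted cut score: {cut_weighted:.1f} of {n_items} items")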
Peer reviewed
Vispoel, Walter P.; And Others – Journal of Educational Measurement, 1997
Efficiency, precision, and concurrent validity of results from adaptive and fixed-item music listening tests were studied using: (1) 2,200 simulated examinees; (2) 204 live examinees; and (3) 172 live examinees. Results support the usefulness of adaptive tests for measuring skills that require aurally produced items. (SLD)
Descriptors: Adaptive Testing, Adults, College Students, Comparative Analysis
Peer reviewed
Hirsch, Thomas M. – Journal of Educational Measurement, 1989
Equatings were performed on both simulated and real data sets using common-examinee design and two abilities for each examinee. Results indicate that effective equating, as measured by comparability of true scores, is possible with the techniques used in this study. However, the stability of the ability estimates proved unsatisfactory. (TJH)
Descriptors: Academic Ability, College Students, Comparative Analysis, Computer Assisted Testing
Peer reviewed
McKinley, Robert L. – Journal of Educational Measurement, 1988
Six procedures for combining sets of item response theory (IRT) item parameter estimates from different samples were evaluated using real and simulated response data. Results support use of covariance matrix-weighted averaging and a procedure using sample-size-weighted averaging of estimated item characteristic curves at the center of the ability…
Descriptors: College Entrance Examinations, Comparative Analysis, Computer Simulation, Estimation (Mathematics)
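One of the supported procedures, sample-size-weighted averaging of estimated item characteristic curves, is sketched below for a single 2PL item. The parameter values, the theta grid, and the least-squares step used to recover a single combined (a, b) pair from the averaged curve are illustrative assumptions.

"""Sketch: sample-size-weighted averaging of 2PL item characteristic curves."""
import numpy as np
from scipy.optimize import curve_fit


def icc_2pl(theta, a, b):
    return 1 / (1 + np.exp(-a * (theta - b)))


# (a, b) estimates for the same item from three calibration samples
estimates = [(1.10, -0.20), (1.35, 0.05), (0.95, -0.10)]
sample_sizes = np.array([500, 1200, 800])

theta = np.linspace(-3, 3, 61)
curves = np.array([icc_2pl(theta, a, b) for a, b in estimates])
weights = sample_sizes / sample_sizes.sum()
avg_curve = weights @ curves                       # sample-size-weighted ICC

# recover one (a, b) pair that best reproduces the averaged curve
(a_c, b_c), _ = curve_fit(icc_2pl, theta, avg_curve, p0=(1.0, 0.0))
print(f"combined estimates: a = {a_c:.3f}, b = {b_c:.3f}")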