Showing all 9 results
Peer reviewed
Direct link
Yoo, Hanwook; Hambleton, Ronald K. – Educational Measurement: Issues and Practice, 2019
Item analysis is an integral part of operational test development and is typically conducted within two popular statistical frameworks: classical test theory (CTT) and item response theory (IRT). In this digital ITEMS module, Hanwook Yoo and Ronald K. Hambleton provide an accessible overview of operational item analysis approaches within these…
Descriptors: Item Analysis, Item Response Theory, Guidelines, Test Construction
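The CTT side of the item analysis the module covers can be sketched with two standard statistics: item difficulty (proportion correct) and the corrected point-biserial discrimination. The data and function name below are illustrative assumptions, not taken from the module itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scored response matrix: 200 examinees x 10 items (1 = correct).
scores = (rng.random((200, 10)) < np.linspace(0.3, 0.9, 10)).astype(int)

def classical_item_stats(scores):
    """Return CTT item difficulty (p) and corrected point-biserial discrimination."""
    n_items = scores.shape[1]
    p = scores.mean(axis=0)                 # proportion correct per item
    total = scores.sum(axis=1)
    r_pb = np.empty(n_items)
    for j in range(n_items):
        rest = total - scores[:, j]         # total score excluding item j
        r_pb[j] = np.corrcoef(scores[:, j], rest)[0, 1]
    return p, r_pb

p, r_pb = classical_item_stats(scores)
```

The "corrected" discrimination correlates each item with the rest-score rather than the total, so the item does not correlate with itself.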
Peer reviewed
Direct link
Jodoin, Michael G.; Zenisky, April; Hambleton, Ronald K. – Applied Measurement in Education, 2006
Many credentialing agencies today are either administering their examinations by computer or are likely to be doing so in the coming years. Unfortunately, although several promising computer-based test designs are available, little is known about how well they function in examination settings. The goal of this study was to compare fixed-length…
Descriptors: Computers, Test Results, Psychometrics, Computer Simulation
Peer reviewed
Hambleton, Ronald K.; Jones, Russell W. – Applied Measurement in Education, 1994
The impact of capitalizing on chance in item selection on the accuracy of test information functions was studied through simulation, focusing on examinee sample size in item calibration and the ratio of item bank size to test length. (SLD)
Descriptors: Computer Simulation, Estimation (Mathematics), Item Banks, Item Response Theory
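The capitalization-on-chance effect this study examines can be sketched in a few lines, assuming a 2PL model and hypothetical bank sizes and error magnitudes (none of these values come from the paper): selecting items by their *estimated* information inflates the apparent test information relative to the truth, because the selection favors items whose discriminations were overestimated.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical item bank: 200 items, true 2PL discriminations around 1.0.
n_bank, test_len = 200, 20
a_true = rng.uniform(0.5, 1.5, n_bank)
b_true = rng.uniform(-2, 2, n_bank)
a_est = a_true + rng.normal(0, 0.2, n_bank)   # random calibration error in a

def info_2pl(a, b, theta=0.0):
    """2PL item information at theta: a^2 * P * (1 - P)."""
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

# Select the items with the highest *estimated* information at theta = 0.
pick = np.argsort(info_2pl(a_est, b_true))[-test_len:]

est_info = info_2pl(a_est[pick], b_true[pick]).sum()    # what the test developer sees
true_info = info_2pl(a_true[pick], b_true[pick]).sum()  # what the test actually delivers
```

Here `est_info` exceeds `true_info` even though the calibration errors are mean-zero, which is exactly the bias the abstract describes.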
Peer reviewed
Hambleton, Ronald K.; And Others – Journal of Educational Measurement, 1993
Item parameter estimation errors in test development are highlighted. The problem is illustrated with several simulated data sets, and a conservative solution is offered for addressing the problem in item response theory test development practice. Steps that reduce the problem of capitalizing on chance in item selections are suggested. (SLD)
Descriptors: Computer Simulation, Error of Measurement, Estimation (Mathematics), Item Banks
Hambleton, Ronald K.; Jones, Russell W. – 1993
Errors in item parameter estimates have a negative impact on the accuracy of item and test information functions. The estimation errors may be random, but because items with higher levels of discriminating power are more likely to be selected for a test, and these items are most apt to contain positive errors, the result is that item information…
Descriptors: Computer Simulation, Error of Measurement, Estimation (Mathematics), Item Banks
Hambleton, Ronald K.; Rovinelli, Richard J. – 1986
Four methods for determining the dimensionality of a set of test items were compared: (1) linear factor analysis; (2) residual analysis; (3) nonlinear factor analysis; and (4) Bejar's method. Five artificial test data sets (for 40 items and 1500 examinees) were generated, consistent with the three-parameter logistic model and the assumption of…
Descriptors: Comparative Analysis, Computer Simulation, Correlation, Factor Analysis
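Generating data "consistent with the three-parameter logistic model," as this study did, can be sketched as follows. Only the 40-item by 1,500-examinee scale comes from the abstract; the parameter ranges are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Scale from the abstract: 40 items, 1,500 examinees. Ranges are assumed.
n_items, n_examinees = 40, 1500
a = rng.uniform(0.5, 2.0, n_items)      # discrimination
b = rng.uniform(-2.0, 2.0, n_items)     # difficulty
c = rng.uniform(0.0, 0.25, n_items)     # pseudo-guessing
theta = rng.normal(0, 1, n_examinees)   # unidimensional ability

def p_3pl(theta, a, b, c):
    """3PL probability of a correct response, D = 1.7 scaling."""
    return c + (1 - c) / (1 + np.exp(-1.7 * a * (theta[:, None] - b)))

prob = p_3pl(theta, a, b, c)
responses = (rng.random(prob.shape) < prob).astype(int)
```

A single normal ability dimension enforces the unidimensionality assumption that the four methods in the study are meant to detect.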
Rogers, H. Jane; Hambleton, Ronald K. – 1987
Although item bias statistics are widely recommended for use in test development and test analysis work, problems arise in their interpretation. The purpose of the present research was to evaluate the validity of logistic test models and computer simulation methods for providing a frame of reference for item bias statistic interpretations.…
Descriptors: Computer Simulation, Evaluation Methods, Item Analysis, Latent Trait Theory
Rogers, H. Jane; Hambleton, Ronald K. – 1987
Though item bias statistics are widely recommended for use in test development and analysis, problems arise in their interpretation. This research evaluates logistic test models and computer simulation methods for providing a frame of reference for interpreting item bias statistics. Specifically, the intent was to produce simulated sampling…
Descriptors: Computer Simulation, Cutting Scores, Grade 9, Latent Trait Theory
Peer reviewed
Rogers, H. Jane; Hambleton, Ronald K. – Educational and Psychological Measurement, 1989
The validity of logistic test models and computer simulation methods for generating sampling distributions of item bias statistics was evaluated under the hypothesis of no item bias. Test data from 937 ninth-grade students were used to develop 7 steps for applying computer-simulated baseline statistics in test development. (SLD)
Descriptors: Computer Simulation, Educational Research, Evaluation Methods, Grade 9
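The general idea of computer-simulated baseline statistics (the seven specific steps are not given in the abstract) can be sketched by building a null sampling distribution for a simple group-difference statistic under a no-bias model; all model values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# One item, two groups drawn from the SAME 2PL model, so any observed
# group difference is pure sampling error (the no-bias hypothesis).
a, b = 1.0, 0.0
n_per_group, n_reps = 500, 1000

def null_p_diffs():
    """Simulated sampling distribution of the group difference in p-values."""
    diffs = []
    for _ in range(n_reps):
        theta1 = rng.normal(0, 1, n_per_group)
        theta2 = rng.normal(0, 1, n_per_group)
        p1 = 1 / (1 + np.exp(-a * (theta1 - b)))
        p2 = 1 / (1 + np.exp(-a * (theta2 - b)))
        x1 = (rng.random(n_per_group) < p1).mean()
        x2 = (rng.random(n_per_group) < p2).mean()
        diffs.append(x1 - x2)
    return np.array(diffs)

null_dist = null_p_diffs()
# Percentiles of the simulated null distribution provide the frame of
# reference: observed statistics beyond the cutoff are flagged as suspect.
cutoff = np.quantile(np.abs(null_dist), 0.95)
```

The statistic here (raw difference in proportions) is a stand-in for whichever item bias statistic is being baselined; the baselining logic is the same.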