Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 1 |
| Since 2007 (last 20 years) | 1 |
Descriptor
| Computer Simulation | 9 |
| Test Items | 7 |
| Test Construction | 6 |
| Item Response Theory | 4 |
| Latent Trait Theory | 4 |
| Estimation (Mathematics) | 3 |
| Item Analysis | 3 |
| Item Banks | 3 |
| Sample Size | 3 |
| Test Bias | 3 |
| Error of Measurement | 2 |
Source
| Applied Measurement in… | 2 |
| Educational Measurement:… | 1 |
| Educational and Psychological… | 1 |
| Journal of Educational… | 1 |
Author
| Hambleton, Ronald K. | 9 |
| Rogers, H. Jane | 3 |
| Jones, Russell W. | 2 |
| Jodoin, Michael G. | 1 |
| Rovinelli, Richard J. | 1 |
| Yoo, Hanwook | 1 |
| Zenisky, April | 1 |
Publication Type
| Journal Articles | 5 |
| Reports - Evaluative | 5 |
| Speeches/Meeting Papers | 5 |
| Reports - Research | 3 |
| Reports - Descriptive | 1 |
Audience
| Researchers | 1 |
Assessments and Surveys
| Graduate Management Admission… | 1 |
Yoo, Hanwook; Hambleton, Ronald K. – Educational Measurement: Issues and Practice, 2019
Item analysis is an integral part of operational test development and is typically conducted within two popular statistical frameworks: classical test theory (CTT) and item response theory (IRT). In this digital ITEMS module, Hanwook Yoo and Ronald K. Hambleton provide an accessible overview of operational item analysis approaches within these…
Descriptors: Item Analysis, Item Response Theory, Guidelines, Test Construction
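The CTT side of the item-analysis approaches the module covers can be illustrated with a minimal sketch. Assuming a 0/1 response matrix (simulated here from a Rasch-style model; all parameter values are made up for illustration), the two workhorse classical statistics are the item p-value and the corrected item-total correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 examinees responding to 10 items driven by one latent trait.
theta = rng.normal(0.0, 1.0, size=(200, 1))      # abilities
b = np.linspace(-1.5, 1.5, 10)                   # assumed item difficulties
p = 1.0 / (1.0 + np.exp(-(theta - b)))           # Rasch-style success probabilities
data = (rng.random((200, 10)) < p).astype(int)   # 0/1 response matrix

# Classical difficulty: proportion of examinees answering each item correctly.
p_values = data.mean(axis=0)

# Corrected item-total correlation: each item against the total of the *other* items,
# so the item's own score does not inflate the correlation.
total = data.sum(axis=1)
item_total_r = np.array([
    np.corrcoef(data[:, j], total - data[:, j])[0, 1]
    for j in range(data.shape[1])
])
```

Operational item analysis typically flags items with extreme p-values or low item-total correlations for review before IRT calibration.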
Jodoin, Michael G.; Zenisky, April; Hambleton, Ronald K. – Applied Measurement in Education, 2006
Many credentialing agencies today are either administering their examinations by computer or are likely to be doing so in the coming years. Unfortunately, although several promising computer-based test designs are available, little is known about how well they function in examination settings. The goal of this study was to compare fixed-length…
Descriptors: Computers, Test Results, Psychometrics, Computer Simulation
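The abstract's comparison of computer-based test designs is truncated, but the adaptive side of such designs can be sketched as maximum-information item selection under a 2PL model with an EAP ability update after each response. Everything below (pool size, parameter ranges, test length, the simulated examinee) is a hypothetical illustration, not the study's actual design:

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p)."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

def select_next_item(theta_hat, a, b, administered):
    """Pick the unused item with maximum information at the current estimate."""
    info = item_information(theta_hat, a, b)
    info[list(administered)] = -np.inf           # mask items already given
    return int(np.argmax(info))

rng = np.random.default_rng(0)
a = rng.uniform(0.5, 2.0, size=50)               # hypothetical discriminations
b = rng.uniform(-2.0, 2.0, size=50)              # hypothetical difficulties

grid = np.linspace(-4, 4, 81)
prior = np.exp(-0.5 * grid**2)                   # standard-normal prior (unnormalized)

theta_true = 0.8                                 # simulated examinee
like = np.ones_like(grid)
administered = set()
theta_hat = 0.0                                  # start at the prior mean
for _ in range(10):                              # a 10-item adaptive test
    item = select_next_item(theta_hat, a, b, administered)
    administered.add(item)
    u = rng.random() < p_2pl(theta_true, a[item], b[item])  # simulate the response
    pg = p_2pl(grid, a[item], b[item])
    like *= pg if u else (1.0 - pg)
    post = like * prior
    theta_hat = float(np.sum(grid * post) / np.sum(post))   # EAP ability estimate
```

A fixed-length linear form would instead administer a preassembled set of items regardless of interim performance, which is the kind of contrast the study simulated.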
Peer reviewed: Hambleton, Ronald K.; Jones, Russell W. – Applied Measurement in Education, 1994
The impact of capitalizing on chance in item selection on the accuracy of test information functions was studied through simulation, focusing on examinee sample size in item calibration and the ratio of item bank size to test length. (SLD)
Descriptors: Computer Simulation, Estimation (Mathematics), Item Banks, Item Response Theory
Peer reviewed: Hambleton, Ronald K.; And Others – Journal of Educational Measurement, 1993
Item parameter estimation errors in test development are highlighted. The problem is illustrated with several simulated data sets, and a conservative solution is offered for addressing the problem in item response theory test development practice. Steps that reduce the problem of capitalizing on chance in item selections are suggested. (SLD)
Descriptors: Computer Simulation, Error of Measurement, Estimation (Mathematics), Item Banks
Hambleton, Ronald K.; Jones, Russell W. – 1993
Errors in item parameter estimates have a negative impact on the accuracy of item and test information functions. The estimation errors may be random, but because items with higher levels of discriminating power are more likely to be selected for a test, and these items are most apt to contain positive errors, the result is that item information…
Descriptors: Computer Simulation, Error of Measurement, Estimation (Mathematics), Item Banks
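The mechanism described in the two Hambleton and Jones entries above can be reproduced in a few lines: add random calibration error to true discriminations, select the items that look most discriminating, and compare the test information implied by estimated versus true parameters. The model (2PL), parameter ranges, and sample sizes below are illustrative assumptions, not the studies' actual conditions:

```python
import numpy as np

def info_at(theta, a, b):
    """2PL item information a^2 * p * (1 - p) at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

rng = np.random.default_rng(1)
n_items, n_select, n_reps = 300, 20, 200
biases = []
for _ in range(n_reps):
    a_true = rng.uniform(0.5, 2.0, n_items)          # true discriminations
    b_true = rng.uniform(-2.0, 2.0, n_items)         # true difficulties
    a_hat = a_true + rng.normal(0.0, 0.3, n_items)   # calibration error in a
    picked = np.argsort(a_hat)[-n_select:]           # pick the most "discriminating" items
    est_info = info_at(0.0, a_hat[picked], b_true[picked]).sum()
    true_info = info_at(0.0, a_true[picked], b_true[picked]).sum()
    biases.append(est_info - true_info)

mean_bias = float(np.mean(biases))
```

Because selection favors items whose estimated discriminations err upward, the errors among selected items skew positive and `mean_bias` comes out positive: the test information function is overestimated, exactly the capitalization-on-chance effect these papers document.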
Hambleton, Ronald K.; Rovinelli, Richard J. – 1986
Four methods for determining the dimensionality of a set of test items were compared: (1) linear factor analysis; (2) residual analysis; (3) nonlinear factor analysis; and (4) Bejar's method. Five artificial test data sets (for 40 items and 1500 examinees) were generated, consistent with the three-parameter logistic model and the assumption of…
Descriptors: Comparative Analysis, Computer Simulation, Correlation, Factor Analysis
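The data-generation step this entry describes (40 items, 1,500 examinees, responses consistent with the three-parameter logistic model and a single underlying trait) can be sketched as follows; the parameter ranges are assumptions for illustration, not the values used in the study:

```python
import numpy as np

def simulate_3pl(n_persons=1500, n_items=40, seed=0):
    """Generate a 0/1 response matrix from a unidimensional 3PL model."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(0.0, 1.0, n_persons)       # abilities (one dimension)
    a = rng.uniform(0.5, 2.0, n_items)            # discrimination
    b = rng.uniform(-2.0, 2.0, n_items)           # difficulty
    c = rng.uniform(0.1, 0.25, n_items)           # pseudo-guessing lower asymptote
    z = a * (theta[:, None] - b)                  # (persons, items) logits
    p = c + (1.0 - c) / (1.0 + np.exp(-z))        # 3PL success probability
    return (rng.random((n_persons, n_items)) < p).astype(int)

data = simulate_3pl()
```

Matrices like this one would then be fed to each of the four dimensionality procedures (linear factor analysis, residual analysis, nonlinear factor analysis, Bejar's method) to see which recovers the known unidimensional structure.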
Rogers, H. Jane; Hambleton, Ronald K. – 1987
Although item bias statistics are widely recommended for use in test development and test analysis work, problems arise in their interpretation. The purpose of the present research was to evaluate the validity of logistic test models and computer simulation methods for providing a frame of reference for item bias statistic interpretations.…
Descriptors: Computer Simulation, Evaluation Methods, Item Analysis, Latent Trait Theory
Rogers, H. Jane; Hambleton, Ronald K. – 1987
Though item bias statistics are widely recommended for use in test development and analysis, problems arise in their interpretation. This research evaluates logistic test models and computer simulation methods for providing a frame of reference for interpreting item bias statistics. Specifically, the intent was to produce simulated sampling…
Descriptors: Computer Simulation, Cutting Scores, Grade 9, Latent Trait Theory
Peer reviewed: Rogers, H. Jane; Hambleton, Ronald K. – Educational and Psychological Measurement, 1989
The validity of logistic test models and computer simulation methods for generating sampling distributions of item bias statistics was evaluated under the hypothesis of no item bias. Test data from 937 ninth-grade students were used to develop 7 steps for applying computer-simulated baseline statistics in test development. (SLD)
Descriptors: Computer Simulation, Educational Research, Evaluation Methods, Grade 9
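The general approach running through the Rogers and Hambleton entries, simulating a no-bias baseline and reading flagging cutoffs off the empirical sampling distribution, can be sketched with a deliberately simplified (unconditioned) standardized p-difference statistic. The statistic and all sample sizes below are assumptions for illustration, not the authors' procedure:

```python
import numpy as np

def simulate_null_stat(n_per_group=500, a=1.2, b=0.0, n_reps=1000, seed=2):
    """Empirical null distribution of a standardized p-value difference for one
    2PL item when both groups share the same item parameters (i.e., no bias)."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_reps):
        th_ref = rng.normal(0.0, 1.0, n_per_group)   # reference-group abilities
        th_foc = rng.normal(0.0, 1.0, n_per_group)   # focal-group abilities
        p_ref = 1.0 / (1.0 + np.exp(-a * (th_ref - b)))
        p_foc = 1.0 / (1.0 + np.exp(-a * (th_foc - b)))
        x_ref = (rng.random(n_per_group) < p_ref).mean()  # observed proportion correct
        x_foc = (rng.random(n_per_group) < p_foc).mean()
        se = np.sqrt(x_ref * (1 - x_ref) / n_per_group
                     + x_foc * (1 - x_foc) / n_per_group)
        stats.append((x_ref - x_foc) / se)
    return np.array(stats)

null_stats = simulate_null_stat()
cutoff = float(np.quantile(np.abs(null_stats), 0.95))  # baseline for flagging items
```

An observed statistic beyond `cutoff` would then be interpreted against this simulated frame of reference rather than against an assumed theoretical distribution.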
