Showing 1 to 15 of 43 results
van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R. – 1999
Item scores that do not fit an assumed item response theory model may cause the latent trait value to be estimated inaccurately. Several person-fit statistics for detecting nonfitting score patterns for paper-and-pencil tests have been proposed. In the context of computerized adaptive tests (CAT), the use of person-fit analysis has hardly been…
Descriptors: Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics), Item Response Theory
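The snippet above names person-fit statistics without showing one. A common example from the paper-and-pencil literature is the standardized log-likelihood index l_z; the sketch below (not necessarily the statistic studied by the authors) computes it under a 2PL model with hypothetical item parameters.

```python
import numpy as np

def lz_person_fit(responses, a, b, theta):
    """Standardized log-likelihood person-fit statistic (l_z) under a 2PL model.

    responses : 0/1 item scores; a, b : item discriminations and difficulties;
    theta : the examinee's ability estimate. Large negative values flag
    response patterns that fit the model poorly.
    """
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))      # 2PL response probabilities
    l0 = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    variance = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (l0 - expected) / np.sqrt(variance)

# Hypothetical illustration: a 5-item pattern with one unexpected response
a = np.array([1.0, 1.2, 0.8, 1.5, 1.1])
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
print(lz_person_fit(np.array([1, 1, 1, 0, 1]), a, b, theta=0.3))
```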
Huang, Chi-Yu; Kalohn, John C.; Lin, Chuan-Ju; Spray, Judith – 2000
Item pools supporting computer-based tests are not always completely calibrated. Occasionally, only a small subset of the items in the pool may have actual calibrations, while the remainder of the items may only have classical item statistics (e.g., "p"-values, point-biserial correlation coefficients, or biserial correlation…
Descriptors: Classification, Computer Assisted Testing, Estimation (Mathematics), Item Banks
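For readers unfamiliar with the classical item statistics mentioned above, this sketch computes "p"-values (proportions correct) and point-biserial correlations from a small, invented response matrix.

```python
import numpy as np

def point_biserial(item_scores, total_scores):
    """Point-biserial: Pearson correlation between a 0/1 item score and the
    examinees' total test scores (a classical index of item discrimination)."""
    return np.corrcoef(item_scores, total_scores)[0, 1]

# Hypothetical response matrix: 6 examinees x 4 items
u = np.array([[1, 0, 1, 0],
              [1, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 1, 1, 1],
              [0, 0, 0, 0],
              [1, 1, 0, 1]])
totals = u.sum(axis=1)
print(u.mean(axis=0))                                # classical p-values
print([round(point_biserial(u[:, j], totals), 2) for j in range(u.shape[1])])
```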
Raiche, Gilles; Blais, Jean-Guy – 2002
In a computerized adaptive test (CAT), it would be desirable to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Decreasing the number of items is accompanied, however, by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. G. Raiche (2000) has…
Descriptors: Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics), Item Response Theory
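The corrections Raiche proposes are not spelled out in the snippet, but the bias they target is easy to see in a plain EAP (expected a posteriori) estimator, which shrinks toward the prior mean when only a few items have been administered. A minimal sketch under a 2PL model with hypothetical parameters:

```python
import numpy as np

def eap_theta(responses, a, b, prior_mean=0.0, prior_sd=1.0):
    """EAP ability estimate under a 2PL model on a quadrature grid.

    With only a few items the posterior is dominated by the normal prior, so
    the estimate is pulled toward prior_mean -- the bias discussed above."""
    grid = np.linspace(-4.0, 4.0, 81)
    prior = np.exp(-0.5 * ((grid - prior_mean) / prior_sd) ** 2)
    p = 1.0 / (1.0 + np.exp(-a * (grid[:, None] - b)))     # grid x items
    like = np.prod(np.where(responses == 1, p, 1 - p), axis=1)
    posterior = like * prior
    return np.sum(grid * posterior) / np.sum(posterior)

# Hypothetical: three correct answers still yield a modest estimate because
# the prior dominates after so few items.
a = np.array([1.2, 1.0, 1.4])
b = np.array([0.0, 0.5, 1.0])
print(eap_theta(np.array([1, 1, 1]), a, b))
```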
Peer reviewed
Folk, Valerie Greaud; Green, Bert F. – Applied Psychological Measurement, 1989
Some effects of using unidimensional item response theory (IRT) were examined when the assumption of unidimensionality was violated. Adaptive and nonadaptive tests were used. It appears that use of a unidimensional model can bias parameter estimation, adaptive item selection, and ability estimation for the two types of testing. (TJH)
Descriptors: Ability Identification, Adaptive Testing, Computer Assisted Testing, Computer Simulation
Bergstrom, Betty A.; Lunz, Mary E. – 1991
The equivalence of paper-and-pencil Rasch item calibrations when used in a computer adaptive test administration was explored in this study. Items (n=726) were precalibrated from the paper-and-pencil test administrations. A computer adaptive test was administered to 321 medical technology students using the paper-and-pencil precalibrations in the…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
van der Linden, Wim J. – 1999
A constrained computerized adaptive testing (CAT) algorithm is presented that automatically equates the number-correct scores on adaptive tests. The algorithm can be used to equate number-correct scores across different administrations of the same adaptive test as well as to an external reference test. The constraints are derived from a set of…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
Peer reviewed
Sykes, Robert C.; Ito, Kyoko – Applied Psychological Measurement, 1997
Evaluated the equivalence of scores and one-parameter logistic model item difficulty estimates obtained from computer-based and paper-and-pencil forms of a licensure examination taken by 418 examinees. There was no effect of either order or mode of administration on the equivalences. (SLD)
Descriptors: Computer Assisted Testing, Estimation (Mathematics), Health Personnel, Item Response Theory
Peer reviewed
van der Linden, Wim J. – Applied Psychological Measurement, 1999
Proposes a procedure for empirical initialization of the trait (theta) estimator in adaptive testing that is based on the statistical relation between theta and background variables known prior to test administration. Illustrates the procedure for an adaptive version of a test from the Dutch General Aptitude Battery. (SLD)
Descriptors: Adaptive Testing, Aptitude Tests, Bayesian Statistics, Computer Assisted Testing
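The statistical relation used for initialization is not detailed in the snippet; one simple way to picture it is a least-squares regression of theta on the background variables, sketched below with invented data and variable names.

```python
import numpy as np

# Hypothetical calibration data: previously estimated abilities (theta) and
# background variables known before testing (e.g., schooling, prior test score).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                        # background variables
theta = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=500)

# Fit the regression of theta on the background variables (least squares).
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, theta, rcond=None)

# Initialize a new examinee's theta from their background variables instead
# of starting every adaptive test at theta = 0.
new_examinee = np.array([1.0, 0.8, -0.2])            # intercept term + 2 variables
theta_init = new_examinee @ coef
print(theta_init)
```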
Zhu, Renbang; Yu, Feng; Liu, Su – 2002
A computerized adaptive test (CAT) administration usually requires a large supply of items with accurately estimated psychometric properties, such as item response theory (IRT) parameter estimates, to ensure the precision of examinee ability estimation. However, an estimated IRT model of a given item in any given pool does not always correctly…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
Peer reviewed
Hetter, Rebecca D.; And Others – Applied Psychological Measurement, 1994
Effects on computerized adaptive test score of using a paper-and-pencil (P&P) calibration to select items and estimate scores were compared with effects of using computer calibration. Results with 2,999 Navy recruits support the use of item parameters calibrated from either P&P or computer administrations. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Estimation (Mathematics)
Kim, Seock-Ho; Cohen, Allan S. – 1997
Applications of item response theory to practical testing problems including equating, differential item functioning, and computerized adaptive testing, require that item parameter estimates be placed onto a common metric. In this study, two methods for developing a common metric for the graded response model under item response theory were…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Equated Scores
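The two linking methods compared in the study are not identified in the snippet. As background, the simplest way to place difficulty (threshold) estimates from two calibrations on a common metric is a mean/sigma transformation, sketched here generically rather than as the study's own procedure.

```python
import numpy as np

def mean_sigma_link(b_source, b_target):
    """Mean/sigma linking constants A, B such that b_target ~= A * b_source + B.

    b_source, b_target : difficulty (or threshold) estimates for the same
    anchor items from two separate calibrations; theta transforms the same way.
    """
    A = np.std(b_target) / np.std(b_source)
    B = np.mean(b_target) - A * np.mean(b_source)
    return A, B

# Hypothetical anchor-item thresholds from two calibration runs
b_run1 = np.array([-1.2, -0.4, 0.1, 0.9, 1.6])
b_run2 = 0.9 * b_run1 + 0.3                          # same items on a shifted metric
A, B = mean_sigma_link(b_run1, b_run2)
print(A, B)                                          # recovers A = 0.9, B = 0.3
```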
Peer reviewed
Samejima, Fumiko – Psychometrika, 1994
Using the constant information model, constant amounts of test information, and a finite interval of ability, simulated data were produced for 8 ability levels and 20 numbers of test items. Analyses suggest that it is desirable to consider modifying test information functions when they measure accuracy in ability estimation. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Computer Simulation
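As background on what a test information function measures (the standard 2PL form, not Samejima's constant information model), the sketch below sums item informations a_i^2 * P_i * (1 - P_i) at several ability levels, using hypothetical parameters.

```python
import numpy as np

def test_information(theta, a, b):
    """Test information at theta under a 2PL model: the sum of item
    informations a_i^2 * P_i * (1 - P_i). Its inverse square root approximates
    the standard error of the ability estimate at that theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return np.sum(a ** 2 * p * (1 - p))

# Hypothetical 4-item test evaluated at a grid of ability levels
a = np.array([1.0, 1.3, 0.9, 1.6])
b = np.array([-1.0, 0.0, 0.5, 1.2])
for theta in (-2, -1, 0, 1, 2):
    print(theta, round(test_information(theta, a, b), 3))
```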
De Ayala, R. J. – 1995
This study extended item parameter recovery studies in item response theory to the nominal response model (NRM). The NRM may be used with computerized adaptive testing, testlets, demographic items, and items whose alternatives provide educational diagnostic information. Moreover, with the increasing popularity of performance-based assessment, the…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Educational Diagnosis
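The NRM is only named in the snippet; its category response probabilities follow Bock's multinomial logit form, sketched below with hypothetical slopes and intercepts.

```python
import numpy as np

def nrm_probabilities(theta, slopes, intercepts):
    """Nominal response model: P(category k | theta) is a multinomial logit
    with category-specific slope a_k and intercept c_k."""
    z = slopes * theta + intercepts
    ez = np.exp(z - z.max())            # subtract max for numerical stability
    return ez / ez.sum()

# Hypothetical 4-category item
slopes = np.array([0.0, 0.5, 1.0, 1.5])
intercepts = np.array([0.0, 0.3, 0.1, -0.4])
print(nrm_probabilities(theta=0.8, slopes=slopes, intercepts=intercepts))
```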
Samejima, Fumiko – 1990
A method is proposed that increases the accuracies of estimation of the operating characteristics of discrete item responses, especially when the true operating characteristic is represented by a steep curve, and also at the lower and upper ends of the ability distribution where the estimation tends to be inaccurate because of the smaller number…
Descriptors: Ability Identification, Adaptive Testing, Computer Assisted Testing, Equations (Mathematics)
Wang, Xiang-Bo; Harris, Vincent; Roussos, Louis – 2002
Multidimensionality is known to affect the accuracy of item parameter and ability estimation, which subsequently influences the computation of item characteristic curves (ICCs) and true scores. By judiciously combining sections of a Law School Admission Test (LSAT), 11 sections with varying degrees of unidimensional and multidimensional structure are used…
Descriptors: Ability, College Entrance Examinations, Computer Assisted Testing, Estimation (Mathematics)
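Independent of the dimensionality question studied here, the way ICCs feed into true scores is simple: the IRT true score at a given theta is the sum of the item characteristic curves. A minimal 3PL sketch with hypothetical parameters:

```python
import numpy as np

def true_score(theta, a, b, c):
    """IRT true score: the expected number-correct score at theta, obtained by
    summing the 3PL item characteristic curves P_i(theta)."""
    p = c + (1 - c) / (1.0 + np.exp(-a * (theta - b)))
    return np.sum(p)

# Hypothetical 5-item set
a = np.array([1.1, 0.9, 1.4, 1.0, 1.2])
b = np.array([-0.8, -0.2, 0.3, 0.7, 1.5])
c = np.array([0.2, 0.25, 0.2, 0.15, 0.2])
print(true_score(theta=0.0, a=a, b=b, c=c))
```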