Hambleton, Ronald K.; Sireci, Stephen G.; Swaminathan, H.; Xing, Dehui; Rizavi, Saba – 2003
The purposes of this research study were to develop and field test anchor-based judgmental methods for enabling test specialists to estimate item difficulty statistics. The study consisted of three related field tests. In each, researchers worked with six Law School Admission Test (LSAT) test specialists and one or more of the LSAT subtests. The…
Descriptors: Adaptive Testing, College Entrance Examinations, Computer Assisted Testing, Difficulty Level
Reese, Lynda M.; Schnipke, Deborah L. – 1999
A two-stage design provides a way of roughly adapting item difficulty to test-taker ability. All test takers take a parallel stage-one test, and based on their scores, they are routed to tests of different difficulty levels in the second stage. This design provides some of the benefits of standard computer adaptive testing (CAT), such as increased…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Difficulty Level
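The routing logic the abstract above describes can be sketched as follows. This is a minimal illustration of a two-stage design, not the authors' actual implementation; the cut scores and form labels are hypothetical.

```python
def route_second_stage(stage_one_score, cuts=(10, 20)):
    """Route a test taker to a second-stage form based on the stage-one score.

    Hypothetical cut points: below `cuts[0]` -> easy form,
    below `cuts[1]` -> medium form, otherwise -> hard form.
    """
    low, high = cuts
    if stage_one_score < low:
        return "easy"
    elif stage_one_score < high:
        return "medium"
    return "hard"

# Low scorers are routed to the easier form, high scorers to the harder one.
print(route_second_stage(5))   # easy
print(route_second_stage(25))  # hard
```

Unlike item-level adaptive testing, the adaptation here happens only once, at the routing point, which is why the design provides only "some of the benefits" of standard CAT.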
Gershon, Richard; Bergstrom, Betty – 1995
When examinees are allowed to review responses on an adaptive test, can they "cheat" the adaptive algorithm in order to take an easier test and improve their performance? In theory, deliberately answering items incorrectly lowers the examinee's ability estimate, so easier items are administered. If review is then allowed,…
Descriptors: Adaptive Testing, Algorithms, Cheating, Computer Assisted Testing
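The mechanism hypothesized above — that wrong answers pull the ability estimate down — can be seen in a small Rasch-model sketch. This is an illustration of the general principle, not the procedure used in the study; the item difficulties and response patterns are invented.

```python
import math

def rasch_mle(responses, difficulties, iters=50):
    """Newton-Raphson maximum-likelihood ability estimate under the Rasch model.

    `responses` are 0/1 scores; `difficulties` are item b-parameters.
    Assumes a mixed response pattern (not all right or all wrong).
    """
    theta = 0.0
    for _ in range(iters):
        ps = [1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties]
        grad = sum(x - p for x, p in zip(responses, ps))   # score residual
        info = sum(p * (1.0 - p) for p in ps)              # test information
        theta += grad / info
    return theta

diffs = [-1.0, 0.0, 1.0, 0.5, -0.5]
honest = rasch_mle([1, 1, 1, 0, 1], diffs)  # mostly correct -> high estimate
tanked = rasch_mle([0, 0, 1, 0, 0], diffs)  # deliberate misses -> low estimate
assert tanked < honest
```

Because the adaptive algorithm selects items near the current estimate, a deliberately lowered estimate yields easier subsequent items — which is exactly the review-based strategy the study investigates.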

Rocklin, Thomas R. – Applied Measurement in Education, 1994
This review covers the effects of self-adapted testing (SAT), in which examinees choose the difficulty of items themselves, on ability estimates, precision, and efficiency; the mechanisms of those effects; and examinee reactions to SAT. SAT is less efficient than computer-adapted testing but more efficient than fixed-item testing. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Difficulty Level
Schnipke, Deborah L.; Reese, Lynda M. – 1997
Two-stage and multistage test designs provide a way of roughly adapting item difficulty to test-taker ability. All test takers take a parallel stage-one test, and, based on their scores, they are routed to tests of different difficulty levels in subsequent stages. These designs provide some of the benefits of standard computerized adaptive testing…
Descriptors: Ability, Adaptive Testing, Algorithms, Comparative Analysis
De Ayala, R. J. – 1990
The effect of dimensionality on an adaptive test's ability estimation was examined. Two-dimensional data sets, which differed from one another in the interdimensional ability association, the correlation among the difficulty parameters, and whether the item discriminations were or were not confounded with item difficulty, were generated for 1,600…
Descriptors: Ability Identification, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
Kim, Seock-Ho; Cohen, Allan S. – 1996
Applications of item response theory to practical testing problems, including equating, differential item functioning, and computerized adaptive testing, require that item parameter estimates be placed onto a common metric. In this study, three methods for developing a common metric under item response theory are compared: (1) linking separate…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Difficulty Level
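One standard way to place parameter estimates on a common metric, mean/sigma linking via anchor items, can be sketched as follows. This illustrates the general linking idea only; it is not necessarily one of the three methods compared in the study, and the anchor values below are invented.

```python
from statistics import mean, stdev

def mean_sigma_link(b_source, b_target_common, b_source_common):
    """Rescale difficulties from a source calibration onto a target metric.

    `b_target_common` and `b_source_common` are the two calibrations'
    estimates for the same anchor items; the linear transformation
    b* = A*b + B matches their means and standard deviations.
    """
    A = stdev(b_target_common) / stdev(b_source_common)
    B = mean(b_target_common) - A * mean(b_source_common)
    return [A * b + B for b in b_source]

# Hypothetical anchors: the target metric is 2*b + 1 relative to the source.
linked = mean_sigma_link([0.5, -2.0],
                         b_target_common=[-1.0, 1.0, 3.0],
                         b_source_common=[-1.0, 0.0, 1.0])
print(linked)  # [2.0, -3.0]
```

Once all items are expressed on one metric, tasks such as equating or on-line recalibration in CAT can compare parameters directly.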
Gershon, Richard C.; And Others – 1994
A 1992 study by R. Gershon found discrepancies when comparing the theoretical Rasch item characteristic curve with the average empirical curve for 1,304 vocabulary items administered to 7,711 students. When person-item mismatches were deleted (for any person-item interaction where the ability of the person was much higher or much lower than the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Elementary Education
Sykes, Robert C.; Ito, Kyoko – 1995
Whether the presence of bidimensionality has any effect on the adaptive recalibration of test items was studied through live-data simulation of computer adaptive testing (CAT) forms. The source data were examinee responses to the 298 scored multiple choice items of a licensure examination in a health care profession. Three 75-item part-forms,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Estimation (Mathematics)

Dodd, Barbara G.; And Others – Educational and Psychological Measurement, 1993
Effects of the following variables on performance of computerized adaptive testing (CAT) procedures for the partial credit model (PCM) were studied: (1) stopping rule for terminating CAT; (2) item pool size; and (3) distribution of item difficulties. Implications of findings for CAT systems based on the PCM are discussed. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Difficulty Level

De Ayala, R. J. – Educational and Psychological Measurement, 1992
Effects of dimensionality on ability estimation of an adaptive test were examined using generated data in Bayesian computerized adaptive testing (CAT) simulations. Generally, increasing interdimensional difficulty association produced a slight decrease in test length and an increase in accuracy of ability estimation as assessed by root mean square…
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Assisted Testing, Computer Simulation
Ito, Kyoko; Sykes, Robert C. – 1994
Responses to previously calibrated items administered in a computerized adaptive testing (CAT) mode may be used to recalibrate the items. This live-data simulation study investigated the possibility, and limitations, of on-line adaptive recalibration of precalibrated items. Responses to items of a Rasch-based paper-and-pencil licensure examination…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Difficulty Level
Ackerman, Terry A. – 1987
The purpose of this study was to investigate the effect of using multidimensional items in a computer adaptive test (CAT) setting which assumes a unidimensional item response theory (IRT) framework. Previous research has suggested that the composite of multidimensional abilities being estimated by a unidimensional IRT model is not constant…
Descriptors: Adaptive Testing, College Entrance Examinations, Computer Assisted Testing, Computer Simulation
Kirisci, Levent; Hsu, Tse-Chi – 1992
A predictive adaptive testing (PAT) strategy was developed based on statistical predictive analysis, and its feasibility was studied by comparing PAT performance to those of the Flexilevel, Bayesian modal, and expected a posteriori (EAP) strategies in a simulated environment. The proposed adaptive test is based on the idea of using item difficulty…
Descriptors: Adaptive Testing, Bayesian Statistics, Comparative Analysis, Computer Assisted Testing
Parshall, Cynthia G.; And Others – 1994
Response latency information has been used alongside response accuracy when obtaining trait-level estimates and, more recently, to flag unusual response patterns, to establish appropriate time-to-test limits (Reese, 1993), and to determine predictors of the amount of time needed to administer a…
Descriptors: Ability, Adaptive Testing, Age Differences, Classification