Publication Date

| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 2 |
| Since 2022 (last 5 years) | 10 |
| Since 2017 (last 10 years) | 17 |
| Since 2007 (last 20 years) | 38 |
Author

| Author | Count |
| --- | --- |
| Wise, Steven L. | 8 |
| Plake, Barbara S. | 3 |
| Reckase, Mark D. | 3 |
| Stocking, Martha L. | 3 |
| Weiss, David J. | 3 |
| Andrich, David | 2 |
| Bergstrom, Betty | 2 |
| De Ayala, R. J. | 2 |
| Finney, Sara J. | 2 |
| Gershon, Richard C. | 2 |
| Hansen, Duncan N. | 2 |
Education Level

| Education Level | Count |
| --- | --- |
| Elementary Secondary Education | 6 |
| Elementary Education | 4 |
| High Schools | 4 |
| Higher Education | 4 |
| Secondary Education | 3 |
| Grade 4 | 2 |
| Intermediate Grades | 2 |
| Early Childhood Education | 1 |
| Grade 1 | 1 |
| Grade 11 | 1 |
| Grade 12 | 1 |
Audience

| Audience | Count |
| --- | --- |
| Researchers | 5 |
| Practitioners | 1 |
| Teachers | 1 |
Location

| Location | Count |
| --- | --- |
| California | 2 |
| Turkey | 2 |
| Australia | 1 |
| Florida | 1 |
| Greece | 1 |
| Hungary | 1 |
| Idaho | 1 |
| Indonesia | 1 |
| Iran | 1 |
| Nevada | 1 |
| New York | 1 |
Peer reviewed: Rocklin, Thomas R. – Applied Measurement in Education, 1994
This review covers the effects of self-adapted testing (SAT), in which examinees choose item difficulty themselves, on ability estimates, precision, and efficiency; the mechanisms behind SAT effects; and examinee reactions to SAT. Although SAT is less efficient than computerized adaptive testing, it is more efficient than fixed-item testing. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Difficulty Level
Schnipke, Deborah L.; Reese, Lynda M. – 1997
Two-stage and multistage test designs provide a way of roughly adapting item difficulty to test-taker ability. All test takers take a parallel stage-one test, and, based on their scores, they are routed to tests of different difficulty levels in subsequent stages. These designs provide some of the benefits of standard computerized adaptive testing…
Descriptors: Ability, Adaptive Testing, Algorithms, Comparative Analysis
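The routing idea in the Schnipke and Reese abstract can be sketched in a few lines. This is a minimal illustration, not the authors' design: the 20-item stage-one length and the 40%/70% cut fractions are assumed values chosen only for the example.

```python
def route(stage1_score, n_items=20):
    """Route a test taker to a stage-two form from a stage-one raw score.

    Cut scores (40% and 70% correct) and the 20-item stage-one length
    are illustrative assumptions, not values from the study.
    """
    frac = stage1_score / n_items
    if frac < 0.40:
        return "easy"
    elif frac < 0.70:
        return "medium"
    return "hard"

print(route(6))   # low scorer routed to the easy form
print(route(12))  # middle scorer routed to the medium form
print(route(18))  # high scorer routed to the hard form
```

Because every examinee takes the same stage-one test, routing needs only the raw score, which is what makes these designs simpler to administer than fully adaptive testing.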
Plake, Barbara S.; And Others – 1994
In self-adapted testing (SAT), examinees select the difficulty level of items administered. This study investigated three variations of prior information provided when taking an SAT: (1) no information (examinees selected item difficulty levels without prior information); (2) view (examinees inspected a typical item from each difficulty level…
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Difficulty Level
De Ayala, R. J. – 1990
The effect of dimensionality on an adaptive test's ability estimation was examined. Two-dimensional data sets, which differed from one another in the interdimensional ability association, the correlation among the difficulty parameters, and whether the item discriminations were or were not confounded with item difficulty, were generated for 1,600…
Descriptors: Ability Identification, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
Harris, Dickie A.; Penell, Roger J. – 1977
This study used a series of simulations to answer questions about the efficacy of adaptive testing raised by empirical studies. The first study showed that, for reasonably high entry points, parameters estimated from paper-and-pencil test protocols cross-validated remarkably well to groups actually tested at a computer terminal. This suggested that…
Descriptors: Adaptive Testing, Computer Assisted Testing, Cost Effectiveness, Difficulty Level
Peer reviewed: Plake, Barbara S.; And Others – Educational and Psychological Measurement, 1995
No significant differences in performance or anxiety were found among college students (n=218) taking a self-adapted test, whether they selected item difficulty without any prior information, inspected an item before selecting, or answered a typical item and received performance feedback. (SLD)
Descriptors: Achievement, Adaptive Testing, College Students, Computer Assisted Testing
Peer reviewed: Ponsoda, Vicente; Olea, Julio; Rodriguez, Maria Soledad; Revuelta, Javier – Applied Measurement in Education, 1999
Compared easy and difficult versions of self-adapted tests (SAT) with computerized adaptive tests. No significant differences were found among the tests in estimated ability or posttest state anxiety in studies with 187 Spanish high school students, although other significant differences were found. Discusses implications for interpreting test…
Descriptors: Ability, Adaptive Testing, Comparative Analysis, Computer Assisted Testing
Wang, Xiang-bo; And Others – 1993
An increasingly popular test format allows examinees to choose the items they will answer from among a larger set. When examinee choice is allowed, fairness requires that the different test forms thus formed be equated for their possible differential difficulty. For this equating to be possible, it is necessary to know how well examinees would have…
Descriptors: Adaptive Testing, Advanced Placement, Difficulty Level, Equated Scores
Kim, Seock-Ho; Cohen, Allan S. – 1996
Applications of item response theory to practical testing problems including equating, differential item functioning, and computerized adaptive testing, require that item parameter estimates be placed onto a common metric. In this study, three methods for developing a common metric under item response theory are compared: (1) linking separate…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Difficulty Level
Gershon, Richard C.; And Others – 1994
A 1992 study by R. Gershon found discrepancies when comparing the theoretical Rasch item characteristic curve with the average empirical curve for 1,304 vocabulary items administered to 7,711 students. When person-item mismatches were deleted (for any person-item interaction where the ability of the person was much higher or much lower than the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Elementary Education
Peer reviewed: Lord, Frederic M. – Educational and Psychological Measurement, 1971
Descriptors: Ability, Adaptive Testing, Computer Oriented Programs, Difficulty Level
Peer reviewed: Styles, Irene; Andrich, David – Educational and Psychological Measurement, 1993
This paper describes the use of the Rasch model to help implement computerized administration of the standard and advanced forms of Raven's Progressive Matrices (RPM), to compare relative item difficulties, and to convert scores between the standard and advanced forms. The sample consisted of 95 girls and 95 boys in Australia. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Elementary Education
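The score conversion mentioned in the Styles and Andrich abstract rests on a property of the Rasch model: a person's expected raw score on any calibrated form is the sum of the item response probabilities, so expected scores at a common ability link raw scores across forms. A minimal sketch follows; the item difficulties are hypothetical, since the actual RPM calibrations are not given here.

```python
import math

def rasch_p(theta, b):
    # Rasch model: P(correct) is a logistic function of theta - b
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def expected_score(theta, difficulties):
    # Expected raw score on a form = sum of per-item probabilities
    return sum(rasch_p(theta, b) for b in difficulties)

# Hypothetical difficulty calibrations (logits) for two forms on one scale
standard = [-1.0, -0.5, 0.0, 0.5, 1.0]
advanced = [0.5, 1.0, 1.5, 2.0, 2.5]

theta = 1.0  # one ability yields two expected raw scores
print(expected_score(theta, standard))  # higher on the easier form
print(expected_score(theta, advanced))  # lower on the harder form
```

Pairing the two expected scores across a range of theta values produces the conversion table between forms.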
Sykes, Robert C.; Ito, Kyoko – 1995
Whether the presence of bidimensionality has any effect on the adaptive recalibration of test items was studied through live-data simulation of computer adaptive testing (CAT) forms. The source data were examinee responses to the 298 scored multiple choice items of a licensure examination in a health care profession. Three 75-item part-forms,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Estimation (Mathematics)
Veerkamp, Wim J. J.; Berger, Martijn P. F. – 1994
Items with the highest discrimination parameter values in a logistic item response theory (IRT) model do not necessarily give maximum information. This paper shows which discrimination parameter values (as a function of the guessing parameter and the distance between person ability and item difficulty) give maximum information for the…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
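The Veerkamp and Berger point can be checked numerically with the standard three-parameter logistic (3PL) item information function, I(theta) = a^2 * (Q/P) * ((P - c)/(1 - c))^2. The item parameter values below are illustrative assumptions, not values from the paper: with guessing (c = 0.25), a less discriminating item carries more information than a more discriminating one when the examinee sits well below the item's difficulty.

```python
import math

def p3pl(theta, a, b, c):
    # 3PL probability of a correct response
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def info(theta, a, b, c):
    # Fisher information of a 3PL item at ability theta
    p = p3pl(theta, a, b, c)
    return a ** 2 * ((1 - p) / p) * ((p - c) / (1 - c)) ** 2

# Examinee two logits below both items (theta - b = -2), guessing c = 0.25:
print(info(-2.0, a=2.0, b=0.0, c=0.25))  # highly discriminating item
print(info(-2.0, a=1.0, b=0.0, c=0.25))  # moderately discriminating item
```

At that distance the a = 1.0 item out-informs the a = 2.0 item, which is why maximum-information item selection cannot simply favor the highest discrimination parameter.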
Roos, Linda L.; Wise, Steven L.; Finney, Sara J. – 1998
Previous studies have shown that, when administered a self-adapted test, a few examinees will choose item difficulty levels that are not well-matched to their proficiencies, resulting in high standard errors of proficiency estimation. This study investigated whether the previously observed effects of a self-adapted test--lower anxiety and higher…
Descriptors: Adaptive Testing, College Students, Comparative Analysis, Computer Assisted Testing


