Descriptor
Algorithms (6)
Difficulty Level (6)
Computer Assisted Testing (5)
Adaptive Testing (4)
Item Banks (3)
Simulation (3)
Test Items (3)
Estimation (Mathematics) (2)
Higher Education (2)
Test Construction (2)
Undergraduate Students (2)
Author
Bergstrom, Betty (1)
Dimitrov, Dimiter M. (1)
Gershon, Richard (1)
Lau, C. Allen (1)
Linacre, John Michael (1)
Reese, Lynda M. (1)
Roos, Linda L. (1)
Schnipke, Deborah L. (1)
Wang, Tianyou (1)
Publication Type
Speeches/Meeting Papers (6)
Reports - Evaluative (3)
Reports - Research (2)
Guides - Non-Classroom (1)
Audience
Practitioners (1)
Teachers (1)
Assessments and Surveys
National Assessment of… (1)
Lau, C. Allen; Wang, Tianyou – 1999
A study was conducted to extend the sequential probability ratio testing (SPRT) procedure with the polytomous model under some practical constraints in computerized classification testing (CCT), such as methods to control item exposure rate, and to study the effects of other variables, including item information algorithms, test difficulties, item…
Descriptors: Algorithms, Computer Assisted Testing, Difficulty Level, Item Banks
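The classification rule at the heart of this study, the SPRT, can be sketched briefly. The snippet below is a minimal illustration under assumptions not taken from the abstract: it uses a dichotomous Rasch model rather than the polytomous model the study extends, and the function names, the indifference-region half-width `delta`, and the nominal error rates `alpha` and `beta` are illustrative.

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def sprt_classify(responses, difficulties, theta_cut, delta=0.5,
                  alpha=0.05, beta=0.05):
    """Sequential probability ratio test for a pass/fail decision.

    Compares the likelihood of the response string at two points
    straddling the cut score (theta_cut +/- delta) and stops as soon
    as the log-likelihood ratio crosses either decision bound.
    """
    upper = math.log((1 - beta) / alpha)   # decide "pass"
    lower = math.log(beta / (1 - alpha))   # decide "fail"
    llr = 0.0
    for x, b in zip(responses, difficulties):
        p_hi = rasch_prob(theta_cut + delta, b)
        p_lo = rasch_prob(theta_cut - delta, b)
        llr += math.log(p_hi / p_lo) if x == 1 else math.log((1 - p_hi) / (1 - p_lo))
        if llr >= upper:
            return "pass"
        if llr <= lower:
            return "fail"
    return "undecided"   # test length exhausted before a bound was crossed
```

The practical constraints the study examines, such as item exposure control, would operate in the item-selection step that supplies each new item to this decision rule.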
Linacre, John Michael – 1988
Computer-adaptive testing (CAT) allows improved security, greater scoring accuracy, shorter testing periods, quicker availability of results, and reduced guessing and other undesirable test behavior. Simple approaches can be applied by the classroom teacher, or other content specialist, who possesses simple computer equipment and elementary…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Cutting Scores
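As a rough illustration of the kind of "simple approach" the paper has in mind, the sketch below runs an adaptive loop with a closest-difficulty selection rule and a shrinking step-size ability update. It is a hypothetical sketch, not Linacre's procedure: `item_bank`, the `answer_fn` callback, the starting step of 1.0, and the 0.6 shrink factor are all assumptions.

```python
def simple_cat(item_bank, answer_fn, start_theta=0.0, step=1.0, n_items=15):
    """Bare-bones adaptive loop: administer the unused item closest in
    difficulty to the current estimate, move the estimate up on a correct
    answer and down on an incorrect one, and shrink the step each time.
    `item_bank` maps item id -> difficulty; `answer_fn(item)` is a
    hypothetical callback returning 1 for correct and 0 otherwise."""
    theta, administered = start_theta, []
    for _ in range(min(n_items, len(item_bank))):
        item = min((i for i in item_bank if i not in administered),
                   key=lambda i: abs(item_bank[i] - theta))
        administered.append(item)
        theta += step if answer_fn(item) else -step
        step = max(step * 0.6, 0.1)      # decreasing step size
    return theta
```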
Gershon, Richard; Bergstrom, Betty – 1995
When examinees are allowed to review responses on an adaptive test, can they "cheat" the adaptive algorithm in order to take an easier test and improve their performance? Theoretically, deliberately answering items incorrectly will lower the examinee ability estimate and easy test items will be administered. If review is then allowed,…
Descriptors: Adaptive Testing, Algorithms, Cheating, Computer Assisted Testing
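The strategy under investigation, answering early items incorrectly on purpose so that the algorithm serves easier items, can be made concrete with a toy simulation. Everything in the sketch (the uniform 200-item bank, the step-size estimator, the number of deliberate misses) is an assumption for illustration and is not the authors' simulation design.

```python
import math
import random

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def mean_difficulty(theta_true, n_items=20, n_wrong=0, step=0.8, seed=0):
    """Run one toy adaptive test and return the mean difficulty of the
    items administered. If n_wrong > 0, the simulated examinee answers
    that many early items incorrectly on purpose."""
    rng = random.Random(seed)
    bank = {i: -3 + 6 * i / 199 for i in range(200)}   # difficulties on [-3, 3]
    theta, used, difficulties = 0.0, set(), []
    for k in range(n_items):
        item = min((i for i in bank if i not in used),
                   key=lambda i: abs(bank[i] - theta))
        used.add(item)
        difficulties.append(bank[item])
        if k < n_wrong:
            correct = 0                                   # deliberate miss
        else:
            correct = int(rng.random() < rasch_prob(theta_true, bank[item]))
        theta += step if correct else -step
        step = max(step * 0.7, 0.1)                       # shrinking step
    return sum(difficulties) / len(difficulties)

print("mean item difficulty, honest start:       ", round(mean_difficulty(1.0), 2))
print("mean item difficulty, 5 deliberate misses:", round(mean_difficulty(1.0, n_wrong=5), 2))
```

Comparing the two printed means shows the sandbagging run receiving easier items on average in this toy setup, which is the premise the study then examines once review and answer changing are allowed.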
Schnipke, Deborah L.; Reese, Lynda M. – 1997
Two-stage and multistage test designs provide a way of roughly adapting item difficulty to test-taker ability. All test takers take a parallel stage-one test, and, based on their scores, they are routed to tests of different difficulty levels in subsequent stages. These designs provide some of the benefits of standard computerized adaptive testing…
Descriptors: Ability, Adaptive Testing, Algorithms, Comparative Analysis
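Routing in a two-stage design reduces to a threshold rule on the stage-one score. A minimal sketch follows, with made-up cut points and module labels; the real designs compared in the paper calibrate these choices rather than hard-coding them.

```python
def route(stage_one_score, thresholds=(7, 13)):
    """Route a test taker to a second-stage module based on the
    stage-one number-correct score. Cut points and module names
    are illustrative placeholders."""
    low, high = thresholds
    if stage_one_score < low:
        return "easy module"
    if stage_one_score < high:
        return "medium module"
    return "hard module"
```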
Dimitrov, Dimiter M. – 1994
An approach is described that reveals the hierarchical test structure (HTS) based on the cognitive demands of the test items and conducts linear trait modeling using the HTS elements as item difficulty components. This approach, referred to as the Hierarchical Latent Trait Approach (HLTA), employs an algorithm that allows all test items to…
Descriptors: Algorithms, Cognitive Processes, Difficulty Level, Higher Education
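The general idea behind modeling item difficulty from cognitive components can be illustrated with an ordinary least-squares fit of calibrated difficulties on a Q-matrix that marks which components each item demands. This is a generic sketch of that idea, not the HLTA algorithm itself; the Q-matrix, the difficulty values, and the use of numpy are assumptions.

```python
import numpy as np

# Hypothetical Q-matrix: rows are items, columns mark which cognitive
# components (HTS elements) each item demands.
Q = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [0, 1, 1],
    [1, 1, 1],
    [0, 0, 1],
], dtype=float)

# Calibrated item difficulties (illustrative values).
b = np.array([-0.8, 0.1, 0.9, 1.6, 0.4])

# Least-squares estimate of each component's contribution to difficulty.
eta, *_ = np.linalg.lstsq(Q, b, rcond=None)
print("component effects:      ", eta.round(2))
print("reproduced difficulties:", (Q @ eta).round(2))
```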
Roos, Linda L.; And Others – 1992
Computerized adaptive (CA) testing uses an algorithm to match examinee ability to item difficulty, while self-adapted (SA) testing allows the examinee to choose the difficulty of his or her items. Research comparing SA and CA testing has shown that examinees experience lower anxiety and improved performance with SA testing. All previous research…
Descriptors: Ability Identification, Adaptive Testing, Algebra, Algorithms
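The contrast between the two modes comes down to who picks the next item's difficulty. A minimal sketch of the two selection rules, using hypothetical data structures (`bank` as an item-to-difficulty dict, `levels` as a level-to-item-list dict):

```python
def ca_select(theta_estimate, bank, used):
    """Computerized-adaptive rule: the algorithm picks the unused item
    closest in difficulty to the current ability estimate."""
    return min((i for i in bank if i not in used),
               key=lambda i: abs(bank[i] - theta_estimate))

def sa_select(chosen_level, levels, used):
    """Self-adapted rule: the examinee names a difficulty level and the
    next unused item is drawn from that level's pool."""
    for item in levels[chosen_level]:
        if item not in used:
            return item
    raise ValueError("no items left at the requested difficulty level")
```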