Showing 1 to 15 of 44 results
Peer reviewed
Chun Wang; Ping Chen; Shengyu Jiang – Journal of Educational Measurement, 2020
Many large-scale educational surveys have moved from linear form design to multistage testing (MST) design. One advantage of MST is that it can provide more accurate latent trait [theta] estimates using fewer items than required by linear tests. However, MST generates incomplete response data by design; hence, questions remain as to how to…
Descriptors: Test Construction, Test Items, Adaptive Testing, Maximum Likelihood Statistics
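A brief illustrative note on the Wang, Chen, and Jiang entry above: in a multistage test, every examinee takes a routing module and is then sent to a harder or easier second-stage module, so the pooled response matrix is incomplete by design. The sketch below, in Python, shows that structure; the module sizes, routing cutoff, and logistic response model are hypothetical choices for illustration, not taken from the article.

```python
# Minimal sketch of two-stage MST data: NaN marks items an examinee
# never saw because routing sent them to the other module.
import numpy as np

rng = np.random.default_rng(0)

n_examinees = 5
router_items = 4                               # stage-1 module, seen by all
module_items = 4                               # each stage-2 module
n_items = router_items + 2 * module_items      # full pool across modules

responses = np.full((n_examinees, n_items), np.nan)
theta = rng.normal(size=n_examinees)           # simulated latent traits

for i, th in enumerate(theta):
    p = 1.0 / (1.0 + np.exp(-th))              # hypothetical response model
    stage1 = (rng.random(router_items) < p).astype(float)
    responses[i, :router_items] = stage1
    # Route on the stage-1 raw score (cutoff chosen arbitrarily here).
    if stage1.sum() >= router_items / 2:
        lo, hi = router_items + module_items, n_items          # hard module
    else:
        lo, hi = router_items, router_items + module_items     # easy module
    responses[i, lo:hi] = (rng.random(hi - lo) < p).astype(float)

print(responses)   # each row: 8 observed responses, 4 NaN by design
```

Any estimation method for MST data has to accommodate exactly this kind of structured missingness, which is the question the abstract raises.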
Peer reviewed
Thissen, David – Journal of Educational and Behavioral Statistics, 2016
David Thissen, a professor in the Department of Psychology and Neuroscience, Quantitative Program at the University of North Carolina, has consulted and served on technical advisory committees for assessment programs that use item response theory (IRT) over the past couple of decades. He has concluded that there are usually two purposes…
Descriptors: Item Response Theory, Test Construction, Testing Problems, Student Evaluation
Peer reviewed
Kettler, Ryan J. – Review of Research in Education, 2015
This chapter introduces theory that undergirds the role of testing adaptations in assessment, provides examples of item modifications and testing accommodations, reviews research relevant to each, and introduces a new paradigm that incorporates opportunity to learn (OTL), academic enablers, testing adaptations, and inferences that can be made from…
Descriptors: Meta Analysis, Literature Reviews, Testing, Testing Accommodations
Peer reviewed
Roos, Linda L.; And Others – Educational and Psychological Measurement, 1996
This article describes Minnesota Computerized Adaptive Testing Language program code for using the MicroCAT 3.5 testing software to administer several types of self-adapted tests. Code is provided for: a basic self-adapted test; a self-adapted version of an adaptive mastery test; and a restricted self-adapted test. (Author/SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Mastery Tests, Programming
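For readers unfamiliar with the self-adapted testing the Roos entry describes: the examinee, not the algorithm, chooses the difficulty of each item, and the "restricted" variant limits each choice to levels adjacent to the previous one. The sketch below illustrates that control flow in Python; it is a hypothetical rendering, not the MCATL/MicroCAT 3.5 code the article provides, and the seven-level pool is an arbitrary assumption.

```python
# Minimal sketch of the choice rule behind a (restricted) self-adapted test.
def next_allowed_levels(prev_level, n_levels=7, restricted=True):
    """Difficulty levels the examinee may choose for the next item."""
    if prev_level is None or not restricted:
        return list(range(n_levels))           # basic self-adapted test
    lo = max(0, prev_level - 1)
    hi = min(n_levels - 1, prev_level + 1)
    return list(range(lo, hi + 1))             # restricted: adjacent levels only

print(next_allowed_levels(None))   # first item: any level
print(next_allowed_levels(3))      # afterwards: [2, 3, 4]
```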
Parshall, Cynthia G.; Davey, Tim; Nering, Mike L. – 1998
When items are selected during a computerized adaptive test (CAT) solely with regard to their measurement properties, it is commonly found that certain items are administered to nearly every examinee, and that a small number of the available items will account for a large proportion of the item administrations. This presents a clear security risk…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Efficiency
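One widely used remedy for the overexposure problem the Parshall, Davey, and Nering entry raises is probabilistic exposure control in the Sympson-Hetter style: the most informative item is only administered with some item-specific probability, otherwise the selector passes over it. The Python sketch below shows the idea; the exposure parameters and information values are hypothetical (operationally the parameters are calibrated by simulation), and the abstract does not say this is the method the paper studies.

```python
# Minimal sketch of Sympson-Hetter-style exposure control.
import random

def select_item(candidates, info, exposure_k, administered):
    """Walk items in descending information; administer an item only
    with probability exposure_k[item], otherwise pass over it."""
    for item in sorted(candidates, key=lambda j: info[j], reverse=True):
        if item in administered:
            continue
        if random.random() < exposure_k.get(item, 1.0):
            return item
    return None  # pool exhausted

info = {1: 2.1, 2: 1.9, 3: 0.7}                 # hypothetical item information
exposure_k = {1: 0.25, 2: 0.80, 3: 1.00}        # item 1 throttled hardest
print(select_item({1, 2, 3}, info, exposure_k, administered=set()))
```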
Peer reviewed
Kolen, Michael J. – Educational Assessment, 1999
Develops a conceptual framework that addresses score comparability for performance assessments, adaptive tests, paper-and-pencil tests, and alternate item pools for computerized tests. Outlines testing situation aspects that might threaten score comparability and describes procedures for evaluating the degree of score comparability. Suggests ways…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Performance Based Assessment
Stocking, Martha L.; Lewis, Charles – 1995
In the periodic testing environment associated with conventional paper-and-pencil tests, the frequency with which items are seen by test-takers is tightly controlled in advance of testing by policies that regulate both the reuse of test forms and the frequency with which candidates may take the test. In the continuous testing environment…
Descriptors: Adaptive Testing, Computer Assisted Testing, Selection, Test Construction
Davey, Tim; Parshall, Cynthia G. – 1995
Although computerized adaptive tests acquire their efficiency by successively selecting items that provide optimal measurement at each examinee's estimated level of ability, operational testing programs will typically consider additional factors in item selection. In practice, items are generally selected with regard to at least three, often…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
Peer reviewed
Wainer, Howard; Kiely, Gerard L. – Journal of Educational Measurement, 1987
The testlet, a bundle of test items, alleviates some problems associated with computerized adaptive testing: context effects, lack of robustness, and item difficulty ordering. While testlets may be linear or hierarchical, the most useful ones are four-level hierarchical units, containing 15 items and partitioning examinees into 16 classes. (GDC)
Descriptors: Adaptive Testing, Computer Assisted Testing, Context Effect, Item Banks
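A brief note on the arithmetic in the Wainer and Kiely entry above, assuming the hierarchical testlet branches in two at each of its four levels (consistent with, though not spelled out in, the abstract):

\[
\text{items} = 1 + 2 + 4 + 8 = 2^4 - 1 = 15, \qquad \text{classes} = 2^4 = 16 .
\]

Each examinee answers one item per level, so four binary branchings sort examinees into sixteen terminal classes while the full testlet stocks fifteen items.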
Drasgow, Fritz, Ed.; Olson-Buchanan, Julie B., Ed. – 1999
Chapters in this book present the challenges and dilemmas faced by researchers as they created new computerized assessments, focusing on issues addressed in developing, scoring, and administering the assessments. Chapters are: (1) "Beyond Bells and Whistles: An Introduction to Computerized Assessment" (Julie B. Olson-Buchanan and Fritz Drasgow);…
Descriptors: Adaptive Testing, Computer Assisted Testing, Elementary Secondary Education, Scoring
Stocking, Martha L. – 1994
As adaptive testing moves toward operational implementation in large scale testing programs, where it is important that adaptive tests be as parallel as possible to existing linear tests, a number of practical issues arise. This paper concerns three such issues. First, optimum item pool size is difficult to determine in advance of pool…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Standards
Peer reviewed
Wainer, Howard – Educational Measurement: Issues and Practice, 1993
Some cautions are sounded for converting a linearly administered test to an adaptive format. Four areas are identified in which practices broadly used in traditionally constructed tests can have adverse effects if thoughtlessly adopted when a test is administered in an adaptive mode. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Educational Practices, Test Construction
Carlson, Sybil B.; Ward, William C. – 1988
Issues concerning the cost and feasibility of using Formulating Hypotheses (FH) test item types for the Graduate Record Examinations have slowed research into their use. This project focused on two major issues that need to be addressed in considering FH items for operational use: the costs of scoring and the assignment of scores along a range of…
Descriptors: Adaptive Testing, Computer Assisted Testing, Costs, Pilot Projects
Schnipke, Deborah L.; Reese, Lynda M. – 1997
Two-stage and multistage test designs provide a way of roughly adapting item difficulty to test-taker ability. All test takers take a parallel stage-one test, and, based on their scores, they are routed to tests of different difficulty levels in subsequent stages. These designs provide some of the benefits of standard computerized adaptive testing…
Descriptors: Ability, Adaptive Testing, Algorithms, Comparative Analysis
McArthur, David L. – 1985
The use of computers to build diagnostic inferences is explored in two contexts. In computerized monitoring of liquid oxygen systems for the space shuttle, diagnoses are exact because they can be derived within a world which is closed. In computerized classroom testing of reading comprehension, programs deliver a constrained form of adaptive…
Descriptors: Adaptive Testing, Classroom Research, Computer Assisted Testing, Diagnostic Tests