Showing all 13 results
Peer reviewed
PDF on ERIC
Carol Eckerly; Yue Jia; Paul Jewsbury – ETS Research Report Series, 2022
Testing programs have explored the use of technology-enhanced items alongside traditional item types (e.g., multiple-choice and constructed-response items) as measurement evidence of latent constructs modeled with item response theory (IRT). In this report, we discuss considerations in applying IRT models to a particular type of adaptive testlet…
Descriptors: Computer Assisted Testing, Test Items, Item Response Theory, Scoring
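Several entries in these results take item response theory as their starting point. As a point of reference, a minimal sketch of the standard three-parameter logistic (3PL) item response function (Lord, 1980) is below; the parameter names a, b, c are the conventional discrimination, difficulty, and guessing parameters, not values taken from any report listed here.

```python
import math

def p_correct(theta: float, a: float, b: float, c: float) -> float:
    """3PL IRT model: probability that an examinee with ability
    theta answers correctly an item with discrimination a,
    difficulty b, and lower asymptote (guessing) c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
```

At theta equal to the item difficulty b, the probability sits halfway between the guessing floor c and 1.0, which is one way to read b as a "location" parameter on the ability scale.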
Hong Jiao, Editor; Robert W. Lissitz, Editor – IAP - Information Age Publishing, Inc., 2024
With the exponential increase in digital assessment, different types of data in addition to item responses become available in the measurement process. One of the salient features of digital assessment is that process data can be easily collected. This non-conventional structured or unstructured data source may bring new perspectives to better…
Descriptors: Artificial Intelligence, Natural Language Processing, Psychometrics, Computer Assisted Testing
Peer reviewed
Direct link
Huebner, Alan – Practical Assessment, Research & Evaluation, 2010
Cognitive diagnostic modeling has become an exciting new field of psychometric research. These models aim to diagnose examinees' mastery status of a group of discretely defined skills, or attributes, thereby providing them with detailed information regarding their specific strengths and weaknesses. Combining cognitive diagnosis with computer…
Descriptors: Cognitive Tests, Diagnostic Tests, Computer Assisted Testing, Adaptive Testing
Peer reviewed
PDF on ERIC
Rock, Donald A. – ETS Research Report Series, 2012
This paper provides a history of ETS's role in developing assessment instruments and psychometric procedures for measuring change in large-scale national assessments funded by the Longitudinal Studies branch of the National Center for Education Statistics. It documents the innovations developed during more than 30 years of working with…
Descriptors: Models, Educational Change, Longitudinal Studies, Educational Development
Peer reviewed
Reise, Steven P. – Applied Psychological Measurement, 2001
This book contains a series of research articles about computerized adaptive testing (CAT) written for advanced psychometricians. The book is divided into sections on: (1) item selection and examinee scoring in CAT; (2) examples of CAT applications; (3) item banks; (4) determining model fit; and (5) using testlets in CAT. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Goodness of Fit, Item Banks
Betz, Nancy E.; Weiss, David J. – 1973
A two-stage adaptive test and a conventional peaked test were constructed and administered on a time-shared computer system to students in undergraduate psychology courses. (The two-stage adaptive test consisted of a routing test followed by one of a series of measurement tests.) Comparison of the score distributions showed that the two-stage test…
Descriptors: Adaptive Testing, Computer Assisted Testing, Individual Testing, Individualized Programs
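The two-stage design described above reduces to a simple routing rule: score the routing test, then assign one of several second-stage measurement tests. A minimal sketch, with number-correct cutoffs that are purely illustrative and not taken from the Betz and Weiss study:

```python
def route(routing_score: int, cutoffs: tuple = (5, 10)) -> str:
    """Route an examinee to a second-stage measurement test based
    on the number-correct score from the routing test.
    The cutoff values are hypothetical, for illustration only."""
    lo, hi = cutoffs
    if routing_score < lo:
        return "easy"
    if routing_score < hi:
        return "medium"
    return "hard"
```

The appeal of the design is operational simplicity: unlike fully adaptive testing, only one branching decision is made per examinee.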
Stocking, Martha L. – 1994
Modern applications of computerized adaptive testing (CAT) are typically grounded in item response theory (IRT; Lord, 1980). While the IRT foundations of adaptive testing provide a number of approaches to adaptive test scoring that may seem natural and efficient to psychometricians, these approaches may be more demanding for test takers, test…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Equated Scores
Schnipke, Deborah L.; Reese, Lynda M. – 1997
Two-stage and multistage test designs provide a way of roughly adapting item difficulty to test-taker ability. All test takers take a parallel stage-one test, and, based on their scores, they are routed to tests of different difficulty levels in subsequent stages. These designs provide some of the benefits of standard computerized adaptive testing…
Descriptors: Ability, Adaptive Testing, Algorithms, Comparative Analysis
Slater, Sharon C.; Schaeffer, Gary A. – 1996
The General Computer Adaptive Test (CAT) of the Graduate Record Examinations (GRE) includes three operational sections that are separately timed and scored. A "no score" is reported if the examinee answers fewer than 80% of the items or if the examinee does not answer all of the items and leaves the section before time expires. The 80%…
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Equal Education
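The "no score" rule described in the abstract is concrete enough to state as a predicate: a section score is reportable only if at least 80% of items were answered, and an examinee who leaves before time expires must have answered every item. A sketch under those stated conditions (function and parameter names are hypothetical):

```python
def reportable(answered: int, total: int, left_early: bool,
               threshold: float = 0.8) -> bool:
    """Return True if a section score would be reported under a
    rule like the one described for the GRE CAT: 'no score' when
    fewer than 80% of items are answered, or when the examinee
    leaves the section early with items unanswered."""
    if answered < threshold * total:
        return False
    if left_early and answered < total:
        return False
    return True
```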
McBride, James R. – 1979
In an adaptive test, the test administrator chooses test items sequentially during the test, in such a way as to adapt test difficulty to examinee ability as shown during testing. An effectively designed adaptive test can resolve the dilemma inherent in conventional test design. By tailoring tests to individuals, the adaptive test can…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Military Personnel
PDF pending restoration
Mills, Craig N.; Stocking, Martha L. – 1995
Computerized adaptive testing (CAT), while well-grounded in psychometric theory, has had few large-scale applications for high-stakes, secure tests in the past. This is now changing as the cost of computing has declined rapidly. As is always true where theory is translated into practice, many practical issues arise. This paper discusses a number…
Descriptors: Adaptive Testing, Computer Assisted Testing, High Stakes Tests, Item Banks
Lord, Frederic M. – 1971
Some stochastic approximation procedures are considered in relation to the problem of choosing a sequence of test questions to accurately estimate a given examinee's standing on a psychological dimension. Illustrations are given evaluating certain procedures in a specific context. (Author/CK)
Descriptors: Academic Ability, Adaptive Testing, Computer Programs, Difficulty Level
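The stochastic approximation procedures Lord considers are in the Robbins-Monro family: nudge the ability estimate up after a correct response and down after an incorrect one, with a step size that shrinks across items. A simplified sketch of that idea, not Lord's exact procedure:

```python
def robbins_monro(responses: list, start: float = 0.0,
                  step: float = 1.0) -> float:
    """Robbins-Monro-style ability update: after item n, move the
    estimate by +step/n for a correct response and -step/n for an
    incorrect one, so early items move the estimate most."""
    theta = start
    for n, correct in enumerate(responses, start=1):
        direction = 1.0 if correct else -1.0
        theta += direction * step / n
    return theta
```

The shrinking step size is what makes the sequence settle: each successive item refines rather than overturns the running estimate.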
Rizavi, Saba; Way, Walter D.; Davey, Tim; Herbert, Erin – Educational Testing Service, 2004
Item parameter estimates vary for a variety of reasons, including estimation error, characteristics of the examinee samples, and context effects (e.g., item location effects, section location effects, etc.). Although we expect variation based on theory, there is reason to believe that observed variation in item parameter estimates exceeds what…
Descriptors: Adaptive Testing, Test Items, Computation, Context Effect