Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 1
  Since 2016 (last 10 years): 3
  Since 2006 (last 20 years): 8
Descriptor
  Adaptive Testing: 18
  Comparative Analysis: 18
  Difficulty Level: 18
  Computer Assisted Testing: 13
  Test Items: 10
  Item Response Theory: 8
  Simulation: 6
  Computation: 5
  Test Format: 5
  Bayesian Statistics: 3
  Error of Measurement: 3
Source
  Journal of Educational…: 3
  Applied Measurement in…: 1
  ETS Research Report Series: 1
  Eurasian Journal of…: 1
  Perspectives in Education: 1
  Practical Assessment,…: 1
  Quality Assurance in…: 1
Author
  Hansen, Duncan N.: 2
  Kim, Sooyeon: 2
  Moses, Tim: 2
  Belur, Madhu N.: 1
  Chaporkar, Prasanna: 1
  Cohen, Allan S.: 1
  Finney, Sara J.: 1
  Gershon, Richard C.: 1
  Hsu, Tse-Chi: 1
  Kim, Seock-Ho: 1
  Kirisci, Levent: 1
Publication Type
  Reports - Research: 13
  Journal Articles: 9
  Speeches/Meeting Papers: 6
  Reports - Evaluative: 4
  Reports - Descriptive: 1
Education Level
  Grade 11: 1
  Grade 12: 1
  High Schools: 1
  Secondary Education: 1
Assessments and Surveys
  Advanced Placement…: 1
Kárász, Judit T.; Széll, Krisztián; Takács, Szabolcs – Quality Assurance in Education: An International Perspective, 2023
Purpose: Based on a general formula that depends on test length and difficulty, the number of respondents, and the number of ability levels, this study aims to provide a closed formula for adaptive tests of medium difficulty (probability of solution p = 1/2) to determine the accuracy of the parameters for each item and in…
Descriptors: Test Length, Probability, Comparative Analysis, Difficulty Level
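The "medium difficulty" condition in the abstract above has a simple interpretation under a Rasch-type item response model (used here as an illustration, not necessarily the authors' exact formulation): the probability of a correct response reaches 1/2 exactly when the examinee's ability matches the item's difficulty.

```latex
P(X_{ij} = 1 \mid \theta_i, b_j) = \frac{1}{1 + e^{-(\theta_i - b_j)}},
\qquad
P = \tfrac{1}{2} \iff \theta_i = b_j
```

Here \(\theta_i\) is the ability of respondent \(i\) and \(b_j\) the difficulty of item \(j\); an adaptive test that targets items near the current ability estimate therefore operates close to the \(p = 1/2\) regime the study analyzes.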
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook – Journal of Educational Measurement, 2015
This inquiry is an investigation of item response theory (IRT) proficiency estimators' accuracy under multistage testing (MST). We chose a two-stage MST design that includes four modules (one at Stage 1, three at Stage 2) and three difficulty paths (low, middle, high). We assembled various two-stage MST panels (i.e., forms) by manipulating two…
Descriptors: Comparative Analysis, Item Response Theory, Computation, Accuracy
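The two-stage MST design described above routes every examinee from a single Stage 1 module into one of three Stage 2 modules (low, middle, high difficulty). A minimal sketch of such routing, with entirely hypothetical cut scores and module names (the paper does not publish its routing rules in this abstract):

```python
# Illustrative sketch, not the authors' code: route an examinee through a
# two-stage multistage test (MST) with one Stage 1 module and three
# Stage 2 difficulty paths. The cut fractions below are invented.

def route_stage2(stage1_correct: int, stage1_length: int = 20) -> str:
    """Pick a Stage 2 difficulty path from the Stage 1 number-correct score."""
    fraction = stage1_correct / stage1_length
    if fraction < 0.4:       # weaker Stage 1 performance -> easier module
        return "low"
    elif fraction < 0.7:     # middling performance -> middle module
        return "middle"
    return "high"            # strong performance -> harder module

print(route_stage2(5))   # low
print(route_stage2(12))  # middle
print(route_stage2(18))  # high
```

In a real MST panel the routing would typically use an IRT-based score rather than raw number-correct, but the branching structure is the same.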
Moothedath, Shana; Chaporkar, Prasanna; Belur, Madhu N. – Perspectives in Education, 2016
In recent years, the computerised adaptive test (CAT) has gained popularity over conventional exams in evaluating student capabilities with desired accuracy. However, the key limitation of CAT is that it requires a large pool of pre-calibrated questions. In the absence of such a pre-calibrated question bank, offline exams with uncalibrated…
Descriptors: Guessing (Tests), Computer Assisted Testing, Adaptive Testing, Maximum Likelihood Statistics
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook Henry – ETS Research Report Series, 2015
The purpose of this inquiry was to investigate the effectiveness of item response theory (IRT) proficiency estimators in terms of estimation bias and error under multistage testing (MST). We chose a 2-stage MST design in which 1 adaptation to the examinees' ability levels takes place. It includes 4 modules (1 at Stage 1, 3 at Stage 2) and 3 paths…
Descriptors: Item Response Theory, Computation, Statistical Bias, Error of Measurement
Pohl, Steffi – Journal of Educational Measurement, 2013
This article introduces longitudinal multistage testing (lMST), a special form of multistage testing (MST), as a method for adaptive testing in longitudinal large-scale studies. In lMST designs, test forms of different difficulty levels are used, whereas the values on a pretest determine the routing to these test forms. Since lMST allows for…
Descriptors: Adaptive Testing, Longitudinal Studies, Difficulty Level, Comparative Analysis
Zhang, Jinming; Li, Jie – Journal of Educational Measurement, 2016
An IRT-based sequential procedure is developed to monitor items for enhancing test security. The procedure uses a series of statistical hypothesis tests to examine whether the statistical characteristics of each item under inspection have changed significantly during CAT administration. This procedure is compared with a previously developed…
Descriptors: Computer Assisted Testing, Test Items, Difficulty Level, Item Response Theory
Özyurt, Hacer; Özyurt, Özcan – Eurasian Journal of Educational Research, 2015
Problem Statement: Learning-teaching activities bring with them the need to determine whether they achieve their goals. Thus, multiple-choice tests that pose the same set of questions to all examinees are frequently used. However, this traditional assessment and evaluation form contrasts with modern education, where individual learning characteristics are…
Descriptors: Probability, Adaptive Testing, Computer Assisted Testing, Item Response Theory
Thompson, Nathan A. – Practical Assessment, Research & Evaluation, 2011
Computerized classification testing (CCT) is an approach to designing tests with intelligent algorithms, similar to adaptive testing, but specifically designed for the purpose of classifying examinees into categories such as "pass" and "fail." Like adaptive testing for point estimation of ability, the key component is the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Classification, Probability
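The classification machinery Thompson describes is commonly implemented with a sequential probability ratio test (SPRT): after each item, the likelihood of the response string is compared at two ability points straddling the cut score, and testing stops once the evidence clears a pass or fail threshold. A hedged sketch under a Rasch model, with invented ability points and item difficulties:

```python
import math

# Illustrative SPRT for computerized classification testing (CCT).
# theta_fail/theta_pass bracket a hypothetical cut score; difficulties
# and error rates are made up for the example.

def rasch_p(theta: float, b: float) -> float:
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def sprt_classify(responses, difficulties, theta_fail=-0.5, theta_pass=0.5,
                  alpha=0.05, beta=0.05):
    """Return 'pass', 'fail', or 'continue' given the responses so far."""
    upper = math.log((1 - beta) / alpha)   # evidence needed to declare pass
    lower = math.log(beta / (1 - alpha))   # evidence needed to declare fail
    llr = 0.0                              # running log-likelihood ratio
    for x, b in zip(responses, difficulties):
        p1, p0 = rasch_p(theta_pass, b), rasch_p(theta_fail, b)
        llr += math.log(p1 / p0) if x == 1 else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "pass"
        if llr <= lower:
            return "fail"
    return "continue"  # inconclusive: administer more items

print(sprt_classify([1] * 12, [0.0] * 12))  # pass
print(sprt_classify([0] * 12, [0.0] * 12))  # fail
```

The adaptive flavor comes from choosing each next item to be maximally informative near the cut score rather than near the ability estimate, which is the contrast with point-estimation CAT that the abstract draws.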
Schnipke, Deborah L.; Reese, Lynda M. – 1997
Two-stage and multistage test designs provide a way of roughly adapting item difficulty to test-taker ability. All test takers take a parallel stage-one test, and, based on their scores, they are routed to tests of different difficulty levels in subsequent stages. These designs provide some of the benefits of standard computerized adaptive testing…
Descriptors: Ability, Adaptive Testing, Algorithms, Comparative Analysis

Ponsoda, Vicente; Olea, Julio; Rodriguez, Maria Soledad; Revuelta, Javier – Applied Measurement in Education, 1999
Compared easy and difficult versions of self-adapted tests (SAT) and computerized adaptive tests. No significant differences were found among the tests for estimated ability or posttest state anxiety in studies with 187 Spanish high school students, although other significant differences were found. Discusses implications for interpreting test…
Descriptors: Ability, Adaptive Testing, Comparative Analysis, Computer Assisted Testing
Kim, Seock-Ho; Cohen, Allan S. – 1996
Applications of item response theory to practical testing problems including equating, differential item functioning, and computerized adaptive testing, require that item parameter estimates be placed onto a common metric. In this study, three methods for developing a common metric under item response theory are compared: (1) linking separate…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Difficulty Level
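One of the standard common-metric approaches the Kim and Cohen study compares is a linear transformation of item parameters estimated on common items. A minimal mean/sigma linking sketch (the difficulty values are invented; the study's actual data and estimation details are not in this abstract):

```python
# Illustrative mean/sigma linking: find slope A and intercept B so that
# difficulties from a new calibration, rescaled as A*b + B, sit on the
# base-form metric. Input difficulties below are hypothetical.

def mean_sigma_link(b_new, b_base):
    """Return (A, B) from the means and SDs of common-item difficulties."""
    def mean(xs):
        return sum(xs) / len(xs)
    def sd(xs):
        m = mean(xs)
        return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    A = sd(b_base) / sd(b_new)
    B = mean(b_base) - A * mean(b_new)
    return A, B

# Common-item difficulty estimates on the two calibrations (hypothetical):
b_new = [-1.0, 0.0, 1.0]
b_base = [-0.8, 0.2, 1.2]
A, B = mean_sigma_link(b_new, b_base)
print(A, B)  # 1.0 0.2
```

Once A and B are estimated, every item parameter from the new calibration is rescaled onto the base metric, which is the precondition for equating, DIF analysis, and CAT item pools that the abstract lists.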
Roos, Linda L.; Wise, Steven L.; Finney, Sara J. – 1998
Previous studies have shown that, when administered a self-adapted test, a few examinees will choose item difficulty levels that are not well-matched to their proficiencies, resulting in high standard errors of proficiency estimation. This study investigated whether the previously observed effects of a self-adapted test--lower anxiety and higher…
Descriptors: Adaptive Testing, College Students, Comparative Analysis, Computer Assisted Testing
Gershon, Richard C. – 1989
Examinees (N=1,233) at the Johnson O'Connor Research Foundation (JOCRF) were administered one of three test forms in which only item order differed. The study was undertaken to determine the validity of the assumption underlying item response theory (IRT) that there are fixed item parameters that can predict performance. The Rasch IRT model was…
Descriptors: Academic Ability, Adaptive Testing, Adolescents, Adults
Wainer, Howard; And Others – 1991
When an examination consists, in whole or in part, of constructed response items, it is a common practice to allow the examinee to choose among a variety of questions. This procedure is usually adopted so that the limited number of items that can be completed in the allotted time does not unfairly affect the examinee. This results in the de facto…
Descriptors: Adaptive Testing, Chemistry, Comparative Analysis, Computer Assisted Testing
Stone, Gregory Ethan; Lunz, Mary E. – 1994
This paper explores the comparability of item calibrations for three types of items: (1) text only; (2) text with photographs; and (3) text plus graphics when items are presented on written tests and computerized adaptive tests. Data are from five different medical technology certification examinations administered nationwide in 1993. The Rasch…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Diagrams