Kang, Hyeon-Ah; Zheng, Yi; Chang, Hua-Hua – Journal of Educational and Behavioral Statistics, 2020
With the widespread use of computers in modern assessment, online calibration has become increasingly popular as a way of replenishing an item pool. The present study discusses online calibration strategies for a joint model of responses and response times. The study proposes likelihood inference methods for item parameter estimation and evaluates…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Response Theory, Reaction Time
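The online-calibration idea above — estimating a new item's parameters from examinees whose abilities are already known from operational items — can be sketched with a far simpler model than the authors' joint response/response-time model. Below is a minimal Newton-Raphson maximum-likelihood fit of a single Rasch difficulty; the function name and the Rasch simplification are my own, not the paper's:

```python
import math

def calibrate_difficulty(thetas, responses, iters=20):
    """Newton-Raphson MLE of a Rasch difficulty b from (ability, 0/1 response)
    pairs, treating the abilities as known (the online-calibration setting)."""
    b = 0.0
    for _ in range(iters):
        grad = 0.0   # d log-lik / db   = sum(P_i - x_i)
        hess = 0.0   # d2 log-lik / db2 = -sum(P_i * (1 - P_i))
        for theta, x in zip(thetas, responses):
            p = 1.0 / (1.0 + math.exp(-(theta - b)))
            grad += p - x
            hess -= p * (1.0 - p)
        b -= grad / hess  # Newton step; hess is strictly negative
    return b
```

With a few hundred pretest responses this recovers the generating difficulty closely; operational online calibration would update several parameters and interleave with adaptive item selection.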
Giada Spaccapanico Proietti; Mariagiulia Matteucci; Stefania Mignani; Bernard P. Veldkamp – Journal of Educational and Behavioral Statistics, 2024
Classical automated test assembly (ATA) methods assume fixed and known coefficients for the constraints and the objective function. This assumption does not hold for estimates of item response theory parameters, which are crucial elements in classical test assembly models. To account for uncertainty in ATA, we propose a chance-constrained…
Descriptors: Automation, Computer Assisted Testing, Ambiguity (Context), Item Response Theory
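As a loose illustration of the chance-constrained idea — optimizing a pessimistic quantile of test information over sampled item-parameter draws, rather than treating point estimates as exact — here is a greedy toy sketch. The paper's actual approach is a formal chance-constrained optimization model, not this heuristic; all names here are hypothetical:

```python
def chance_constrained_select(item_info_draws, n_items, alpha=0.05):
    """Greedy sketch: item_info_draws[i] is a list of R information values for
    item i, one per sampled parameter draw. Pick n_items so that the
    alpha-quantile (a pessimistic value) of total test information is large."""
    R = len(item_info_draws[0])
    chosen, totals = [], [0.0] * R
    pool = list(range(len(item_info_draws)))
    for _ in range(n_items):
        def quantile_if_added(i):
            vals = sorted(totals[r] + item_info_draws[i][r] for r in range(R))
            return vals[int(alpha * R)]  # pessimistic quantile across draws
        best = max(pool, key=quantile_if_added)
        pool.remove(best)
        chosen.append(best)
        for r in range(R):
            totals[r] += item_info_draws[best][r]
    return chosen
```

The point of the quantile objective is that an item with a high point-estimate of information but large calibration uncertainty is penalized relative to a slightly weaker but well-estimated item.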
Rausch, Andreas; Seifried, Juergen; Koegler, Kristina; Brandt, Steffen; Eigenmann, Rebecca; Siegfried, Christin – AERA Online Paper Repository, 2016
Although non-cognitive facets of competence--such as interest, attitudes, commitment, and self-concept--are prevalent in contemporary theoretical modeling, they are often neglected in measurement approaches or measured only by global self-report questionnaires. Based on the well-established experience sampling method (ESM) and following…
Descriptors: Computer Assisted Testing, Problem Solving, Measurement, Sampling
Rudner, Lawrence M.; Guo, Fanmin – Journal of Applied Testing Technology, 2011
This study investigates measurement decision theory (MDT) as an underlying model for computer adaptive testing when the goal is to classify examinees into one of a finite number of groups. The first analysis compares MDT with a popular item response theory model and finds little difference in terms of the percentage of correct classifications. The…
Descriptors: Adaptive Testing, Instructional Systems, Item Response Theory, Computer Assisted Testing
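The MDT classification rule the abstract refers to can be illustrated with a minimal Bayesian sketch: score each candidate group by its prior times the likelihood of the observed binary response pattern, then report the maximum a posteriori group. This is a generic sketch of the decision-theoretic idea, not the authors' exact formulation:

```python
import math

def classify(responses, item_probs_by_group, priors):
    """Return the group with the highest posterior for a 0/1 response pattern.
    item_probs_by_group[g][j] = P(correct on item j | membership in group g)."""
    best_group, best_score = None, -math.inf
    for g, probs in item_probs_by_group.items():
        score = math.log(priors[g])  # log prior + log likelihood
        for x, p in zip(responses, probs):
            score += math.log(p if x else 1.0 - p)
        if score > best_score:
            best_group, best_score = g, score
    return best_group
```

Unlike an IRT-based classifier, this needs only a per-group probability of success on each item, which is why MDT can work well with very small calibration samples.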

Bradlow, Eric T. – Journal of Educational and Behavioral Statistics, 1996
The three-parameter logistic (3-PL) model is described and a derivation of the 3-PL observed information function is presented for a single binary response from one examinee with known item parameters. Formulas are presented for the probability of negative information and for the expected information (always nonnegative). (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Item Response Theory
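For reference, the 3-PL model discussed above and its expected information take the standard textbook forms (the paper's contribution concerns the observed information, whose derivation is not reproduced here):

```latex
P_j(\theta) = c_j + (1 - c_j)\,\frac{1}{1 + e^{-a_j(\theta - b_j)}},
\qquad
I_j(\theta) = a_j^2 \,\frac{1 - P_j(\theta)}{P_j(\theta)}
\left(\frac{P_j(\theta) - c_j}{1 - c_j}\right)^{2}.
```

The expected information $I_j(\theta)$ is always nonnegative, as the abstract notes; the observed information for a single binary response, by contrast, can be negative when the guessing parameter $c_j > 0$, which is what motivates the probability-of-negative-information formulas.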
Allen, Nancy L.; Donoghue, John R. – 1995
This Monte Carlo study examined the effect of complex sampling of items on the measurement of differential item functioning (DIF) using the Mantel-Haenszel procedure. Data were generated using a three-parameter logistic item response theory model according to the balanced incomplete block (BIB) design used in the National Assessment of Educational…
Descriptors: Computer Assisted Testing, Difficulty Level, Elementary Secondary Education, Identification
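The Mantel-Haenszel procedure used in this study pools 2x2 tables (reference/focal group by correct/incorrect) across matching-score strata into a common odds ratio, often reported on the ETS delta scale via $-2.35\ln\hat{\alpha}_{MH}$. A minimal sketch of that computation (function name is mine; the study's BIB sampling design affects which strata exist, not this formula):

```python
import math

def mantel_haenszel(strata):
    """strata: list of (ref_correct, ref_wrong, foc_correct, foc_wrong) tuples,
    one per matching-score level. Returns (common odds ratio, ETS MH D-DIF)."""
    num = den = 0.0
    for a, b, c, d in strata:
        t = a + b + c + d
        if t == 0:
            continue  # empty stratum contributes nothing
        num += a * d / t  # ref-correct x focal-wrong
        den += b * c / t  # ref-wrong x focal-correct
    alpha = num / den
    return alpha, -2.35 * math.log(alpha)
```

An odds ratio of 1 (delta of 0) indicates no DIF after matching; values above 1 indicate the item favors the reference group.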