Publication Date
In 2025: 0
Since 2024: 1
Since 2021 (last 5 years): 1
Since 2016 (last 10 years): 2
Since 2006 (last 20 years): 3
Descriptor
Data Collection: 3
Test Construction: 3
Sample Size: 2
Scores: 2
Computation: 1
Computer Assisted Testing: 1
Educational Assessment: 1
Equated Scores: 1
Error of Measurement: 1
Evaluation Criteria: 1
Item Analysis: 1
Source
ETS Research Report Series: 3
Author
Grant, Mary: 1
Gu, Lixong: 1
Guo, Hongwen: 1
Johnson, Matthew S.: 1
McCaffrey, Daniel F.: 1
McHale, Fred: 1
Moses, Tim: 1
Puhan, Gautam: 1
Rotou, Ourania: 1
Rupp, André A.: 1
Publication Type
Journal Articles: 3
Reports - Research: 2
Information Analyses: 1
Reports - Descriptive: 1
Hongwen Guo; Matthew S. Johnson; Daniel F. McCaffrey; Lixong Gu – ETS Research Report Series, 2024
The multistage testing (MST) design has been gaining attention and popularity in educational assessments. For testing programs that have small test-taker samples, it is challenging to calibrate new items to replenish the item pool. In the current research, we used the item pools from an operational MST program to illustrate how research studies…
Descriptors: Test Items, Test Construction, Sample Size, Scaling
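The abstract above refers to calibrating new items for programs with small test-taker samples. As a purely illustrative sketch (not the report's method), fixed-ability calibration of a single new item under the Rasch model might look like the following, assuming examinee abilities are treated as known from the operational pool and that numpy and scipy are available; the sample size and true difficulty are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical small-sample calibration of one new item under the Rasch model,
# treating examinee abilities (theta) as known from the operational MST pool.
rng = np.random.default_rng(0)
theta = rng.normal(0.0, 1.0, size=150)            # small test-taker sample
true_b = 0.4                                      # difficulty used only to simulate responses
p_true = 1.0 / (1.0 + np.exp(-(theta - true_b)))
responses = rng.binomial(1, p_true)               # simulated 0/1 item responses

def neg_log_lik(b):
    # Negative log-likelihood of the responses for a candidate difficulty b
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

result = minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded")
print(f"estimated difficulty: {result.x:.3f}")
```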
Rotou, Ourania; Rupp, André A. – ETS Research Report Series, 2020
This research report describes the process of evaluating the "deployability" of automated scoring (AS) systems from the perspective of large-scale educational assessments in operational settings. It discusses a comprehensive psychometric evaluation that entails analyses taking into consideration the specific purpose…
Descriptors: Computer Assisted Testing, Scoring, Educational Assessment, Psychometrics
Puhan, Gautam; Moses, Tim; Grant, Mary; McHale, Fred – ETS Research Report Series, 2008
A single group (SG) equating design with nearly equivalent test forms (SiGNET) was developed by Grant (2006) to equate small-volume tests. The basis of this design is that examinees take two largely overlapping test forms within a single administration. The scored items for the operational form are divided into mini-tests called testlets.…
Descriptors: Data Collection, Equated Scores, Item Sampling, Sample Size
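The truncated abstract describes examinees taking two largely overlapping forms whose scored operational items are divided into testlets. A minimal, hypothetical sketch of that overlap follows; the item count, testlet size, and the single swapped-in testlet are invented for illustration and are not taken from the report.

```python
# Hypothetical illustration of the overlapping-forms idea: the operational form's
# scored items are split into small "testlets", and a second, largely overlapping
# form is assembled by replacing one testlet with new items.
operational_items = [f"item_{i:02d}" for i in range(1, 31)]   # 30 scored items
testlet_size = 6
testlets = [operational_items[i:i + testlet_size]
            for i in range(0, len(operational_items), testlet_size)]

new_testlet = [f"new_item_{i:02d}" for i in range(1, testlet_size + 1)]

form_a = [item for t in testlets for item in t]                     # operational form
form_b = [item for t in testlets[:-1] for item in t] + new_testlet  # overlapping form

overlap = len(set(form_a) & set(form_b)) / len(form_a)
print(f"Form A and Form B share {overlap:.0%} of their items")
```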