Kang, Hyeon-Ah; Zheng, Yi; Chang, Hua-Hua – Journal of Educational and Behavioral Statistics, 2020
With the widespread use of computers in modern assessment, online calibration has become increasingly popular as a way of replenishing an item pool. The present study discusses online calibration strategies for a joint model of responses and response times. The study proposes likelihood inference methods for item parameter estimation and evaluates…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Response Theory, Reaction Time
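The abstract does not specify the joint model, so the following is only a minimal sketch of likelihood-based calibration of a single new item under one common choice: a 2PL model for scored responses paired with a lognormal model for response times, with examinee ability and speed treated as known from prior calibration. All numbers and parameter names here are illustrative assumptions, not the authors' actual study design.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical: one new item seeded to n examinees whose ability (theta)
# and speed (tau) are already known from the operational pool.
n = 2000
theta = rng.normal(0.0, 1.0, n)   # ability
tau = rng.normal(0.0, 0.3, n)     # speed

# True item parameters to recover: a (discrimination), b (difficulty),
# alpha (time discrimination), beta (time intensity).
a_true, b_true, alpha_true, beta_true = 1.2, 0.3, 1.5, 1.0

p_true = 1 / (1 + np.exp(-a_true * (theta - b_true)))   # 2PL probability
x = rng.binomial(1, p_true)                             # scored responses
log_t = rng.normal(beta_true - tau, 1 / alpha_true)     # log response times

def neg_loglik(par):
    """Joint negative log-likelihood: Bernoulli responses + lognormal RTs."""
    a, b, alpha, beta = par
    p = 1 / (1 + np.exp(-a * (theta - b)))
    ll_resp = x * np.log(p) + (1 - x) * np.log(1 - p)
    ll_time = (np.log(alpha) - 0.5 * np.log(2 * np.pi)
               - 0.5 * (alpha * (log_t - (beta - tau))) ** 2)
    return -(ll_resp + ll_time).sum()

fit = minimize(neg_loglik, x0=[1.0, 0.0, 1.0, 0.5], method="L-BFGS-B",
               bounds=[(0.1, 3), (-3, 3), (0.1, 3), (-3, 3)])
a_hat, b_hat, alpha_hat, beta_hat = fit.x
```

With 2,000 seeded responses the four parameters are recovered closely; in an online setting the same likelihood would simply be re-maximized as each new batch of responses arrives.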
Kim, Sooyeon; Moses, Tim – ETS Research Report Series, 2016
The purpose of this study is to evaluate the extent to which item response theory (IRT) proficiency estimation methods are robust to the presence of aberrant responses under the "GRE"® General Test multistage adaptive testing (MST) design. To that end, a wide range of atypical response behaviors affecting as much as 10% of the test items…
Descriptors: Item Response Theory, Computation, Robustness (Statistics), Response Style (Tests)
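The kind of robustness check this abstract describes can be sketched in simulation: generate 2PL responses at a known ability, contaminate 10% of the items with random responding (mimicking rapid guessing), and compare maximum likelihood proficiency estimates with and without the aberrance. This does not reproduce the GRE MST design or the study's estimation methods; the test length, item parameters, and contamination mechanism are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

# Hypothetical 40-item test with known 2PL item parameters.
J = 40
a = rng.uniform(0.8, 2.0, J)   # discriminations
b = rng.normal(0.0, 1.0, J)    # difficulties
theta_true = 0.5

p = 1 / (1 + np.exp(-a * (theta_true - b)))
x = rng.binomial(1, p)

# Aberrant behavior: 10% of items answered at random (p = .5).
aberrant = rng.choice(J, size=J // 10, replace=False)
x_aberrant = x.copy()
x_aberrant[aberrant] = rng.binomial(1, 0.5, aberrant.size)

def mle_theta(resp):
    """Maximum likelihood proficiency estimate under the 2PL model."""
    def nll(theta):
        pj = 1 / (1 + np.exp(-a * (theta - b)))
        return -(resp * np.log(pj) + (1 - resp) * np.log(1 - pj)).sum()
    return minimize_scalar(nll, bounds=(-4, 4), method="bounded").x

clean_est = mle_theta(x)
aberrant_est = mle_theta(x_aberrant)
bias = aberrant_est - clean_est   # shift attributable to the aberrance
```

Repeating this over many replications (and over estimators such as biweight or Bayesian alternatives) is what a robustness evaluation of this kind quantifies.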
Foorman, Barbara R.; Petscher, Yaacov; Schatschneider, Chris – Florida Center for Reading Research, 2015
The FAIR-FS consists of computer-adaptive reading comprehension and oral language screening tasks that provide measures to track growth over time, as well as a Probability of Literacy Success (PLS) linked to grade-level performance (i.e., the 40th percentile) on the reading comprehension subtest of the Stanford Achievement Test (SAT-10) in the…
Descriptors: Reading Instruction, Screening Tests, Reading Comprehension, Oral Language
Partnership for Assessment of Readiness for College and Careers, 2016
The Partnership for Assessment of Readiness for College and Careers (PARCC) is a state-led consortium designed to create next-generation assessments that, compared to traditional K-12 assessments, more accurately measure student progress toward college and career readiness. The PARCC assessments are aligned to the Common Core State Standards…
Descriptors: Standardized Tests, Career Readiness, College Readiness, Test Validity
Glas, Cees A. W.; Vos, Hans J. – 1998
A version of sequential mastery testing is studied in which response behavior is modeled by an item response theory (IRT) model. First, a general theoretical framework is sketched that is based on a combination of Bayesian sequential decision theory and item response theory. A discussion follows on how IRT based sequential mastery testing can be…
Descriptors: Adaptive Testing, Bayesian Statistics, Item Response Theory, Mastery Tests