Kim, Sooyeon; Moses, Tim; Yoo, Hanwook – Journal of Educational Measurement, 2015
This study investigates the accuracy of item response theory (IRT) proficiency estimators under multistage testing (MST). We chose a two-stage MST design that includes four modules (one at Stage 1, three at Stage 2) and three difficulty paths (low, middle, high). We assembled various two-stage MST panels (i.e., forms) by manipulating two…
Descriptors: Comparative Analysis, Item Response Theory, Computation, Accuracy
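The two-stage MST design this abstract describes (one routing module at Stage 1, three difficulty modules at Stage 2) amounts to a simple routing rule on the Stage 1 score. A minimal sketch follows; the cut scores and module labels are hypothetical illustrations, not values from the study:

```python
# Hypothetical sketch of two-stage MST routing: one Stage 1 routing
# module, three Stage 2 modules (low, middle, high difficulty).
# The cut scores below are illustrative, not taken from the study.

def route_stage2(stage1_score: int, low_cut: int = 4, high_cut: int = 8) -> str:
    """Route an examinee to a Stage 2 module based on the Stage 1 score."""
    if stage1_score < low_cut:
        return "low"
    elif stage1_score < high_cut:
        return "middle"
    return "high"

print(route_stage2(2))  # below low_cut -> low-difficulty path
print(route_stage2(5))  # between cuts -> middle-difficulty path
print(route_stage2(9))  # at/above high_cut -> high-difficulty path
```

Each examinee thus follows one of three difficulty paths, which is what the study's panels vary when comparing proficiency estimators.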
Molnar, Gyongyver; Hodi, Agnes; Magyar, Andrea – AERA Online Paper Repository, 2016
Vocabulary knowledge assessment methods and instruments have gone through a significant evolution. Computer-based tests offer more opportunities than their paper-and-pencil counterparts; however, most digital vocabulary assessments are linear, and adaptive solutions in this domain are scarce. The aims of this study were to compare the effectiveness…
Descriptors: Adaptive Testing, Vocabulary Skills, Computer Assisted Testing, Student Evaluation
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook Henry – ETS Research Report Series, 2015
The purpose of this inquiry was to investigate the effectiveness of item response theory (IRT) proficiency estimators in terms of estimation bias and error under multistage testing (MST). We chose a 2-stage MST design in which 1 adaptation to the examinees' ability levels takes place. It includes 4 modules (1 at Stage 1, 3 at Stage 2) and 3 paths…
Descriptors: Item Response Theory, Computation, Statistical Bias, Error of Measurement
Coster, Wendy J.; Kramer, Jessica M.; Tian, Feng; Dooley, Meghan; Liljenquist, Kendra; Kao, Ying-Chia; Ni, Pengsheng – Autism: The International Journal of Research and Practice, 2016
The Pediatric Evaluation of Disability Inventory-Computer Adaptive Test is an alternative method for describing the adaptive function of children and youth with disabilities using a computer-administered assessment. This study evaluated the performance of the Pediatric Evaluation of Disability Inventory-Computer Adaptive Test with a national…
Descriptors: Autism, Pervasive Developmental Disorders, Computer Assisted Testing, Adaptive Testing
He, Wei; Reckase, Mark D. – Educational and Psychological Measurement, 2014
For computerized adaptive tests (CATs) to work well, they must have an item pool with a sufficient number of good-quality items. Many researchers have pointed out that, in developing item pools for CATs, not only is the item pool size important but also the distribution of item parameters and practical considerations such as content distribution…
Descriptors: Item Banks, Test Length, Computer Assisted Testing, Adaptive Testing