Soland, James; Kuhfeld, Megan; Rios, Joseph – Large-scale Assessments in Education, 2021
Low examinee effort is a major threat to valid uses of many test scores. Fortunately, several methods have been developed to detect noneffortful item responses, most of which use response times. To accurately identify noneffortful responses, one must set response time thresholds separating those responses from effortful ones. While other studies…
Descriptors: Reaction Time, Measurement, Response Style (Tests), Reading Tests

Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G. – Educational and Psychological Measurement, 2017
The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…
Descriptors: Testing, Performance, Prediction, Error of Measurement

Lee, HwaYoung; Dodd, Barbara G. – Educational and Psychological Measurement, 2012
This study investigated item exposure control procedures under various combinations of item pool characteristics and ability distributions in computerized adaptive testing based on the partial credit model. Three variables were manipulated: item pool characteristics (120 items for each of easy, medium, and hard item pools), two ability…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Ability

Moyer, Eric L.; Galindo, Jennifer L.; Dodd, Barbara G. – Educational and Psychological Measurement, 2012
Managing test specifications--both multiple nonstatistical constraints and flexibly defined constraints--has become an important part of designing item selection procedures for computerized adaptive tests (CATs) in achievement testing. This study compared the effectiveness of three procedures: constrained CAT, flexible modified constrained CAT,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Item Analysis

Frey, Andreas; Seitz, Nicki-Nils – Educational and Psychological Measurement, 2011
The usefulness of multidimensional adaptive testing (MAT) for the assessment of student literacy in the Programme for International Student Assessment (PISA) was examined within a real data simulation study. The responses of N = 14,624 students who participated in the PISA assessments of the years 2000, 2003, and 2006 in Germany were used to…
Descriptors: Adaptive Testing, Literacy, Academic Achievement, Achievement Tests

Yen, Yung-Chin; Ho, Rong-Guey; Laio, Wen-Wei; Chen, Li-Ju; Kuo, Ching-Chin – Applied Psychological Measurement, 2012
In a selected response test, aberrant responses such as careless errors and lucky guesses might cause error in ability estimation because these responses do not actually reflect the knowledge that examinees possess. In a computerized adaptive test (CAT), these aberrant responses could further cause serious estimation error due to dynamic item…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Response Style (Tests)

Cheng, Ying; Chang, Hua-Hua; Douglas, Jeffrey; Guo, Fanmin – Educational and Psychological Measurement, 2009
a-stratification is a method that utilizes items with small discrimination (a) parameters early in an exam and those with higher a values when more is learned about the ability parameter. It can achieve much better item usage than the maximum information criterion (MIC). To make a-stratification more practical and more widely applicable, a method…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection