Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 1
Since 2016 (last 10 years): 3
Since 2006 (last 20 years): 4
Descriptor
Computer Assisted Testing: 4
Interrater Reliability: 4
Item Response Theory: 4
Scoring: 3
Writing Evaluation: 2
Academic Achievement: 1
Accuracy: 1
Algebra: 1
Bias: 1
Biology: 1
College Students: 1
Author
Dogan, Nuri: 1
Engelhard, George, Jr.: 1
Foltz, Peter: 1
He, Tung-hsien: 1
Huynh, Huynh: 1
Kim, Do-Hong: 1
Rosenstein, Mark: 1
Uysal, Ibrahim: 1
Wind, Stefanie A.: 1
Wolfe, Edward W.: 1
Publication Type
Journal Articles: 4
Reports - Research: 4
Education Level
High Schools: 1
Higher Education: 1
Middle Schools: 1
Postsecondary Education: 1
Location
Taiwan: 1
Uysal, Ibrahim; Dogan, Nuri – International Journal of Assessment Tools in Education, 2021
Scoring constructed-response items can be highly difficult, time-consuming, and costly in practice. Improvements in computer technology have enabled automated scoring of constructed-response items. However, the application of automated scoring without an investigation of test equating can lead to serious problems. The goal of this study was to…
Descriptors: Computer Assisted Testing, Scoring, Item Response Theory, Test Format
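The abstract refers to test equating under item response theory. As a generic illustration (not necessarily the linking method used in this study), a mean-sigma transformation places one calibration, say the automated-scoring condition X, onto the scale of another, say the human-scoring condition Y:

A = \frac{\sigma(b_Y)}{\sigma(b_X)}, \qquad B = \mu(b_Y) - A\,\mu(b_X), \qquad
\theta^{*} = A\theta + B, \qquad b_i^{*} = A b_i + B, \qquad a_i^{*} = a_i / A,

where b_X and b_Y are difficulty estimates of the common items under the two scoring conditions and the starred values are the rescaled person and item parameters.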
Wind, Stefanie A.; Wolfe, Edward W.; Engelhard, George, Jr.; Foltz, Peter; Rosenstein, Mark – International Journal of Testing, 2018
Automated essay scoring engines (AESEs) are becoming increasingly popular as an efficient method for performance assessments in writing, including many language assessments that are used worldwide. Before they can be used operationally, AESEs must be "trained" using machine-learning techniques that incorporate human ratings. However, the…
Descriptors: Computer Assisted Testing, Essay Tests, Writing Evaluation, Scoring
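Training an AESE on human ratings, as described above, amounts to fitting a model that maps essay features to the scores assigned by human raters. Below is a minimal sketch of that idea in Python, using a toy feature set and a ridge regression scorer; the features, data, and model are illustrative assumptions, not the engine examined in the study.

# Minimal sketch: fit a toy essay-scoring model to human ratings.
import numpy as np
from sklearn.linear_model import Ridge

def features(essay: str) -> list[float]:
    words = essay.split()
    return [
        float(len(words)),                                       # essay length
        float(np.mean([len(w) for w in words])) if words else 0.0,  # mean word length
        float(len(set(w.lower() for w in words))),               # vocabulary size
    ]

essays = ["The experiment shows ...", "Cats are nice.", "A longer, more developed argument ..."]
human_scores = [4.0, 2.0, 5.0]   # ratings assigned by trained human raters

X = np.array([features(e) for e in essays])
model = Ridge(alpha=1.0).fit(X, human_scores)

# Score a new essay with the trained model.
print(model.predict(np.array([features("A new essay to be scored.")])))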
He, Tung-hsien – SAGE Open, 2019
This study employed a mixed-design approach and the Many-Facet Rasch Measurement (MFRM) framework to investigate whether rater bias occurred between the onscreen scoring (OSS) mode and the paper-based scoring (PBS) mode. Nine human raters analytically marked scanned scripts and paper scripts using a six-category (i.e., six-criterion) rating…
Descriptors: Computer Assisted Testing, Scoring, Item Response Theory, Essays
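The Many-Facet Rasch Measurement framework named in this abstract expresses each rating as a sum of facet effects. In a common rating-scale formulation (a general illustration, with a scoring-mode facet included as an assumption about how such a design might be parameterized), the log-odds of adjacent rating categories are

\log \frac{P_{nmijk}}{P_{nmij(k-1)}} = \theta_n - \mu_m - \delta_i - \lambda_j - \tau_k,

where \theta_n is examinee ability, \mu_m the severity associated with scoring mode m (onscreen vs. paper-based), \delta_i the difficulty of criterion i, \lambda_j the severity of rater j, and \tau_k the threshold for category k. Rater-by-mode bias can then be examined by adding an interaction term between rater and mode.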
Kim, Do-Hong; Huynh, Huynh – Journal of Technology, Learning, and Assessment, 2007
This study examined comparability of student scores obtained from computerized and paper-and-pencil formats of the large-scale statewide end-of-course (EOC) examinations in the two subject areas of Algebra and Biology. Evidence in support of comparability of computerized and paper-based tests was sought by examining scale scores, item parameter…
Descriptors: Computer Assisted Testing, Measures (Individuals), Biology, Algebra
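Comparability evidence of the sort described here is typically framed with an item response model. As a generic example (the snippet does not state which model the authors fit), the three-parameter logistic item response function is

P_i(\theta) = c_i + \frac{1 - c_i}{1 + \exp[-D a_i (\theta - b_i)]},

and mode comparability can be examined by checking whether the discrimination a_i, difficulty b_i, and guessing c_i estimates from computerized and paper-and-pencil administrations agree once they are placed on a common scale.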