Hassler Hallstedt, Martin; Ghaderi, Ata – Educational Assessment, 2018
Tablets can be used to facilitate systematic testing of academic skills. Yet, when using validated paper tests on a tablet, comparability between the media must be established. Comparability between a tablet and a paper version of a basic math skills test (HRT: Heidelberger Rechen Test 1-4) was investigated. Five samples with second and third…
Descriptors: Handheld Devices, Scores, Test Format, Computer Assisted Testing
Mao, Liyang; Liu, Ou Lydia; Roohr, Katrina; Belur, Vinetha; Mulholland, Matthew; Lee, Hee-Sun; Pallant, Amy – Educational Assessment, 2018
Scientific argumentation is one of the core practices for teachers to implement in science classrooms. We developed a computer-based formative assessment to support students' construction and revision of scientific arguments. The assessment is built upon automated scoring of students' arguments and provides feedback to students and teachers.…
Descriptors: Computer Assisted Testing, Science Tests, Scoring, Automation
Kim, Do-Hong; Huynh, Huynh – Educational Assessment, 2010
This study investigated whether scores obtained from the online and paper-and-pencil administrations of the statewide end-of-course English test were equivalent for students with and without disabilities. Score comparability was evaluated by examining equivalence of factor structure (measurement invariance) and differential item and bundle…
Descriptors: Computer Assisted Testing, Language Tests, English, Scores
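The differential item functioning (DIF) analysis described in this entry is commonly operationalized with a Mantel-Haenszel common odds ratio, computed over examinees matched on total score. The sketch below is an illustrative assumption, not the procedure or data from the Kim and Huynh study: the function name, the example tables, and the stratification scheme are all hypothetical.

```python
# Hypothetical sketch: Mantel-Haenszel common odds ratio for flagging
# uniform DIF between two administration modes (e.g., online vs. paper),
# stratified by matched total-score level. Data are invented for
# illustration; they do not come from the studies listed here.

def mantel_haenszel_odds_ratio(strata):
    """strata: list of 2x2 tables, one per matched score level, as
    ((ref_correct, ref_wrong), (focal_correct, focal_wrong))."""
    num = den = 0.0
    for (a, b), (c, d) in strata:
        t = a + b + c + d
        if t == 0:
            continue          # skip empty strata
        num += a * d / t      # reference-correct x focal-wrong
        den += b * c / t      # reference-wrong x focal-correct
    return num / den          # ~1.0 suggests no uniform DIF on this item

# Example: three score strata with nearly identical item performance
# across the two modes, so the pooled odds ratio stays close to 1.
tables = [((40, 10), (38, 12)),
          ((30, 20), (29, 21)),
          ((15, 35), (14, 36))]
ratio = mantel_haenszel_odds_ratio(tables)
print(round(ratio, 2))  # values far from 1 would flag the item for review
```

In practice the odds ratio is usually transformed to the ETS delta scale and paired with a chi-square test before an item is flagged; this sketch shows only the pooled estimate.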
Kolen, Michael J. – Educational Assessment, 1999
Develops a conceptual framework that addresses score comparability for performance assessments, adaptive tests, paper-and-pencil tests, and alternate item pools for computerized tests. Outlines testing situation aspects that might threaten score comparability and describes procedures for evaluating the degree of score comparability. Suggests ways…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Performance Based Assessment
Pomplun, Mark; Ritchie, Timothy; Custer, Michael – Educational Assessment, 2006
This study investigated factors related to score differences on computerized and paper-and-pencil versions of a series of primary K-3 reading tests. Factors studied included item and student characteristics. The results suggest that the score differences were more related to student than item characteristics. These student characteristics include…
Descriptors: Reading Tests, Student Characteristics, Response Style (Tests), Socioeconomic Status