Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 0
Since 2016 (last 10 years) | 0
Since 2006 (last 20 years) | 5
Descriptor
Comparative Testing | 5
Educational Testing | 5
Test Format | 5
Test Bias | 4
Test Items | 4
Computer Assisted Testing | 3
Educational Technology | 3
Methods Research | 3
Printed Materials | 3
Student Evaluation | 3
Test Content | 3
Author
Allen, Nancy | 1
Bennett, Randy Elliott | 1
Drake, Samuel | 1
Green, Sylvia | 1
Gu, Lixiong | 1
Horkay, Nancy | 1
Johnson, Martin | 1
Kaplan, Bruce | 1
Kim, Sooyeon | 1
Mazerik, Matthew B. | 1
McHale, Frederick | 1
Publication Type
Journal Articles | 4
Reports - Research | 3
Dissertations/Theses -… | 1
Reports - Evaluative | 1
Tests/Questionnaires | 1
Education Level
Elementary Secondary Education | 3
Elementary Education | 2
Grade 4 | 1
Grade 5 | 1
Grade 8 | 1
Higher Education | 1
Postsecondary Education | 1
Location
California | 1
Assessments and Surveys
Graduate Record Examinations | 1
Kim, Sooyeon; Walker, Michael E.; McHale, Frederick – Journal of Educational Measurement, 2010
In this study we examined variations of the nonequivalent groups equating design for tests containing both multiple-choice (MC) and constructed-response (CR) items to determine which design was most effective in producing equivalent scores across the two tests to be equated. Using data from a large-scale exam, this study investigated the use of…
Descriptors: Measures (Individuals), Scoring, Equated Scores, Test Bias
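The Kim, Walker, and McHale entry concerns equating under a nonequivalent-groups (NEAT) design, where two examinee groups take different test forms plus a common anchor. As a minimal illustrative sketch (not the study's own method, which compares several design variations), chained linear equating links form X to the anchor in one group, then the anchor to form Y in the other; all function and variable names here are assumptions for illustration:

```python
import numpy as np

def linear_link(scores_from, scores_to):
    """Linear function mapping the scale of scores_from onto scores_to
    by matching means and standard deviations."""
    m1, s1 = scores_from.mean(), scores_from.std(ddof=1)
    m2, s2 = scores_to.mean(), scores_to.std(ddof=1)
    return lambda x: m2 + (s2 / s1) * (x - m1)

def chained_linear_equate(x, x_p, v_p, v_q, y_q):
    """Chained linear equating under a NEAT design:
    form X -> anchor V (estimated in group P, arrays x_p and v_p),
    then anchor V -> form Y (estimated in group Q, arrays v_q and y_q)."""
    x_to_v = linear_link(x_p, v_p)   # link X scale to anchor scale in P
    v_to_y = linear_link(v_q, y_q)   # link anchor scale to Y scale in Q
    return v_to_y(x_to_v(x))

# Tiny synthetic demo: group P took form X and anchor V,
# group Q took anchor V and form Y.
x_p = np.array([0.0, 1.0, 2.0, 3.0])
v_p = np.array([10.0, 12.0, 14.0, 16.0])
v_q = np.array([10.0, 12.0, 14.0, 16.0])
y_q = np.array([100.0, 110.0, 120.0, 130.0])
equated = chained_linear_equate(np.array([3.0]), x_p, v_p, v_q, y_q)
```

With these toy numbers, a form-X score of 3 chains through the anchor to the top of the form-Y scale. Mixed MC/CR tests, as in the study, complicate the choice of anchor and design, which is what the paper evaluates.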
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to observed differential item functioning (DIF) between computer- and paper-based test delivery media. Examinees responded to 60 quantitative items similar to those found on the GRE general test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation
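The Gu, Drake, and Wolfe entry screens items for DIF across delivery media. The standard screening statistic for a dichotomous item is the Mantel-Haenszel common odds ratio, computed over strata of a matching variable such as total score. This sketch is illustrative only, not the study's procedure; the function name and the use of total score as the matching variable are assumptions:

```python
import numpy as np

def mantel_haenszel_dif(correct, group, total):
    """Mantel-Haenszel DIF statistic for one dichotomous item.

    correct: 0/1 item responses
    group:   0 = reference group (e.g. paper), 1 = focal group (e.g. computer)
    total:   matching variable, e.g. total test score

    Returns (alpha, delta): the common odds ratio and its ETS delta-scale
    transform (delta near 0 means negligible DIF).
    """
    correct = np.asarray(correct, dtype=bool)
    group = np.asarray(group)
    total = np.asarray(total)
    num = den = 0.0
    for t in np.unique(total):            # one 2x2 table per score stratum
        ref = (total == t) & (group == 0)
        foc = (total == t) & (group == 1)
        a = correct[ref].sum()            # reference correct
        b = ref.sum() - a                 # reference incorrect
        c = correct[foc].sum()            # focal correct
        d = foc.sum() - c                 # focal incorrect
        n = ref.sum() + foc.sum()
        if n > 0:
            num += a * d / n
            den += b * c / n
    alpha = num / den                     # assumes den > 0; guard in real use
    delta = -2.35 * np.log(alpha)
    return alpha, delta
```

When the two groups answer the item at the same rate within every stratum, alpha is exactly 1 and delta is 0; ETS flags items with larger absolute delta values for review.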
Mazerik, Matthew B. – Online Submission, 2006
The mean scores of English Language Learners (ELL) and English Only (EO) students in 4th and 5th grade (N = 110) on the teacher-administered Grammar Skills Test were examined for differences in participants' scores on assessments containing single-step directions and assessments containing multiple-step directions. The results indicated no…
Descriptors: Second Language Learning, Grade 5, Language Proficiency, Educational Testing
Johnson, Martin; Green, Sylvia – Journal of Technology, Learning, and Assessment, 2006
The transition from paper-based to computer-based assessment raises a number of important issues about how mode might affect children's performance and question answering strategies. In this project 104 eleven-year-olds were given two sets of matched mathematics questions, one set on-line and the other on paper. Facility values were analyzed to…
Descriptors: Student Attitudes, Computer Assisted Testing, Program Effectiveness, Elementary School Students
Horkay, Nancy; Bennett, Randy Elliott; Allen, Nancy; Kaplan, Bruce; Yan, Fred – Journal of Technology, Learning, and Assessment, 2006
This study investigated the comparability of scores for paper and computer versions of a writing test administered to eighth grade students. Two essay prompts were given on paper to a nationally representative sample as part of the 2002 main NAEP writing assessment. The same two essay prompts were subsequently administered on computer to a second…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Program Effectiveness