Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 3 |
Since 2016 (last 10 years) | 6 |
Since 2006 (last 20 years) | 8 |
Descriptor
Computer Assisted Testing | 9 |
Interrater Reliability | 9 |
Test Format | 9 |
College Students | 3 |
Computer Software | 3 |
Essay Tests | 3 |
Foreign Countries | 3 |
Scores | 3 |
Scoring | 3 |
Construct Validity | 2 |
Difficulty Level | 2 |
Source
International Journal of… | 2 |
Journal of Educational… | 2 |
ALT-J: Research in Learning… | 1 |
ETS Research Report Series | 1 |
Journal of Educational Data… | 1 |
Online Submission | 1 |
Author
Aydin, Belgin | 1 |
Burk, John | 1 |
Casabianca, Jodi M. | 1 |
Clariana, Roy B. | 1 |
Dogan, Nuri | 1 |
Edward Paul Getman | 1 |
Isler, Cemre | 1 |
Kim, Kerry J. | 1 |
Lawless, René R. | 1 |
Ma, Yanxia A. | 1 |
Manalo, Jonathan R. | 1 |
Publication Type
Journal Articles | 7 |
Reports - Research | 5 |
Reports - Descriptive | 2 |
Dissertations/Theses -… | 1 |
Reports - Evaluative | 1 |
Speeches/Meeting Papers | 1 |
Education Level
Higher Education | 5 |
Postsecondary Education | 5 |
Elementary Education | 1 |
Location
Louisiana | 1 |
Turkey | 1 |
United Kingdom (Scotland) | 1 |
Laws, Policies, & Programs
Pell Grant Program | 1 |
Assessments and Surveys
Test of English as a Foreign… | 2 |
ACT Assessment | 1 |
McCaffrey, Daniel F.; Casabianca, Jodi M.; Ricker-Pedley, Kathryn L.; Lawless, René R.; Wendler, Cathy – ETS Research Report Series, 2022
This document describes a set of best practices for developing, implementing, and maintaining the critical process of scoring constructed-response tasks. These practices address the use of both human raters and automated scoring systems in the scoring process and cover the scoring of written, spoken, performance, or multimodal responses.…
Descriptors: Best Practices, Scoring, Test Format, Computer Assisted Testing
Uysal, Ibrahim; Dogan, Nuri – International Journal of Assessment Tools in Education, 2021
Scoring constructed-response items can be highly difficult, time-consuming, and costly in practice. Improvements in computer technology have enabled automated scoring of constructed-response items. However, the application of automated scoring without an investigation of test equating can lead to serious problems. The goal of this study was to…
Descriptors: Computer Assisted Testing, Scoring, Item Response Theory, Test Format
Isler, Cemre; Aydin, Belgin – International Journal of Assessment Tools in Education, 2021
This study describes the development and validation of the Computerized Oral Proficiency Test of English as a Foreign Language (COPTEFL). The test aims to assess the speaking proficiency levels of students in Anadolu University School of Foreign Languages (AUSFL). For this purpose, three monologic tasks were developed based on the Global…
Descriptors: Test Construction, Construct Validity, Interrater Reliability, Scores
Smolinsky, Lawrence; Marx, Brian D.; Olafsson, Gestur; Ma, Yanxia A. – Journal of Educational Computing Research, 2020
Computer-based testing is an expanding use of technology that offers advantages to teachers and students. We studied Calculus II classes for science, technology, engineering, and mathematics majors using different testing modes. Three sections totaling 324 students used, respectively, paper-and-pencil testing, computer-based testing, and both. Computer tests gave…
Descriptors: Test Format, Computer Assisted Testing, Paper (Material), Calculus
Kim, Kerry J.; Meir, Eli; Pope, Denise S.; Wendel, Daniel – Journal of Educational Data Mining, 2017
Computerized classification of student answers offers the possibility of instant feedback and improved learning. Open response (OR) questions provide greater insight into student thinking and understanding than more constrained multiple choice (MC) questions, but development of automated classifiers is more difficult, often requiring training a…
Descriptors: Classification, Computer Assisted Testing, Multiple Choice Tests, Test Format
Edward Paul Getman – Online Submission, 2020
Despite calls for engaging assessments targeting young language learners (YLLs) between 8 and 13 years old, what makes assessment tasks engaging and how such task characteristics affect measurement quality have not been well studied empirically. Furthermore, there has been a dearth of validity research about technology-enhanced speaking tests for…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Learner Engagement
Mogey, Nora; Paterson, Jessie; Burk, John; Purcell, Michael – ALT-J: Research in Learning Technology, 2010
Students at the University of Edinburgh do almost all their work on computers, but at the end of the semester they are examined by handwritten essays. Intuitively it would be appealing to allow students the choice of handwriting or typing, but this raises the concern that it might not be "fair": that the choice a student makes,…
Descriptors: Handwriting, Essay Tests, Interrater Reliability, Grading
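For readers unfamiliar with the interrater reliability statistics that several of these studies report, the following toy sketch (not taken from any of the cited papers; the grades and the choice of Cohen's kappa are illustrative assumptions) shows how chance-corrected agreement between two essay raters could be computed in Python with scikit-learn.

# Toy illustration of interrater reliability: Cohen's kappa between two
# hypothetical raters who grade the same six essays (all data invented).
from sklearn.metrics import cohen_kappa_score

rater_a = ["A", "B", "B", "C", "A", "B"]  # grades assigned by rater A
rater_b = ["A", "B", "C", "C", "A", "B"]  # grades assigned by rater B

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance-level agreement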
Manalo, Jonathan R.; Wolfe, Edward W. – 2000
Recently, the Test of English as a Foreign Language (TOEFL) changed by adding a writing section that gives examinees the option of composing their responses in either computer or handwritten format. Unfortunately, this may introduce several potential sources of error that could reduce the reliability and validity of the scores. The seriousness…
Descriptors: Computer Assisted Testing, Essay Tests, Evaluators, Handwriting
Clariana, Roy B.; Wallace, Patricia – Journal of Educational Computing Research, 2007
This proof-of-concept investigation describes a computer-based approach for deriving the knowledge structure of individuals and of groups from their written essays, and considers the convergent criterion-related validity of the computer-based scores relative to human rater essay scores and multiple-choice test scores. After completing a…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Construct Validity, Cognitive Structures
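As a rough illustration of the convergent criterion-related validity examined in the entry above, the sketch below (purely hypothetical numbers, not drawn from the study) treats the validity coefficient as the Pearson correlation between computer-derived essay scores and human rater scores for the same students.

# Hypothetical sketch: convergent criterion-related validity read as the
# Pearson correlation between automated and human essay scores (data invented).
from statistics import mean, stdev

computer_scores = [3.2, 4.1, 2.8, 3.9, 4.5, 3.0]  # hypothetical automated essay scores
human_scores = [3.0, 4.3, 2.5, 4.0, 4.4, 3.3]     # hypothetical human rater scores

def pearson_r(x, y):
    # Sample Pearson correlation: covariance divided by the product of
    # the sample standard deviations.
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

print(f"Convergent validity coefficient r = {pearson_r(computer_scores, human_scores):.2f}")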