Publication Date
In 2025 (1)
Since 2024 (2)
Since 2021, last 5 years (8)
Since 2016, last 10 years (24)
Descriptor
Computer Assisted Testing (24)
Essay Tests (24)
Scoring (14)
Automation (12)
Writing Evaluation (12)
Scores (10)
Accuracy (6)
Foreign Countries (6)
Writing Tests (6)
Correlation (5)
Undergraduate Students (5)
Author
Litman, Diane (2)
Rupp, André A. (2)
Zhang, Mo (2)
Abbasian, Gholam-Reza (1)
Almusharraf, Norah (1)
Alotaibi, Hind (1)
Aota, Shoma (1)
Bai, Lifang (1)
Behizadeh, Nadia (1)
Bejar, Isaac I. (1)
Belur, Vinetha (1)
Publication Type
Journal Articles (21)
Reports - Research (20)
Reports - Descriptive (3)
Speeches/Meeting Papers (2)
Guides - General (1)
Reports - Evaluative (1)
Audience
Practitioners (1)
Assessments and Surveys
Graduate Record Examinations (2)
Raven Progressive Matrices (1)
Test of English as a Foreign… (1)
Cleophas, Catherine; Hönnige, Christoph; Meisel, Frank; Meyer, Philipp – INFORMS Transactions on Education, 2023
As the COVID-19 pandemic motivated a shift to virtual teaching, exams have increasingly moved online too. Detecting cheating through collusion is not easy when tech-savvy students take online exams at home on their own devices. Such online at-home exams may tempt students to collude and share materials and answers. However, online exams'…
Descriptors: Computer Assisted Testing, Cheating, Identification, Essay Tests
Santi Lestari – Research Matters, 2024
Despite the increasing ubiquity of computer-based tests, many general qualifications examinations remain in a paper-based mode. Insufficient and unequal digital provision across schools is often identified as a major barrier to a full adoption of computer-based exams for general qualifications. One way to overcome this barrier is a gradual…
Descriptors: Keyboarding (Data Entry), Handwriting, Test Format, Comparative Analysis
Almusharraf, Norah; Alotaibi, Hind – Technology, Knowledge and Learning, 2023
Evaluating written texts is believed to be a time-consuming process that can lack consistency and objectivity. Automated essay scoring (AES) can provide solutions to some of the limitations of human scoring. This research aimed to evaluate the performance of one AES system, Grammarly, in comparison to human raters. Both approaches' performances…
Descriptors: Writing Evaluation, Writing Tests, Essay Tests, Essays
Wijanarko, Bambang Dwi; Heryadi, Yaya; Toba, Hapnes; Budiharto, Widodo – Education and Information Technologies, 2021
Automated question generation is the task of generating questions from structured or unstructured data. The increasing popularity of online learning in recent years has given momentum to automated question generation in the education field for facilitating the learning process, learning material retrieval, and computer-based testing. This paper reports on the…
Descriptors: Foreign Countries, Undergraduate Students, Engineering Education, Computer Software
Zhang, Haoran; Litman, Diane – Grantee Submission, 2020
While automated essay scoring (AES) can reliably grade essays at scale, automated writing evaluation (AWE) additionally provides formative feedback to guide essay revision. However, a neural AES typically does not provide useful feature representations for supporting AWE. This paper presents a method for linking AWE and neural AES, by extracting…
Descriptors: Computer Assisted Testing, Scoring, Essay Tests, Writing Evaluation
Chan, Kinnie Kin Yee; Bond, Trevor; Yan, Zi – Language Testing, 2023
We investigated the relationship between the scores assigned by an Automated Essay Scoring (AES) system, the Intelligent Essay Assessor (IEA), and grades allocated by trained, professional human raters to English essay writing by instigating two procedures novel to written-language assessment: the logistic transformation of AES raw scores into…
Descriptors: Computer Assisted Testing, Essays, Scoring, Scores
Okky Riswandha Imawan; Heri Retnawati; Haryanto; Raoda Ismail – Journal of Education and e-Learning Research, 2025
This study explores the challenges of implementing computerized adaptive testing (CAT) for mathematics assessment among prospective elementary school teachers in Indonesia. It aims to describe (1) assessment practices of mathematics lecturers and (2) challenges in adopting CAT. Using a qualitative phenomenological approach, data were collected…
Descriptors: Barriers, Computer Assisted Testing, Mathematics Tests, Preservice Teachers
Reinertsen, Nathanael – English in Australia, 2018
The difference in how humans read and how Automated Essay Scoring (AES) systems process written language leads to a situation where a portion of student responses will be comprehensible to human markers, but unable to be parsed by AES systems. This paper examines a number of pieces of student writing that were marked by trained human markers, but…
Descriptors: Qualitative Research, Writing Evaluation, Essay Tests, Computer Assisted Testing
Wind, Stefanie A.; Wolfe, Edward W.; Engelhard, George, Jr.; Foltz, Peter; Rosenstein, Mark – International Journal of Testing, 2018
Automated essay scoring engines (AESEs) are becoming increasingly popular as an efficient method for performance assessments in writing, including many language assessments that are used worldwide. Before they can be used operationally, AESEs must be "trained" using machine-learning techniques that incorporate human ratings. However, the…
Descriptors: Computer Assisted Testing, Essay Tests, Writing Evaluation, Scoring
Correnti, Richard; Matsumura, Lindsay Clare; Wang, Elaine; Litman, Diane; Rahimi, Zahra; Kisa, Zahid – Reading Research Quarterly, 2020
Despite the importance of analytic text-based writing, relatively little is known about how to teach this important skill. A persistent barrier to conducting research that would provide insight on best practices for teaching this form of writing is a lack of outcome measures that assess students' analytic text-based writing development and that…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Scoring
Choi, Ikkyu; Hao, Jiangang; Deane, Paul; Zhang, Mo – ETS Research Report Series, 2021
"Biometrics" are physical or behavioral human characteristics that can be used to identify a person. It is widely known that keystroke or typing dynamics for short, fixed texts (e.g., passwords) could serve as a behavioral biometric. In this study, we investigate whether keystroke data from essay responses can lead to a reliable…
Descriptors: Accuracy, High Stakes Tests, Writing Tests, Benchmarking
Wilson, Joshua; Chen, Dandan; Sandbank, Micheal P.; Hebert, Michael – Journal of Educational Psychology, 2019
The present study examined issues pertaining to the reliability of writing assessment in the elementary grades, and among samples of struggling and nonstruggling writers. The present study also extended nascent research on the reliability and the practical applications of automated essay scoring (AES) systems in Response to Intervention frameworks…
Descriptors: Computer Assisted Testing, Automation, Scores, Writing Tests
Nazerian, Samaneh; Abbasian, Gholam-Reza; Mohseni, Ahmad – Cogent Education, 2021
Despite growing interest in the studies on Zone of Proximal Development (ZPD), its operation in the forms of individualized and group-wide has been controversial. To cast some empirical light on the issue, this study was designed to study the applicability of the two scenarios of ZPD-based instructions to the writing accuracy of two levels of…
Descriptors: Sociocultural Patterns, Second Language Learning, Second Language Instruction, English (Second Language)
Zimmerman, Whitney Alicia; Kang, Hyun Bin; Kim, Kyung; Gao, Mengzhao; Johnson, Glenn; Clariana, Roy; Zhang, Fan – Journal of Statistics Education, 2018
Over two semesters short essay prompts were developed for use with the Graphical Interface for Knowledge Structure (GIKS), an automated essay scoring system. Participants were students in an undergraduate-level online introductory statistics course. The GIKS compares students' writing samples with an expert's to produce keyword occurrence and…
Descriptors: Undergraduate Students, Introductory Courses, Statistics, Computer Assisted Testing
Breyer, F. Jay; Rupp, André A.; Bridgeman, Brent – ETS Research Report Series, 2017
In this research report, we present an empirical argument for the use of a contributory scoring approach for the 2-essay writing assessment of the analytical writing section of the "GRE"® test in which human and machine scores are combined for score creation at the task and section levels. The approach was designed to replace a currently…
Descriptors: College Entrance Examinations, Scoring, Essay Tests, Writing Evaluation