Showing 1 to 15 of 79 results
Peer reviewed
Almusharraf, Norah; Alotaibi, Hind – Technology, Knowledge and Learning, 2023
Evaluating written texts is believed to be a time-consuming process that can lack consistency and objectivity. Automated essay scoring (AES) can provide solutions to some of the limitations of human scoring. This research aimed to evaluate the performance of one AES system, Grammarly, in comparison to human raters. Both approaches' performances…
Descriptors: Writing Evaluation, Writing Tests, Essay Tests, Essays
Peer reviewed
Chan, Kinnie Kin Yee; Bond, Trevor; Yan, Zi – Language Testing, 2023
We investigated the relationship between the scores assigned by an Automated Essay Scoring (AES) system, the Intelligent Essay Assessor (IEA), and grades allocated by trained, professional human raters to English essay writing by instigating two procedures novel to written-language assessment: the logistic transformation of AES raw scores into…
Descriptors: Computer Assisted Testing, Essays, Scoring, Scores
Peer reviewed
Cleophas, Catherine; Hönnige, Christoph; Meisel, Frank; Meyer, Philipp – INFORMS Transactions on Education, 2023
As the COVID-19 pandemic motivated a shift to virtual teaching, exams have increasingly moved online too. Detecting cheating through collusion is not easy when tech-savvy students take online exams at home and on their own devices. Such online at-home exams may tempt students to collude and share materials and answers. However, online exams'…
Descriptors: Computer Assisted Testing, Cheating, Identification, Essay Tests
Santi Lestari – Research Matters, 2024
Despite the increasing ubiquity of computer-based tests, many general qualifications examinations remain in a paper-based mode. Insufficient and unequal digital provision across schools is often identified as a major barrier to the full adoption of computer-based exams for general qualifications. One way to overcome this barrier is a gradual…
Descriptors: Keyboarding (Data Entry), Handwriting, Test Format, Comparative Analysis
Peer reviewed
Wijanarko, Bambang Dwi; Heryadi, Yaya; Toba, Hapnes; Budiharto, Widodo – Education and Information Technologies, 2021
Automated question generation is the task of generating questions from structured or unstructured data. The increasing popularity of online learning in recent years has given momentum to automated question generation in the education field for facilitating the learning process, learning-material retrieval, and computer-based testing. This paper reports on the…
Descriptors: Foreign Countries, Undergraduate Students, Engineering Education, Computer Software
Peer reviewed
Full text PDF available on ERIC
Zhang, Haoran; Litman, Diane – Grantee Submission, 2020
While automated essay scoring (AES) can reliably grade essays at scale, automated writing evaluation (AWE) additionally provides formative feedback to guide essay revision. However, a neural AES typically does not provide useful feature representations for supporting AWE. This paper presents a method for linking AWE and neural AES, by extracting…
Descriptors: Computer Assisted Testing, Scoring, Essay Tests, Writing Evaluation
Peer reviewed
Reinertsen, Nathanael – English in Australia, 2018
The difference between how humans read and how Automated Essay Scoring (AES) systems process written language leads to a situation in which a portion of student responses will be comprehensible to human markers but unparseable by AES systems. This paper examines a number of pieces of student writing that were marked by trained human markers, but…
Descriptors: Qualitative Research, Writing Evaluation, Essay Tests, Computer Assisted Testing
Peer reviewed
Full text PDF available on ERIC
Sato, Takeshi; Ogura, Masa'aki; Aota, Shoma; Burden, Tyler – Research-publishing.net, 2018
This study examines the effectiveness of an automated translation chatbot used in online interactions which consequently could enhance second/foreign language (L2) competence. Based on the sociocultural perspectives of learning, such as communication to recognize the difference from others and to be involved in sense-making processes, this study…
Descriptors: Automation, Computer Mediated Communication, Educational Technology, Computer Assisted Instruction
Peer reviewed
Wind, Stefanie A.; Wolfe, Edward W.; Engelhard, George, Jr.; Foltz, Peter; Rosenstein, Mark – International Journal of Testing, 2018
Automated essay scoring engines (AESEs) are becoming increasingly popular as an efficient method for performance assessments in writing, including many language assessments that are used worldwide. Before they can be used operationally, AESEs must be "trained" using machine-learning techniques that incorporate human ratings. However, the…
Descriptors: Computer Assisted Testing, Essay Tests, Writing Evaluation, Scoring
Peer reviewed
Correnti, Richard; Matsumura, Lindsay Clare; Wang, Elaine; Litman, Diane; Rahimi, Zahra; Kisa, Zahid – Reading Research Quarterly, 2020
Despite the importance of analytic text-based writing, relatively little is known about how to teach this important skill. A persistent barrier to conducting research that would provide insight on best practices for teaching this form of writing is a lack of outcome measures that assess students' analytic text-based writing development and that…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Scoring
Peer reviewed
Full text PDF available on ERIC
Choi, Ikkyu; Hao, Jiangang; Deane, Paul; Zhang, Mo – ETS Research Report Series, 2021
"Biometrics" are physical or behavioral human characteristics that can be used to identify a person. It is widely known that keystroke or typing dynamics for short, fixed texts (e.g., passwords) could serve as a behavioral biometric. In this study, we investigate whether keystroke data from essay responses can lead to a reliable…
Descriptors: Accuracy, High Stakes Tests, Writing Tests, Benchmarking
Peer reviewed
Wilson, Joshua; Chen, Dandan; Sandbank, Micheal P.; Hebert, Michael – Journal of Educational Psychology, 2019
The present study examined issues pertaining to the reliability of writing assessment in the elementary grades, and among samples of struggling and nonstruggling writers. The present study also extended nascent research on the reliability and the practical applications of automated essay scoring (AES) systems in Response to Intervention frameworks…
Descriptors: Computer Assisted Testing, Automation, Scores, Writing Tests
Peer reviewed
Nazerian, Samaneh; Abbasian, Gholam-Reza; Mohseni, Ahmad – Cogent Education, 2021
Despite growing interest in studies on the Zone of Proximal Development (ZPD), its operationalization in individualized and group-wide forms has been controversial. To cast some empirical light on the issue, this study was designed to examine the applicability of the two scenarios of ZPD-based instruction to the writing accuracy of two levels of…
Descriptors: Sociocultural Patterns, Second Language Learning, Second Language Instruction, English (Second Language)
Peer reviewed
Zimmerman, Whitney Alicia; Kang, Hyun Bin; Kim, Kyung; Gao, Mengzhao; Johnson, Glenn; Clariana, Roy; Zhang, Fan – Journal of Statistics Education, 2018
Over two semesters short essay prompts were developed for use with the Graphical Interface for Knowledge Structure (GIKS), an automated essay scoring system. Participants were students in an undergraduate-level online introductory statistics course. The GIKS compares students' writing samples with an expert's to produce keyword occurrence and…
Descriptors: Undergraduate Students, Introductory Courses, Statistics, Computer Assisted Testing
Peer reviewed
Full text PDF available on ERIC
Breyer, F. Jay; Rupp, André A.; Bridgeman, Brent – ETS Research Report Series, 2017
In this research report, we present an empirical argument for the use of a contributory scoring approach for the 2-essay writing assessment of the analytical writing section of the "GRE"® test in which human and machine scores are combined for score creation at the task and section levels. The approach was designed to replace a currently…
Descriptors: College Entrance Examinations, Scoring, Essay Tests, Writing Evaluation