Publication Date
In 2025 | 0 |
Since 2024 | 2 |
Since 2021 (last 5 years) | 4 |
Since 2016 (last 10 years) | 5 |
Since 2006 (last 20 years) | 16 |
Descriptor
Computer Software | 18 |
Evaluation Methods | 18 |
Essays | 13 |
Writing Evaluation | 12 |
Computer Assisted Testing | 10 |
Foreign Countries | 9 |
Scoring | 7 |
Comparative Analysis | 6 |
Educational Technology | 6 |
English (Second Language) | 6 |
Essay Tests | 6 |
Author
Clariana, Roy B. | 2 |
Bilbro, J. | 1 |
Bridgeman, Brent | 1 |
Burk, John | 1 |
Clark, D. E. | 1 |
Coniam, David | 1 |
Davey, Tim | 1 |
Franklin, Scott V. | 1 |
Godshalk, Veronica M. | 1 |
Hafiz Tayyab Rauf | 1 |
Hermsen, Lisa M. | 1 |
Publication Type
Journal Articles | 17 |
Reports - Research | 8 |
Reports - Evaluative | 7 |
Reports - Descriptive | 2 |
Collected Works - Proceedings | 1 |
Tests/Questionnaires | 1 |
Location
Egypt | 1 |
Finland | 1 |
France | 1 |
Germany | 1 |
Hong Kong | 1 |
Qatar | 1 |
Taiwan | 1 |
Texas | 1 |
Turkey | 1 |
United Kingdom | 1 |
United Kingdom (Scotland) | 1 |
Assessments and Surveys
Test of English as a Foreign Language | 2 |
ACT Assessment | 1 |
National Assessment of Educational Progress | 1 |
Program for International Student Assessment | 1 |
SAT (College Admission Test) | 1 |
Tahereh Firoozi; Okan Bulut; Mark J. Gierl – International Journal of Assessment Tools in Education, 2023
The proliferation of large language models represents a paradigm shift in the landscape of automated essay scoring (AES) systems, fundamentally elevating their accuracy and efficacy. This study presents an extensive examination of large language models, with a particular emphasis on the transformative influence of transformer-based models, such as…
Descriptors: Turkish, Writing Evaluation, Essays, Accuracy
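For readers unfamiliar with how transformer models are adapted to essay scoring, the sketch below shows the general pattern: a pretrained encoder with a single-output regression head. The checkpoint name and the untrained head are placeholder assumptions for illustration only; this is not the system examined by Firoozi, Bulut, and Gierl.

```python
# Minimal sketch of a transformer-based essay scorer, assuming the
# Hugging Face "transformers" and "torch" packages. The checkpoint name
# is a placeholder and the regression head is untrained here; a real AES
# system would fine-tune it on human-scored essays.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # hypothetical encoder choice

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=1  # single output, treated as a continuous score
)
model.eval()

def score_essay(text: str) -> float:
    """Return a raw (unscaled, untrained) score for one essay."""
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, 1)
    return logits.item()

print(score_essay("An example essay to be scored."))
```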
Reagan Mozer; Luke Miratrix; Jackie Eunjung Relyea; James S. Kim – Journal of Educational and Behavioral Statistics, 2024
In a randomized trial that collects text as an outcome, traditional approaches for assessing treatment impact require that each document first be manually coded for constructs of interest by human raters. An impact analysis can then be conducted to compare treatment and control groups, using the hand-coded scores as a measured outcome. This…
Descriptors: Scoring, Evaluation Methods, Writing Evaluation, Comparative Analysis
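As a point of reference, the "traditional approach" this abstract describes reduces to a standard two-group comparison on the hand-coded scores. The toy sketch below, assuming SciPy and using invented numbers, illustrates that baseline only; it is not the method the article itself proposes.

```python
# Toy illustration of a traditional impact analysis: hand-coded document
# scores compared across treatment and control with a two-sample t-test.
# Assumes SciPy; all numbers are invented.
import numpy as np
from scipy.stats import ttest_ind

treatment_scores = np.array([3.2, 4.1, 3.8, 4.5, 3.9, 4.2])  # hand-coded scores (toy)
control_scores = np.array([3.0, 3.4, 3.1, 3.7, 3.3, 3.5])

effect = treatment_scores.mean() - control_scores.mean()
t_stat, p_value = ttest_ind(treatment_scores, control_scores)
print(f"estimated impact = {effect:.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```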
Husam M. Alawadh; Talha Meraj; Lama Aldosari; Hafiz Tayyab Rauf – SAGE Open, 2024
E-learning systems are transforming the educational sector and making education more affordable and accessible. Recently, many e-learning systems have been equipped with advanced technologies that facilitate the roles of educators and increase the efficiency of teaching and learning. One such technology is Automatic Essay Grading (AEG) or…
Descriptors: Essays, Writing Evaluation, Computer Software, Technology Uses in Education
Waer, Hanan – Innovation in Language Learning and Teaching, 2023
Recent years have witnessed an increased interest in automated writing evaluation (hereafter AWE). However, few studies have examined the use of AWE with apprehensive writers. Hence, this study extends research in this area, investigating the effect of using AWE on reducing writing apprehension and enhancing grammatical knowledge. The participants…
Descriptors: Writing Evaluation, Writing Apprehension, English (Second Language), Second Language Learning
Seifried, Eva; Lenhard, Wolfgang; Spinath, Birgit – Journal of Educational Computing Research, 2017
Writing essays and receiving feedback can be useful for fostering students' learning and motivation. When faced with large class sizes, it is desirable to identify students who might particularly benefit from feedback. In this article, we tested the potential of Latent Semantic Analysis (LSA) for identifying poor essays. A total of 14 teaching…
Descriptors: Computer Assisted Testing, Computer Software, Essays, Writing Evaluation
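The general idea behind using Latent Semantic Analysis to flag weak essays can be sketched with scikit-learn: project essays into a low-dimensional semantic space and flag those that sit far from strong reference texts. The texts, dimensionality, and threshold below are invented for illustration and do not reproduce the study's materials or models.

```python
# Sketch of flagging potentially weak essays with Latent Semantic Analysis,
# assuming scikit-learn. Texts, dimensionality, and the 0.5 threshold are
# invented for illustration, not the study's materials or cut-offs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

reference_essays = [
    "A well developed answer that explains the concept with examples.",
    "Another strong answer covering the concept and its implications.",
]
student_essays = [
    "A short response that drifts off topic.",
    "A thorough response that explains the concept with clear examples.",
]

# Build one LSA space from reference and student texts together.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(reference_essays + student_essays)
lsa = TruncatedSVD(n_components=2, random_state=0)  # tiny toy dimensionality
X_lsa = lsa.fit_transform(X)

ref_vecs = X_lsa[: len(reference_essays)]
stu_vecs = X_lsa[len(reference_essays):]

# Flag essays whose best similarity to any reference essay is low.
for essay, sims in zip(student_essays, cosine_similarity(stu_vecs, ref_vecs)):
    flagged = sims.max() < 0.5  # arbitrary illustrative threshold
    print(f"flagged={flagged}  max_similarity={sims.max():.2f}  {essay!r}")
```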
Franklin, Scott V.; Hermsen, Lisa M. – Physical Review Special Topics - Physics Education Research, 2014
We present a new approach to investigating student reasoning while writing: real-time capture of the dynamics of the writing process. Key-capture or video software is used to record the entire writing episode, including all pauses, deletions, insertions, and revisions. A succinct shorthand, "S notation," is used to highlight significant…
Descriptors: Writing Across the Curriculum, Writing Processes, Abstract Reasoning, Writing Evaluation
Bilbro, J.; Iluzada, C.; Clark, D. E. – Journal on Excellence in College Teaching, 2013
The authors compared student perceptions of audio and written feedback in order to assess what types of students may benefit from receiving audio feedback on their essays rather than written feedback. Many instructors previously have reported the advantages they see in audio feedback, but little quantitative research has been done on how the…
Descriptors: Higher Education, Essays, Writing (Composition), Writing Evaluation
Clariana, Roy B.; Wallace, Patricia E.; Godshalk, Veronica M. – Educational Technology Research and Development, 2009
Essays are an important measure of complex learning, but pronouns can confound an author's intended meaning for both readers and text analysis software. This descriptive investigation considers the effect of pronouns on a computer-based text analysis approach, "ALA-Reader," which uses students' essays as the data source for deriving individual and…
Descriptors: Sentences, Cognitive Structures, Essays, Content Analysis
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Scoring models for the "e-rater"® system were built and evaluated for the "TOEFL"® exam's independent and integrated writing prompts. Prompt-specific and generic scoring models were built, and evaluation statistics, such as weighted kappas, Pearson correlations, standardized differences in mean scores, and correlations with…
Descriptors: Scoring, Prompting, Evaluators, Computer Software
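The evaluation statistics named in this abstract are standard agreement measures. A toy computation, assuming scikit-learn and SciPy and using invented scores rather than TOEFL data, might look like this:

```python
# Toy computation of quadratic weighted kappa, Pearson correlation, and a
# standardized mean difference between human and automated scores.
# Assumes scikit-learn and SciPy; the scores are invented.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

human = np.array([3, 4, 2, 5, 3, 4, 1, 4])    # human rater scores (toy)
machine = np.array([3, 4, 3, 5, 2, 4, 2, 4])  # automated scores (toy)

qwk = cohen_kappa_score(human, machine, weights="quadratic")
r, _ = pearsonr(human, machine)
std_diff = (machine.mean() - human.mean()) / human.std(ddof=1)  # one simple variant

print(f"quadratic weighted kappa = {qwk:.3f}")
print(f"Pearson correlation      = {r:.3f}")
print(f"standardized difference  = {std_diff:.3f}")
```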
Mogey, Nora; Paterson, Jessie; Burk, John; Purcell, Michael – ALT-J: Research in Learning Technology, 2010
Students at the University of Edinburgh do almost all their work on computers, but at the end of the semester they are examined by handwritten essays. Intuitively it would be appealing to allow students the choice of handwriting or typing, but this raises a concern that perhaps this might not be "fair"--that the choice a student makes,…
Descriptors: Handwriting, Essay Tests, Interrater Reliability, Grading
Coniam, David – ReCALL, 2009
This paper describes a study of the computer essay-scoring program BETSY. While the use of computers in rating written scripts has been criticised in some quarters for lacking transparency or lack of fit with how human raters rate written scripts, a number of essay rating programs are available commercially, many of which claim to offer comparable…
Descriptors: Writing Tests, Scoring, Foreign Countries, Interrater Reliability
Lai, Yi-hsiu – British Journal of Educational Technology, 2010
The purpose of this study was to investigate problems and potentials of new technologies in English writing education. The effectiveness of automated writing evaluation (AWE) ("MY Access") and of peer evaluation (PE) was compared. Twenty-two English as a foreign language (EFL) learners in Taiwan participated in this study. They submitted…
Descriptors: Feedback (Response), Writing Evaluation, Peer Evaluation, Grading
McPherson, Douglas – Interactive Technology and Smart Education, 2009
Purpose: The purpose of this paper is to describe how and why Texas A&M University at Qatar (TAMUQ) has developed a system aiming to effectively place students in freshman and developmental English programs. The placement system includes: triangulating data from external test scores, with scores from a panel-marked hand-written essay (HWE),…
Descriptors: Student Placement, Educational Testing, English (Second Language), Second Language Instruction
Johnson, Martin; Nadas, Rita – Learning, Media and Technology, 2009
Within large scale educational assessment agencies in the UK, there has been a shift towards assessors marking digitally scanned copies rather than the original paper scripts that were traditionally used. This project uses extended essay examination scripts to consider whether the mode in which an essay is read potentially influences the…
Descriptors: Reading Comprehension, Educational Assessment, Internet, Essay Tests

Page, Ellis Batten – Journal of Experimental Education, 1994
National Assessment of Educational Progress writing sample essays from 1988 and 1990 (495 and 599 essays) were subjected to computerized grading and human ratings. Cross-validation suggests that computer scoring is superior to a two-judge panel, a finding encouraging for large programs of essay evaluation. (SLD)
Descriptors: Computer Assisted Testing, Computer Software, Essays, Evaluation Methods
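The comparison reported here, machine scores versus a two-judge panel, each judged against a broader human criterion, can be illustrated with a small simulation. The noise levels below are arbitrary choices made so the example runs; they are not NAEP data and do not by themselves demonstrate the paper's finding.

```python
# Toy simulation: correlate a two-judge panel average and computer-assigned
# scores with a pooled multi-judge criterion. All values are simulated.
import numpy as np

rng = np.random.default_rng(0)
criterion = rng.normal(3.0, 1.0, size=50)               # pooled multi-judge criterion (toy)
two_judge = criterion + rng.normal(0.0, 0.8, size=50)   # two-judge panel average (toy)
machine = criterion + rng.normal(0.0, 0.5, size=50)     # computer-assigned scores (toy)

r_panel = np.corrcoef(criterion, two_judge)[0, 1]
r_machine = np.corrcoef(criterion, machine)[0, 1]
print(f"two-judge panel vs. criterion: r = {r_panel:.2f}")
print(f"computer scores vs. criterion: r = {r_machine:.2f}")
```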