Publication Date
In 2025 (1)
Since 2024 (4)
Since 2021, last 5 years (12)
Since 2016, last 10 years (18)
Since 2006, last 20 years (22)
Descriptor
Computational Linguistics (22)
Scoring (22)
Writing Evaluation (22)
Essays (18)
Accuracy (9)
Computer Assisted Testing (9)
Comparative Analysis (8)
Natural Language Processing (8)
English (Second Language) (7)
Second Language Learning (7)
Computer Software (6)
Author
Crossley, Scott A. (4)
McNamara, Danielle S. (4)
Allen, Laura K. (3)
Kyle, Kristopher (3)
Ballier, Nicolas (1)
Bouyé, Manon (1)
Bracey, Zoë Buck (1)
Carla Wood (1)
Christopher Schatschneider (1)
Combs, Trenton (1)
Dadi Ramesh (1)
Publication Type
Reports - Research (20)
Journal Articles (17)
Speeches/Meeting Papers (3)
Dissertations/Theses -… (1)
Information Analyses (1)
Tests/Questionnaires (1)
Location
Arizona (1)
Massachusetts (Boston) (1)
Taiwan (1)
Assessments and Surveys
Test of English as a Foreign… (2)
Gates MacGinitie Reading Tests (1)
Dadi Ramesh; Suresh Kumar Sanampudi – European Journal of Education, 2024
Automatic essay scoring (AES) is an essential educational application of natural language processing. Automating this process alleviates the grading burden while increasing the reliability and consistency of assessment. With advances in text-embedding libraries and neural network models, AES systems have achieved good accuracy.…
Descriptors: Scoring, Essays, Writing Evaluation, Memory
Wang, Jue; Engelhard, George; Combs, Trenton – Journal of Experimental Education, 2023
Unfolding models are frequently used to develop scales for measuring attitudes. Recently, unfolding models have been applied to examine rater severity and accuracy within the context of rater-mediated assessments. One of the problems in applying unfolding models to rater-mediated assessments is that the substantive interpretations of the latent…
Descriptors: Writing Evaluation, Scoring, Accuracy, Computational Linguistics
Wang, Heqiao; Troia, Gary A. – Written Communication, 2023
The primary purpose of this study is to investigate the degree to which register knowledge, register-specific motivation, and diverse linguistic features are predictive of human judgment of writing quality in three registers--narrative, informative, and opinion. The secondary purpose is to compare the evaluation metrics of register-partitioned…
Descriptors: Writing Evaluation, Essays, Elementary School Students, Grade 4
Yishen Song; Qianta Zhu; Huaibo Wang; Qinhua Zheng – IEEE Transactions on Learning Technologies, 2024
Manually scoring and revising student essays has long been a time-consuming task for educators. With the rise of natural language processing techniques, automated essay scoring (AES) and automated essay revising (AER) have emerged to alleviate this burden. However, current AES and AER models require large amounts of training data and lack…
Descriptors: Scoring, Essays, Writing Evaluation, Computer Software
Carla Wood; Miguel Garcia-Salas; Christopher Schatschneider – Grantee Submission, 2023
Purpose: The aim of this study was to advance the analysis of written language transcripts by validating an automated scoring procedure using an automated open-access tool for calculating morphological complexity (MC) from written transcripts. Method: The MC of words in 146 written responses of students in fifth grade was assessed using two…
Descriptors: Automation, Computer Assisted Testing, Scoring, Computation
Taichi Yamashita – Language Testing, 2025
With the rapid development of generative artificial intelligence (AI) frameworks (e.g., the generative pre-trained transformer [GPT]), a growing number of researchers have started to explore its potential as an automated essay scoring (AES) system. While previous studies have investigated the alignment between human ratings and GPT ratings, few…
Descriptors: Artificial Intelligence, English (Second Language), Second Language Learning, Second Language Instruction
Jiyeo Yun – English Teaching, 2023
Studies on automatic scoring systems in writing assessment have evaluated the relationship between human and machine scores to establish the reliability of automated essay scoring systems. This study investigated the magnitudes of inter-rater agreement and discrepancy indices, especially between human and machine scoring, in writing assessment.…
Descriptors: Meta Analysis, Interrater Reliability, Essays, Scoring
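The agreement index most commonly reported for human versus machine essay scores is quadratic weighted kappa (QWK). A minimal pure-Python sketch, assuming an integer score scale (0-4 here is illustrative, not taken from the study above):

```python
# Quadratic weighted kappa (QWK) between two raters on an integer scale.
# Weights grow quadratically with the size of the disagreement, so a
# one-point gap is penalized far less than a three-point gap.

def quadratic_weighted_kappa(a, b, min_score, max_score):
    n = max_score - min_score + 1
    # Observed agreement matrix: O[i][j] counts essays rater A scored i
    # and rater B scored j (shifted by min_score).
    O = [[0] * n for _ in range(n)]
    for x, y in zip(a, b):
        O[x - min_score][y - min_score] += 1
    # Marginal histograms; the expected matrix is their outer product,
    # normalized by the number of essays.
    hist_a = [sum(row) for row in O]
    hist_b = [sum(O[i][j] for i in range(n)) for j in range(n)]
    total = len(a)
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            w = (i - j) ** 2 / (n - 1) ** 2  # quadratic disagreement weight
            expected = hist_a[i] * hist_b[j] / total
            num += w * O[i][j]
            den += w * expected
    return 1.0 - num / den

human   = [0, 1, 2, 3, 4, 2]
machine = [0, 1, 2, 3, 4, 2]
print(quadratic_weighted_kappa(human, machine, 0, 4))  # perfect agreement -> 1.0
```

QWK ranges from 1 (perfect agreement) through 0 (chance-level) to negative values for systematic disagreement; scikit-learn's `cohen_kappa_score` with `weights="quadratic"` computes the same quantity.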
Doewes, Afrizal; Saxena, Akrati; Pei, Yulong; Pechenizkiy, Mykola – International Educational Data Mining Society, 2022
In Automated Essay Scoring (AES) systems, many previous works have studied group fairness using the demographic features of essay writers. However, individual fairness also plays an important role in fair evaluation and has not been yet explored. Initialized by Dwork et al., the fundamental concept of individual fairness is "similar people…
Descriptors: Scoring, Essays, Writing Evaluation, Comparative Analysis
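The individual-fairness notion of Dwork et al. cited above requires that similar individuals receive similar outcomes. A minimal sketch of such a check, where the distance function (difference in essay length) and the Lipschitz bound `L` are illustrative assumptions, not the paper's actual setup:

```python
# Individual-fairness audit in the Dwork et al. sense: flag essay pairs
# whose score gap exceeds L times their pairwise distance.

def violates_individual_fairness(essays, scores, distance, L=1.0):
    """Return index pairs (i, j) where |score_i - score_j| > L * d(i, j)."""
    violations = []
    for i in range(len(essays)):
        for j in range(i + 1, len(essays)):
            if abs(scores[i] - scores[j]) > L * distance(essays[i], essays[j]):
                violations.append((i, j))
    return violations

# Toy usage: distance = difference in essay length (a crude proxy metric).
essays = ["short essay", "short essay!", "a much, much longer essay..."]
scores = [3, 0, 3]
length_gap = lambda a, b: abs(len(a) - len(b))
print(violates_individual_fairness(essays, scores, length_gap, L=1.0))
# -> [(0, 1)]: two near-identical essays with a three-point score gap
```

In practice the difficulty, as the entry notes, lies in choosing a task-appropriate similarity metric for essays; length is used here only to keep the sketch self-contained.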
Latifi, Syed; Gierl, Mark – Language Testing, 2021
An automated essay scoring (AES) program is a software system that uses techniques from corpus and computational linguistics and machine learning to grade essays. In this study, we aimed to describe and evaluate particular language features of Coh-Metrix for a novel AES program that would score junior and senior high school students' essays from…
Descriptors: Writing Evaluation, Computer Assisted Testing, Scoring, Essays
Yi Gui – ProQuest LLC, 2024
This study explores using transfer learning in machine learning for natural language processing (NLP) to create generic automated essay scoring (AES) models, providing instant online scoring for statewide writing assessments in K-12 education. The goal is to develop an instant online scorer that is generalizable to any prompt, addressing the…
Descriptors: Writing Tests, Natural Language Processing, Writing Evaluation, Scoring
Messina, Cara Marta; Jones, Cherice Escobar; Poe, Mya – Written Communication, 2023
We report on a college-level study of student reflection and instructor prompts using scoring and corpus analysis methods. We collected 340 student reflections and 24 faculty prompts. Reflections were scored using trait and holistic scoring and then reflections and faculty prompts were analyzed using Natural Language Processing to identify…
Descriptors: Reflection, Writing Instruction, Computational Linguistics, Cues
Gaillat, Thomas; Simpkin, Andrew; Ballier, Nicolas; Stearns, Bernardo; Sousa, Annanda; Bouyé, Manon; Zarrouk, Manel – ReCALL, 2021
This paper focuses on automatically assessing language proficiency levels according to linguistic complexity in learner English. We implement a supervised learning approach as part of an automatic essay scoring system. The objective is to uncover Common European Framework of Reference for Languages (CEFR) criterial features in writings by learners…
Descriptors: Prediction, Rating Scales, English (Second Language), Second Language Learning
Monteiro, Kátia R.; Crossley, Scott A.; Kyle, Kristopher – Applied Linguistics, 2020
Lexical items that are encountered more frequently and in varying contexts have important effects on second language (L2) development because frequent and contextually diverse words are learned faster and become more entrenched in a learner's lexicon (Ellis 2002a, b). Despite evidence that L2 learners are generally exposed to non-native input,…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Benchmarking
Vajjala, Sowmya – International Journal of Artificial Intelligence in Education, 2018
Automatic essay scoring (AES) refers to the process of scoring free text responses to given prompts, considering human grader scores as the gold standard. Writing such essays is an essential component of many language and aptitude exams. Hence, AES became an active and established area of research, and there are many proprietary systems used in…
Descriptors: Computer Software, Essays, Writing Evaluation, Scoring
Allen, Laura K.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2017
The current study examined the degree to which the quality and characteristics of students' essays could be modeled through dynamic natural language processing analyses. Undergraduate students (n = 131) wrote timed, persuasive essays in response to an argumentative writing prompt. Recurrent patterns of the words in the essays were then analyzed…
Descriptors: Writing Evaluation, Essays, Persuasive Discourse, Natural Language Processing