Showing all 13 results
Peer reviewed
Direct link
Dhini, Bachriah Fatwa; Girsang, Abba Suganda; Sufandi, Unggul Utan; Kurniawati, Heny – Asian Association of Open Universities Journal, 2023
Purpose: The authors constructed an automatic essay scoring (AES) model for a discussion forum and compared its results with scores given by human evaluators. The research proposes scoring essays on two parameters, semantic and keyword similarity, using a pre-trained SentenceTransformers model that can construct the…
Descriptors: Computer Assisted Testing, Scoring, Writing Evaluation, Essays
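The two-parameter idea in this abstract lends itself to a short sketch. The following is a minimal illustration, not the authors' implementation: the model name, reference answer, and keyword set are invented for the example, and how the two scores are combined is left open.

```python
# Minimal sketch of scoring by semantic and keyword similarity, in the
# spirit of the abstract above. Model choice, texts, and keywords are
# illustrative assumptions, not taken from the paper.
import re
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

reference = "Photosynthesis converts light energy into chemical energy stored in glucose."
essay = "Plants use sunlight to make glucose, storing the light's energy chemically."

# Semantic similarity: cosine similarity between sentence embeddings.
ref_emb = model.encode(reference, convert_to_tensor=True)
essay_emb = model.encode(essay, convert_to_tensor=True)
semantic_score = util.cos_sim(ref_emb, essay_emb).item()

# Keyword similarity: fraction of expected keywords found in the essay.
keywords = {"photosynthesis", "light", "energy", "glucose"}
essay_words = set(re.findall(r"[a-z']+", essay.lower()))
keyword_score = len(keywords & essay_words) / len(keywords)

print(f"semantic={semantic_score:.2f}, keyword={keyword_score:.2f}")
```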
Peer reviewed
Direct link
On-Soon Lee – Journal of Pan-Pacific Association of Applied Linguistics, 2024
Despite the increasing interest in using AI tools as assistant agents in instructional settings, the effectiveness of ChatGPT, the generative pretrained AI, for evaluating the accuracy of second language (L2) writing has been largely unexplored in formative assessment. Therefore, the current study aims to examine how ChatGPT, as an evaluator,…
Descriptors: Foreign Countries, Undergraduate Students, English (Second Language), Second Language Learning
Peer reviewed
PDF on ERIC (full text)
Zhang, Haoran; Litman, Diane – Grantee Submission, 2020
While automated essay scoring (AES) can reliably grade essays at scale, automated writing evaluation (AWE) additionally provides formative feedback to guide essay revision. However, a neural AES typically does not provide useful feature representations for supporting AWE. This paper presents a method for linking AWE and neural AES, by extracting…
Descriptors: Computer Assisted Testing, Scoring, Essay Tests, Writing Evaluation
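The linking method itself is truncated above, but the general idea of exposing a neural scorer's internal features can be sketched generically. This is not the authors' architecture; the tiny model below is invented to show a scorer returning a reusable representation alongside the score.

```python
# Generic sketch (not the paper's method): a neural essay scorer that
# returns its intermediate representation together with the score, so
# a separate AWE component could build feedback on those features.
# Architecture and sizes are invented for illustration.
import torch
import torch.nn as nn

class TinyAES(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=64, hidden=32):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, emb_dim)  # mean-pools token embeddings
        self.encoder = nn.Linear(emb_dim, hidden)
        self.scorer = nn.Linear(hidden, 1)

    def forward(self, token_ids):
        rep = torch.tanh(self.encoder(self.emb(token_ids)))
        return self.scorer(rep), rep  # score plus extractable features

model = TinyAES()
tokens = torch.randint(0, 5000, (1, 20))  # one toy "essay" of 20 token ids
score, features = model(tokens)
print(score.shape, features.shape)  # features could drive formative feedback
```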
Peer reviewed
Direct link
Passonneau, Rebecca J.; Poddar, Ananya; Gite, Gaurav; Krivokapic, Alisa; Yang, Qian; Perin, Dolores – International Journal of Artificial Intelligence in Education, 2018
Development of reliable rubrics for educational intervention studies that address reading and writing skills is labor-intensive, and could benefit from an automated approach. We compare a main ideas rubric used in a successful writing intervention study to a highly reliable wise-crowd content assessment method developed to evaluate…
Descriptors: Computer Assisted Testing, Writing Evaluation, Content Analysis, Scoring Rubrics
Peer reviewed
Direct link
Dixon-Román, Ezekiel; Nichols, T. Philip; Nyame-Mensah, Ama – Learning, Media and Technology, 2020
In this article, we examine the sociopolitical implications of AI technologies as they are integrated into writing instruction and assessment. Drawing from new materialist and Black feminist thought, we consider how learning analytics platforms for writing are animated by and through entanglements of algorithmic reasoning, state standards and…
Descriptors: Racial Bias, Artificial Intelligence, Educational Technology, Writing Instruction
Peer reviewed
Direct link
Perin, Dolores; Lauterbach, Mark – International Journal of Artificial Intelligence in Education, 2018
The problem of poor writing skills at the postsecondary level is a large and troubling one. This study investigated the writing skills of low-skilled adults attending college developmental education courses by determining whether variables from an automated scoring system were predictive of human scores on writing quality rubrics. The human-scored…
Descriptors: College Students, Writing Evaluation, Writing Skills, Developmental Studies Programs
Peer reviewed
Direct link
Knight, Simon; Buckingham Shum, Simon; Ryan, Philippa; Sándor, Ágnes; Wang, Xiaolong – International Journal of Artificial Intelligence in Education, 2018
Research into the teaching and assessment of student writing shows that many students find academic writing a challenge to learn, with legal writing no exception. Improving the availability and quality of timely formative feedback is an important aim. However, the time-consuming nature of assessing writing makes it impractical for instructors to…
Descriptors: Writing Evaluation, Natural Language Processing, Legal Education (Professions), Undergraduate Students
Peer reviewed
Direct link
Razi, Salim – SAGE Open, 2015
Similarity reports of plagiarism detectors should be approached with caution, as they may not be sufficient to support allegations of plagiarism. This study developed a 50-item rubric to simplify and standardize the evaluation of academic papers. In the spring semester of the 2011-2012 academic year, 161 freshmen's papers at the English Language Teaching…
Descriptors: Foreign Countries, Scoring Rubrics, Writing Evaluation, Writing (Composition)
Peer reviewed
Direct link
Graham, Steve – Literacy Research and Instruction, 2014
In this response to Burdick et al. (2013), the author describes two possible and perhaps even common reactions to the article by Burdick et al. (2013). Advocates such as Way, Davis, and Strain-Seymour (2008) will likely applaud the development of the Writing Ability Developmental Scale and the possible widespread use of computer-based writing…
Descriptors: Writing Evaluation, Evaluation Methods, Evaluation Research, Alternative Assessment
Peer reviewed
PDF on ERIC (full text)
Ma, Hong; Slater, Tammy – CALICO Journal, 2016
This study utilized a theory proposed by Mohan, Slater, Luo, and Jaipal (2002) regarding the Developmental Path of Cause to investigate AWE score use in classroom contexts. This "path" has the potential to support validity arguments because it suggests how causal linguistic features can be organized in hierarchical order. Utilization of…
Descriptors: Scores, Automation, Writing Evaluation, Computer Assisted Testing
Peer reviewed
Direct link
Hoang, Giang Thi Linh; Kunnan, Antony John – Language Assessment Quarterly, 2016
Computer technology made its way into writing instruction and assessment with spelling and grammar checkers decades ago, but more recently it has done so with automated essay evaluation (AEE) and diagnostic feedback. And although many programs and tools have been developed in the last decade, not enough research has been conducted to support or…
Descriptors: Case Studies, Essays, Writing Evaluation, English (Second Language)
Peer reviewed
Direct link
Kelly, P. Adam – Journal of Educational Computing Research, 2005
Powers, Burstein, Chodorow, Fowles, and Kukich (2002) suggested that automated essay scoring (AES) may benefit from the use of "general" scoring models designed to score essays irrespective of the prompt for which an essay was written. They reasoned that such models may enhance score credibility by signifying that an AES system measures the same…
Descriptors: Essays, Models, Writing Evaluation, Validity
Ben-Simon, Anat; Bennett, Randy Elliott – Journal of Technology, Learning, and Assessment, 2007
This study evaluated a "substantively driven" method for scoring NAEP writing assessments automatically. The study used variations of an existing commercial program, e-rater®, to compare the performance of three approaches to automated essay scoring: a "brute-empirical" approach in which variables are selected and weighted solely according to…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
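The contrast between a brute-empirical and a substantively driven model can be made concrete with a toy example. Everything below is invented for illustration; these are not e-rater's actual features or weights.

```python
# Toy contrast between empirically fitted and expert-chosen weights for
# combining essay features into a score. Features, data, and weights
# are all assumptions made up for this sketch.
import numpy as np

# Feature columns: [grammar_errors, avg_sentence_length, vocab_diversity]
X = np.array([
    [2.0, 14.0, 0.62],
    [7.0,  9.0, 0.41],
    [1.0, 18.0, 0.75],
    [5.0, 11.0, 0.50],
])
human_scores = np.array([5.0, 2.0, 6.0, 3.0])
X1 = np.hstack([X, np.ones((len(X), 1))])  # add intercept column

# "Brute-empirical": weights fitted purely to best predict human scores.
empirical_w, *_ = np.linalg.lstsq(X1, human_scores, rcond=None)

# "Substantively driven": weights fixed in advance to reflect what
# experts consider important, applied without any fitting.
expert_w = np.array([-0.4, 0.1, 4.0, 1.0])

print("empirical predictions:", X1 @ empirical_w)
print("expert predictions:   ", X1 @ expert_w)
```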