Hosnia M. M. Ahmed; Shaymaa E. Sorour – Education and Information Technologies, 2024
Evaluating the quality of university exam papers is crucial for universities seeking institutional and program accreditation. Currently, exam papers are assessed manually, a process that can be tedious, lengthy, and in some cases, inconsistent. This is often due to the focus on assessing only the formal specifications of exam papers. This study…
Descriptors: Higher Education, Artificial Intelligence, Writing Evaluation, Natural Language Processing
Huawei, Shi; Aryadoust, Vahid – Education and Information Technologies, 2023
Automated writing evaluation (AWE) systems are developed based on interdisciplinary research and technological advances such as natural language processing, computer science, and latent semantic analysis. Despite a steady increase in research publications in this area, the results of AWE investigations are often mixed, and their validity may be…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Automation
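As a concrete illustration of the latent semantic analysis such systems build on, here is a minimal sketch using scikit-learn; the corpus and dimensionality are toy assumptions, not any particular AWE system:

```python
# Minimal latent semantic analysis (LSA) sketch: TF-IDF term weighting
# followed by truncated SVD projects essays into a low-dimensional
# semantic space where similarity to reference texts can be scored.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

essays = [
    "The essay argues that renewable energy reduces long-term costs.",
    "Solar and wind power lower energy costs over time.",
    "The novel's narrator is unreliable, which reshapes the plot.",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(essays)      # sparse term-document matrix

lsa = TruncatedSVD(n_components=2, random_state=0)
Z = lsa.fit_transform(X)             # dense semantic vectors per essay

# Semantic similarity between the first essay and the other two.
print(cosine_similarity(Z[0:1], Z[1:]))
```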
Anson, Chris M. – Composition Studies, 2022
Student plagiarism has challenged educators for decades, with heightened paranoia following the advent of the Internet in the 1980s and ready access to easily copied text. But plagiarism will look like child's play next to new developments in AI-based natural-language processing (NLP) systems that increasingly appear to "write" as…
Descriptors: Plagiarism, Artificial Intelligence, Natural Language Processing, Writing Assignments
Paul Deane; Duanli Yan; Katherine Castellano; Yigal Attali; Michelle Lamar; Mo Zhang; Ian Blood; James V. Bruno; Chen Li; Wenju Cui; Chunyi Ruan; Colleen Appel; Kofi James; Rodolfo Long; Farah Qureshi – ETS Research Report Series, 2024
This paper presents a multidimensional model of variation in writing quality, register, and genre in student essays, trained and tested via confirmatory factor analysis of 1.37 million essay submissions to ETS' digital writing service, Criterion®. The model was also validated with several other corpora, which indicated that it provides a…
Descriptors: Writing (Composition), Essays, Models, Elementary School Students
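The confirmatory-factor-analysis workflow behind such a model can be sketched with the semopy package; the factor structure, feature names, and synthetic data below are illustrative assumptions, not the Criterion® model itself:

```python
# Hedged CFA sketch on synthetic data: two latent factors (writing
# quality and register) each measured by a few observed essay features.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
n = 500
quality = rng.normal(size=n)    # latent writing-quality factor
register = rng.normal(size=n)   # latent register/genre factor

df = pd.DataFrame({
    "grammar":      quality + rng.normal(scale=0.5, size=n),
    "vocabulary":   quality + rng.normal(scale=0.5, size=n),
    "organization": quality + rng.normal(scale=0.5, size=n),
    "formality":    register + rng.normal(scale=0.5, size=n),
    "narrativity":  register + rng.normal(scale=0.5, size=n),
})

desc = """
quality  =~ grammar + vocabulary + organization
register =~ formality + narrativity
"""
model = semopy.Model(desc)
model.fit(df)
print(model.inspect())   # loadings, factor covariance, standard errors
```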
David W. Brown; Dean Jensen – International Society for Technology, Education, and Science, 2023
The growth of Artificial Intelligence (AI) chatbots has created a great deal of discussion in the education community. While many have gravitated towards the ability of these bots to make learning more interactive, others have grave concerns that student-created essays, long used as a means of assessing the subject comprehension of students, may…
Descriptors: Artificial Intelligence, Natural Language Processing, Computer Software, Writing (Composition)
McCaffrey, Daniel F.; Zhang, Mo; Burstein, Jill – Grantee Submission, 2022
Background: This exploratory writing analytics study uses argumentative writing samples from two performance contexts--standardized writing assessments and university English course writing assignments--to compare: (1) linguistic features in argumentative writing; and (2) relationships between linguistic characteristics and academic performance…
Descriptors: Persuasive Discourse, Academic Language, Writing (Composition), Academic Achievement
Wan, Qian; Crossley, Scott; Allen, Laura; McNamara, Danielle – Grantee Submission, 2020
In this paper, we extracted content-based and structure-based features of text to predict human annotations for claims and nonclaims in argumentative essays. We compared Logistic Regression, Bernoulli Naive Bayes, Gaussian Naive Bayes, Linear Support Vector Classification, Random Forest, and Neural Networks to train classification models. Random…
Descriptors: Persuasive Discourse, Essays, Writing Evaluation, Natural Language Processing
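A minimal sketch of that classifier comparison, assuming scikit-learn and placeholder features standing in for the paper's content- and structure-based ones:

```python
# Compare the six classifier families the abstract names, via
# cross-validated accuracy on synthetic claim/non-claim features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB, GaussianNB
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))            # placeholder sentence features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # 1 = claim, 0 = non-claim

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "BernoulliNB": BernoulliNB(),
    "GaussianNB": GaussianNB(),
    "LinearSVC": LinearSVC(),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "NeuralNet": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                               random_state=0),
}
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```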
Crossley, Scott A.; Kim, Minkyung; Allen, Laura K.; McNamara, Danielle S. – Grantee Submission, 2019
Summarization is an effective strategy to promote and enhance learning and deep comprehension of texts. However, summarization is seldom implemented by teachers in classrooms because the manual evaluation of students' summaries requires time and effort. This problem has led to the development of automated models of summarization quality. However,…
Descriptors: Automation, Writing Evaluation, Natural Language Processing, Artificial Intelligence
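One building block of such automated models, lexical overlap between summary and source scored as TF-IDF cosine similarity, can be sketched as follows; the texts and the single-feature design are illustrative assumptions, since real models combine many NLP features:

```python
# Score a student summary against its source text by cosine similarity
# of TF-IDF vectors, a crude automated summary-quality feature.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

source = ("Photosynthesis converts light energy into chemical energy, "
          "producing glucose and releasing oxygen as a byproduct.")
summary = "Plants use light to make glucose and give off oxygen."

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform([source, summary])
sim = cosine_similarity(X[0], X[1])[0, 0]
print(f"summary-source similarity: {sim:.3f}")
```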
Hong Jiao, Editor; Robert W. Lissitz, Editor – IAP - Information Age Publishing, Inc., 2024
With the exponential increase of digital assessment, different types of data in addition to item responses become available in the measurement process. One of the salient features in digital assessment is that process data can be easily collected. This non-conventional structured or unstructured data source may bring new perspectives to better…
Descriptors: Artificial Intelligence, Natural Language Processing, Psychometrics, Computer Assisted Testing
Chen, Dandan; Hebert, Michael; Wilson, Joshua – American Educational Research Journal, 2022
We used multivariate generalizability theory to examine the reliability of hand-scoring and automated essay scoring (AES) and to identify how these scoring methods could be used in conjunction to optimize writing assessment. Students (n = 113) included subsamples of struggling writers and non-struggling writers in Grades 3-5 drawn from a larger…
Descriptors: Reliability, Scoring, Essays, Automation
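For readers unfamiliar with generalizability theory, here is a hedged sketch of a univariate persons × raters G-study, a simpler cousin of the paper's multivariate analysis; the score matrix is invented:

```python
# Estimate variance components for a fully crossed persons x raters
# design from ANOVA expected mean squares, then compute the relative
# G (generalizability) coefficient.
import numpy as np

scores = np.array([        # rows = students, cols = raters
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
    [1, 2, 2],
], dtype=float)
n_p, n_r = scores.shape

grand = scores.mean()
ms_p = n_r * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
resid = (scores - scores.mean(axis=1, keepdims=True)
               - scores.mean(axis=0, keepdims=True) + grand)
ms_pr = (resid ** 2).sum() / ((n_p - 1) * (n_r - 1))

var_pr = ms_pr                          # person x rater interaction + error
var_p = max((ms_p - ms_pr) / n_r, 0.0)  # true-score (person) variance
g_coef = var_p / (var_p + var_pr / n_r)
print(f"relative G coefficient with {n_r} raters: {g_coef:.3f}")
```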
Lynette Hazelton; Jessica Nastal; Norbert Elliot; Jill Burstein; Daniel F. McCaffrey – Journal of Response to Writing, 2021
In writing studies research, automated writing evaluation technology is typically examined for a specific, often narrow purpose: to evaluate a particular writing improvement measure, to mine data for changes in writing performance, or to demonstrate the effectiveness of a single technology and accompanying validity arguments. This article adopts a…
Descriptors: Formative Evaluation, Writing Evaluation, Automation, Natural Language Processing
Mozer, Reagan; Miratrix, Luke; Relyea, Jackie Eunjung; Kim, James S. – Annenberg Institute for School Reform at Brown University, 2021
In a randomized trial that collects text as an outcome, traditional approaches for assessing treatment impact require that each document first be manually coded for constructs of interest by human raters. An impact analysis can then be conducted to compare treatment and control groups, using the hand-coded scores as a measured outcome. This…
Descriptors: Scoring, Automation, Data Analysis, Natural Language Processing
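The design the abstract describes can be sketched as follows, assuming a TF-IDF plus ridge text scorer and synthetic documents; none of this reproduces the paper's actual estimator:

```python
# Hand-code a subset of documents, train a text model on those codes,
# then estimate the treatment-control difference on model-scored
# outcomes for the full sample.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

docs = [
    "The character felt happy because the plan worked.",     # treated
    "A detailed argument with clear evidence and reasons.",  # treated
    "Short answer.",                                         # control
    "Some words about the story without much structure.",    # control
] * 25
treated = np.array([1, 1, 0, 0] * 25, dtype=bool)

# Suppose only the first 40 documents were hand-coded (0-5 quality scale).
hand_scores = np.array([4.0, 5.0, 1.0, 2.0] * 10)

scorer = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
scorer.fit(docs[:40], hand_scores)

pred = scorer.predict(docs)     # machine scores for all documents
effect = pred[treated].mean() - pred[~treated].mean()
print(f"estimated treatment effect on predicted scores: {effect:.2f}")
```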
Zhang, Mo; Sinharay, Sandip – International Journal of Testing, 2022
This article demonstrates how recent advances in technology allow fine-grained analyses of candidate-produced essays, thus providing deeper insight into writing performance. We examined how essay features, automatically extracted using natural language processing and keystroke logging techniques, can predict various performance measures using data…
Descriptors: At Risk Persons, Writing Achievement, Educational Technology, Writing Improvement
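A sketch in the spirit of that analysis, with invented NLP and keystroke-log features predicting a synthetic performance measure:

```python
# Predict a writing-performance measure from automatically extracted
# essay features plus keystroke features; report cross-validated R^2.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
features = np.column_stack([
    rng.normal(500, 120, n),   # essay length in words (NLP-extracted)
    rng.normal(0.6, 0.1, n),   # lexical diversity
    rng.normal(3.0, 0.8, n),   # mean pause between bursts, seconds (keystrokes)
    rng.normal(12, 4, n),      # deletions per 100 keystrokes
])
score = (0.004 * features[:, 0] + 2.0 * features[:, 1]
         - 0.3 * features[:, 2] - 0.05 * features[:, 3]
         + rng.normal(scale=0.5, size=n))

r2 = cross_val_score(LinearRegression(), features, score,
                     cv=5, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.3f}")
```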
Miranty, Delsa; Widiati, Utami – Pegem Journal of Education and Instruction, 2021
Automated Writing Evaluation (AWE) has been considered a potential pedagogical technique that exploits technology to assist students' writing. However, little attention has been devoted to examining students' perceptions of Grammarly use in a higher education context. This paper aims to obtain information regarding the writing process and the…
Descriptors: Foreign Countries, Technology Uses in Education, Writing (Composition), Student Attitudes