Publication Date
    In 2025: 1
    Since 2024: 4
    Since 2021 (last 5 years): 6
    Since 2016 (last 10 years): 9
Author
    Alessandra Zappoli: 1
    Alessio Palmero Aprosio: 1
    Alexander F. Tang: 1
    Allen, Laura K.: 1
    Aryadoust, Vahid: 1
    Buckingham Shum, Simon: 1
    Dan Song: 1
    Goh, Tiong-Thye: 1
    Han, Turgay: 1
    Huawei, Shi: 1
    Knight, Simon: 1
    (more authors not shown)
Publication Type
    Journal Articles: 7
    Reports - Research: 7
    Information Analyses: 2
    Dissertations/Theses -…: 1
    Speeches/Meeting Papers: 1
    Tests/Questionnaires: 1
Assessments and Surveys
    Dale Chall Readability Formula: 1
    Flesch Kincaid Grade Level…: 1
    Flesch Reading Ease Formula: 1
    International English…: 1
    SAT (College Admission Test): 1
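The "Assessments and Surveys" facet above names several classic readability measures. As a quick reference, the sketch below computes the standard Flesch Reading Ease and Flesch-Kincaid Grade Level scores; the constants are the published ones, but the counts are toy values, and real use would need a proper tokenizer and syllable counter.

    # Standard Flesch formulas; inputs are raw counts over a text sample.
    def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
        # Higher scores mean easier text (published constants).
        return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

    def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
        # Approximate U.S. school grade level.
        return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

    # Toy essay statistics (placeholders, not taken from any cited study).
    w, s, syl = 120, 8, 180
    print(round(flesch_reading_ease(w, s, syl), 1))   # 64.7
    print(round(flesch_kincaid_grade(w, s, syl), 1))  # 8.0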
Huawei, Shi; Aryadoust, Vahid – Education and Information Technologies, 2023
Automated writing evaluation (AWE) systems are developed based on interdisciplinary research and technological advances such as natural language processing, computer science, and latent semantic analysis. Despite a steady increase in research publications in this area, the results of AWE investigations are often mixed, and their validity may be…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Automation
Dan Song; Alexander F. Tang – Language Learning & Technology, 2025
While many studies have addressed the benefits of technology-assisted L2 writing, limited research has delved into how generative artificial intelligence (GAI) supports students in completing their writing tasks in Mandarin Chinese. In this study, 26 university-level Mandarin Chinese foreign language students completed two writing tasks on two…
Descriptors: Artificial Intelligence, Second Language Learning, Standardized Tests, Writing Tests
Osama Koraishi – Language Teaching Research Quarterly, 2024
This study conducts a comprehensive quantitative evaluation of OpenAI's language model, ChatGPT 4, for grading Task 2 writing of the IELTS exam. The objective is to assess the alignment between ChatGPT's grading and that of official human raters. The analysis encompassed a multifaceted approach, including a comparison of means and reliability…
Descriptors: Second Language Learning, English (Second Language), Language Tests, Artificial Intelligence
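The abstract above mentions comparing means and reliability between ChatGPT's grades and those of official raters. The sketch below shows one common way to quantify that alignment (Pearson's r and quadratic weighted kappa); the paired band scores are invented, and these statistics are illustrative choices rather than necessarily the ones the study reports.

    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical paired IELTS Task 2 band scores (human rater vs. model).
    human = np.array([6.0, 6.5, 7.0, 5.5, 8.0, 6.5, 7.5, 6.0])
    model = np.array([6.5, 6.5, 7.0, 6.0, 7.5, 7.0, 7.5, 6.0])

    r, p = pearsonr(human, model)  # linear association between the two raters
    # Weighted kappa expects ordinal categories; map half-bands to integers.
    qwk = cohen_kappa_score((human * 2).astype(int), (model * 2).astype(int),
                            weights="quadratic")
    print(f"mean human={human.mean():.2f}  mean model={model.mean():.2f}")
    print(f"Pearson r={r:.2f} (p={p:.3f})  QWK={qwk:.2f}")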
Alessandra Zappoli; Alessio Palmero Aprosio; Sara Tonelli – Written Communication, 2024
In this work, we explore the use of digital technologies and statistical analysis to monitor how Italian secondary school students' writing changes over time and how comparisons can be made across different high school types. We analyzed more than 2,000 exam essays written by Italian high school students over 13 years and in five different school…
Descriptors: Essays, Writing (Composition), Foreign Countries, High School Students
Yi Gui – ProQuest LLC, 2024
This study explores using transfer learning in machine learning for natural language processing (NLP) to create generic automated essay scoring (AES) models, providing instant online scoring for statewide writing assessments in K-12 education. The goal is to develop an instant online scorer that is generalizable to any prompt, addressing the…
Descriptors: Writing Tests, Natural Language Processing, Writing Evaluation, Scoring
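As a rough illustration of the transfer-learning approach described above, the sketch below fine-tunes a pretrained encoder with a single regression output as a prompt-generic essay scorer. The model name, essays, scores, and hyperparameters are placeholders, not details taken from the dissertation.

    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Pretrained encoder reused for essay-score regression (transfer learning).
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=1, problem_type="regression")

    essays = ["First sample essay ...", "Second sample essay ..."]
    scores = torch.tensor([3.0, 4.5])   # placeholder human holistic scores

    batch = tokenizer(essays, padding=True, truncation=True, return_tensors="pt")
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    model.train()
    out = model(**batch, labels=scores)  # regression head uses MSE loss
    out.loss.backward()                  # one illustrative training step
    optimizer.step()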
Sari, Elif; Han, Turgay – Reading Matrix: An International Online Journal, 2021
Providing effective feedback and ensuring reliable assessment are two central issues in ESL/EFL writing instruction contexts. Giving individual feedback is very difficult in crowded classes, as it requires a great amount of time and effort from instructors. Moreover, instructors are likely to employ inconsistent assessment procedures,…
Descriptors: Automation, Writing Evaluation, Artificial Intelligence, Natural Language Processing
Knight, Simon; Buckingham Shum, Simon; Ryan, Philippa; Sándor, Ágnes; Wang, Xiaolong – International Journal of Artificial Intelligence in Education, 2018
Research into the teaching and assessment of student writing shows that many students find academic writing a challenge to learn, with legal writing no exception. Improving the availability and quality of timely formative feedback is an important aim. However, the time-consuming nature of assessing writing makes it impractical for instructors to…
Descriptors: Writing Evaluation, Natural Language Processing, Legal Education (Professions), Undergraduate Students
Allen, Laura K.; Likens, Aaron D.; McNamara, Danielle S. – Grantee Submission, 2018
The assessment of writing proficiency generally includes analyses of the specific linguistic and rhetorical features contained in the singular essays produced by students. However, researchers have recently proposed that an individual's ability to flexibly adapt the linguistic properties of their writing might more closely capture writing skill.…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Writing Skills
Goh, Tiong-Thye; Sun, Hui; Yang, Bing – Computer Assisted Language Learning, 2020
This study investigates the extent to which microfeatures -- such as basic text features, readability, cohesion, and lexical diversity based on specific word lists -- affect Chinese EFL writing quality. Data analysis was conducted using natural language processing, correlation analysis, and stepwise multiple regression analysis on a corpus of 268…
Descriptors: Essays, Writing Tests, English (Second Language), Second Language Learning
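To make the pipeline described above concrete, here is an illustrative sketch in which forward feature selection with a linear model stands in for the stepwise multiple regression the abstract reports; the feature matrix is synthetic, not the study's corpus of 268 essays.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.feature_selection import SequentialFeatureSelector

    rng = np.random.default_rng(0)
    n_essays = 268
    # Columns stand for microfeatures, e.g. word count, mean sentence length,
    # readability, cohesion, lexical diversity (synthetic values).
    X = rng.normal(size=(n_essays, 5))
    y = 2.0 * X[:, 2] + 1.0 * X[:, 4] + rng.normal(scale=0.5, size=n_essays)

    # Forward selection keeps the features that best predict the quality score.
    selector = SequentialFeatureSelector(
        LinearRegression(), n_features_to_select=2, direction="forward", cv=5)
    selector.fit(X, y)
    kept = selector.get_support(indices=True)

    final = LinearRegression().fit(X[:, kept], y)
    print("selected feature columns:", kept)
    print("R^2 on selected features:", round(final.score(X[:, kept], y), 3))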