Publication Date
In 2025: 4
Since 2024: 10
Since 2021 (last 5 years): 48
Since 2016 (last 10 years): 96
Since 2006 (last 20 years): 159
Descriptor
Computer Assisted Testing: 188
Writing Evaluation: 188
Essays: 75
Scoring: 75
Writing Tests: 57
English (Second Language): 52
Second Language Learning: 49
Foreign Countries: 45
Writing Skills: 44
Scores: 38
Writing Instruction: 38
Author
Wilson, Joshua: 7
Attali, Yigal: 5
Burstein, Jill: 4
Deane, Paul: 4
McNamara, Danielle S.: 4
Ramineni, Chaitanya: 4
Bridgeman, Brent: 3
Crossley, Scott A.: 3
Lee, Yong-Won: 3
Mercer, Sterett H.: 3
Williamson, David M.: 3
Audience
Researchers: 3
Practitioners: 2
Teachers: 2
Administrators: 1
What Works Clearinghouse Rating
Does not meet standards: 1
Jessie S. Barrot – Education and Information Technologies, 2024
This bibliometric analysis attempts to map out the scientific literature on automated writing evaluation (AWE) systems for teaching, learning, and assessment. A total of 170 documents published between 2002 and 2021 in Social Sciences Citation Index journals were reviewed from four dimensions, namely size (productivity and citations), time…
Descriptors: Educational Trends, Automation, Computer Assisted Testing, Writing Tests
Huawei, Shi; Aryadoust, Vahid – Education and Information Technologies, 2023
Automated writing evaluation (AWE) systems are developed based on interdisciplinary research and technological advances such as natural language processing, computer science, and latent semantic analysis. Despite a steady increase in research publications in this area, the results of AWE investigations are often mixed, and their validity may be…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Automation
Xiong, Jiawei; Li, Feiming – Educational Measurement: Issues and Practice, 2023
Multidimensional scoring evaluates each constructed-response answer on more than one rating dimension or trait, such as lexicon, organization, and supporting ideas, rather than with a single holistic score, to help students distinguish between various dimensions of writing quality. In this work, we present a bilevel learning model for combining two…
Descriptors: Scoring, Models, Task Analysis, Learning Processes
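The abstract cuts off before the model details, so the sketch below does not reproduce the authors' bilevel model; it only illustrates the underlying idea of predicting several trait scores at once, here with a generic scikit-learn multi-output regressor fit on hypothetical essays and per-trait labels.

```python
# Minimal multi-trait scoring sketch (not the paper's bilevel model):
# predict several writing-quality dimensions at once with one regressor.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor

# Hypothetical training data: essays with per-trait scores
# (lexicon, organization, support), each on a 0-4 scale.
essays = ["The essay argues clearly ...", "Ideas are listed without order ..."]
trait_scores = [[3, 4, 3], [2, 1, 2]]  # rows align with essays

features = TfidfVectorizer(max_features=5000)
X = features.fit_transform(essays)

# One ridge regressor per trait; a real system would share structure
# across traits, which is what a bilevel model is designed to do.
model = MultiOutputRegressor(Ridge(alpha=1.0)).fit(X, trait_scores)
print(model.predict(features.transform(["A new essay to score ..."])))
```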
Dhini, Bachriah Fatwa; Girsang, Abba Suganda; Sufandi, Unggul Utan; Kurniawati, Heny – Asian Association of Open Universities Journal, 2023
Purpose: The authors constructed an automatic essay scoring (AES) model in a discussion forum whose results were compared with scores given by human evaluators. This research proposes essay scoring conducted through two parameters, semantic and keyword similarity, using a SentenceTransformers pre-trained model that can construct the…
Descriptors: Computer Assisted Testing, Scoring, Writing Evaluation, Essays
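A minimal sketch of the two-parameter idea described above, assuming the sentence-transformers library, an illustrative checkpoint (all-MiniLM-L6-v2), and arbitrary combination weights; the paper's actual model and weighting are not reproduced here.

```python
# Sketch: score an answer by combining semantic similarity (cosine of
# SentenceTransformers embeddings) with simple keyword overlap.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint

reference = "Photosynthesis converts light energy into chemical energy."
answer = "Plants turn sunlight into chemical energy during photosynthesis."

# Semantic similarity: cosine between sentence embeddings.
emb = model.encode([reference, answer], convert_to_tensor=True)
semantic = util.cos_sim(emb[0], emb[1]).item()

# Keyword similarity: Jaccard overlap of lowercase word sets.
ref_words, ans_words = set(reference.lower().split()), set(answer.lower().split())
keyword = len(ref_words & ans_words) / len(ref_words | ans_words)

# Combine the two parameters; the 0.7/0.3 weights are arbitrary here.
score = 0.7 * semantic + 0.3 * keyword
print(f"semantic={semantic:.2f} keyword={keyword:.2f} score={score:.2f}")
```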
Doewes, Afrizal; Kurdhi, Nughthoh Arfawi; Saxena, Akrati – International Educational Data Mining Society, 2023
Automated Essay Scoring (AES) tools aim to improve the efficiency and consistency of essay scoring by using machine learning algorithms. In existing research on this topic, most researchers agree that human-automated score agreement remains the benchmark for assessing the accuracy of machine-generated scores. To measure the performance of…
Descriptors: Essays, Writing Evaluation, Evaluators, Accuracy
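The truncated abstract does not name the agreement statistic, but quadratic weighted kappa (QWK) is the benchmark most AES studies report for human-automated score agreement; a self-contained implementation over illustrative score vectors might look like this.

```python
# Quadratic weighted kappa (QWK) between human and machine scores.
import numpy as np

def quadratic_weighted_kappa(human, machine, min_score, max_score):
    n = max_score - min_score + 1
    # Observed score matrix and quadratic disagreement weights.
    observed = np.zeros((n, n))
    for h, m in zip(human, machine):
        observed[h - min_score, m - min_score] += 1
    weights = np.array([[(i - j) ** 2 / (n - 1) ** 2 for j in range(n)]
                        for i in range(n)])
    # Expected matrix from the marginal score distributions,
    # normalized to the same total count as the observed matrix.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    expected *= observed.sum() / expected.sum()
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

human = [1, 2, 3, 4, 3, 2]    # hypothetical human ratings
machine = [1, 2, 4, 4, 3, 1]  # hypothetical machine scores
print(quadratic_weighted_kappa(human, machine, 1, 4))
```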
Yang Jiang; Mo Zhang; Jiangang Hao; Paul Deane; Chen Li – Journal of Educational Measurement, 2024
The emergence of sophisticated AI tools such as ChatGPT, coupled with the transition to remote delivery of educational assessments in the COVID-19 era, has led to increasing concerns about academic integrity and test security. Using AI tools, test takers can produce high-quality texts effortlessly and use them to game assessments. It is thus…
Descriptors: Integrity, Artificial Intelligence, Technology Uses in Education, Ethics
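The abstract is cut off before the authors' approach is described; one common baseline for flagging machine-generated responses is a supervised text classifier, so the sketch below illustrates only that general idea, not the paper's method (the data, features, and model are all assumptions).

```python
# Hypothetical baseline: TF-IDF features + logistic regression to
# distinguish AI-generated from human-written responses.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = AI-generated, 0 = human-written.
texts = [
    "Education is a cornerstone of modern society, fostering growth.",
    "i think the test was hard but i studied so it went ok",
    "In conclusion, the aforementioned factors collectively demonstrate.",
    "my essay is about my dog because we got him last summer",
]
labels = [1, 0, 1, 0]

detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and bigram features
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)
print(detector.predict_proba(["The essay exhibits remarkable coherence."]))
```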
Yue Huang; Joshua Wilson – Journal of Computer Assisted Learning, 2025
Background: Automated writing evaluation (AWE) systems, used as formative assessment tools in writing classrooms, are promising for enhancing instruction and improving student performance. Although meta-analytic evidence supports AWE's effectiveness in various contexts, research on its effectiveness in the U.S. K-12 setting has lagged behind its…
Descriptors: Writing Evaluation, Writing Skills, Writing Tests, Writing Instruction
Almusharraf, Norah; Alotaibi, Hind – Technology, Knowledge and Learning, 2023
Evaluating written texts is believed to be a time-consuming process that can lack consistency and objectivity. Automated essay scoring (AES) can provide solutions to some of the limitations of human scoring. This research aimed to evaluate the performance of one AES system, Grammarly, in comparison to human raters. Both approaches' performances…
Descriptors: Writing Evaluation, Writing Tests, Essay Tests, Essays
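Comparisons of an AES system with human raters are typically summarized with statistics such as exact agreement, adjacent agreement, and Pearson correlation; a short sketch over illustrative score vectors (not the study's data) follows.

```python
# Common human-machine comparison statistics for AES evaluation.
import numpy as np

human = np.array([3, 4, 2, 5, 3, 4, 2, 3])  # hypothetical human scores
aes = np.array([3, 4, 3, 4, 3, 5, 2, 3])    # hypothetical AES scores

exact = np.mean(human == aes)                 # identical scores
adjacent = np.mean(np.abs(human - aes) <= 1)  # within one point
pearson = np.corrcoef(human, aes)[0, 1]       # linear association

print(f"exact={exact:.2f} adjacent={adjacent:.2f} r={pearson:.2f}")
```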
Shujun Liu; Azzeddine Boudouaia; Xinya Chen; Yan Li – Asia-Pacific Education Researcher, 2025
The application of Automated Writing Evaluation (AWE) has recently gained researchers' attention worldwide. However, the impact of AWE feedback on student writing, particularly in languages other than English, remains controversial. This study aimed to compare the impacts of Chinese AWE feedback and teacher feedback on Chinese writing revision,…
Descriptors: Foreign Countries, Middle School Students, Grade 7, Writing Evaluation
On-Soon Lee – Journal of Pan-Pacific Association of Applied Linguistics, 2024
Despite the increasing interest in using AI tools as assistant agents in instructional settings, the effectiveness of ChatGPT, the generative pretrained AI, for evaluating the accuracy of second language (L2) writing has been largely unexplored in formative assessment. Therefore, the current study aims to examine how ChatGPT, as an evaluator,…
Descriptors: Foreign Countries, Undergraduate Students, English (Second Language), Second Language Learning
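A sketch of the general setup, using a chat model as a formative evaluator of L2 writing; the model name, rubric, and prompt below are illustrative assumptions, not the study's protocol.

```python
# Ask a chat model to evaluate grammatical accuracy in an L2 essay.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

essay = "Yesterday I go to library for study my exam."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat model would do
    messages=[
        {"role": "system",
         "content": "You are an L2 writing evaluator. Rate grammatical "
                    "accuracy from 1-5 and list each error with a fix."},
        {"role": "user", "content": essay},
    ],
)
print(response.choices[0].message.content)
```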
Saha, Sujan Kumar – Smart Learning Environments, 2021
In this paper, we present a system for automatically evaluating the quality of a question paper. Question papers play a major role in educational assessment, and their quality is crucial to fulfilling the purpose of the assessment. In many education sectors, question papers are prepared manually. A prior analysis of a question paper…
Descriptors: Foreign Countries, Graduate Students, Writing Assignments, Writing Evaluation
Li, Xu; Ouyang, Fan; Liu, Jianwen; Wei, Chengkun; Chen, Wenzhi – Journal of Educational Computing Research, 2023
The computer-supported writing assessment (CSWA) has been widely used to reduce instructor workload and provide real-time feedback. Interpretability of CSWA draws extensive attention because it can benefit the validity, transparency, and knowledge-aware feedback of academic writing assessments. This study proposes a novel assessment tool,…
Descriptors: Computer Assisted Testing, Writing Evaluation, Feedback (Response), Natural Language Processing
Uto, Masaki; Okano, Masashi – IEEE Transactions on Learning Technologies, 2021
In automated essay scoring (AES), scores are automatically assigned to essays as an alternative to grading by humans. Traditional AES typically relies on handcrafted features, whereas recent studies have proposed AES models based on deep neural networks to obviate the need for feature engineering. Those AES models generally require training on a…
Descriptors: Essays, Scoring, Writing Evaluation, Item Response Theory
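As a minimal sketch of the feature-engineering-free idea (not the authors' model, whose IRT integration the abstract only hints at), a tiny PyTorch network can embed tokens, mean-pool, and regress to a holistic score.

```python
# Minimal neural AES sketch: embed tokens, mean-pool, regress to a score.
import torch
import torch.nn as nn

class TinyAES(nn.Module):
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, 1)  # single holistic score

    def forward(self, token_ids):  # token_ids: (batch, seq_len)
        pooled = self.embed(token_ids).mean(dim=1)  # average word vectors
        return self.head(pooled).squeeze(-1)

model = TinyAES()
tokens = torch.randint(0, 1000, (2, 20))  # two dummy essays, 20 tokens each
scores = torch.tensor([3.0, 4.0])         # dummy gold scores

# One training step: minimize squared error against the gold scores.
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
opt.zero_grad()
loss = nn.functional.mse_loss(model(tokens), scores)
loss.backward()
opt.step()
print(model(tokens).detach())
```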
Yishen Song; Qianta Zhu; Huaibo Wang; Qinhua Zheng – IEEE Transactions on Learning Technologies, 2024
Manually scoring and revising student essays has long been a time-consuming task for educators. With the rise of natural language processing techniques, automated essay scoring (AES) and automated essay revising (AER) have emerged to alleviate this burden. However, current AES and AER models require large amounts of training data and lack…
Descriptors: Scoring, Essays, Writing Evaluation, Computer Software
Nikolic, Sasha; Daniel, Scott; Haque, Rezwanul; Belkina, Marina; Hassan, Ghulam M.; Grundy, Sarah; Lyden, Sarah; Neal, Peter; Sandison, Caz – European Journal of Engineering Education, 2023
ChatGPT, a sophisticated online chatbot, sent shockwaves through many sectors once reports filtered through that it could pass exams. In higher education, it has raised many questions about the authenticity of assessment and challenges in detecting plagiarism. Amongst the resulting frenetic hubbub, hints of potential opportunities in how ChatGPT…
Descriptors: Artificial Intelligence, Performance Based Assessment, Engineering Education, Integrity