Publication Date
In 2025 | 0
Since 2024 | 2
Since 2021 (last 5 years) | 17
Since 2016 (last 10 years) | 22
Since 2006 (last 20 years) | 35
Descriptor
Automation | 37
Writing Tests | 37
Scoring | 29
Computer Assisted Testing | 20
Writing Evaluation | 20
Elementary School Students | 12
Essays | 12
Essay Tests | 11
Accuracy | 7
Validity | 7
Curriculum Based Assessment | 6
Author
Mercer, Sterett H. | 5
Keller-Margulis, Milena A. | 3
Matta, Michael | 3
Sterett H. Mercer | 3
Attali, Yigal | 2
Burstein, Jill | 2
Cannon, Joanna E. | 2
Deane, Paul | 2
Guo, Yue | 2
Michael Matta | 2
Milena A. Keller-Margulis | 2
Publication Type
Journal Articles | 28
Reports - Research | 27
Reports - Evaluative | 5
Reports - Descriptive | 3
Information Analyses | 2
Dissertations/Theses -… | 1
Numerical/Quantitative Data | 1
Location
West Virginia | 2
California | 1
Japan | 1
Massachusetts | 1
Taiwan | 1
Utah | 1
Assessments and Surveys
Test of English as a Foreign Language | 4
Graduate Management Admission Test | 2
Graduate Record Examinations | 2
International English Language Testing System | 1
Praxis Series | 1
Test of English for International Communication | 1
Ikkyu Choi; Jiangang Hao; Chen Li; Michael Fauss; Jakub Novák – ETS Research Report Series, 2024
A frequently encountered security issue in writing tests is nonauthentic text submission: test takers submit texts that are not their own but rather are copies of texts prepared by someone else. In this report, we propose AutoESD, a human-in-the-loop and automated system to detect nonauthentic texts for large-scale writing tests, and report its…
Descriptors: Writing Tests, Automation, Cheating, Plagiarism
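The excerpt does not describe AutoESD's detection method, and the sketch below is not that system; it only illustrates, under stated assumptions, the general shape of similarity-based screening for copied submissions: compare each essay against a corpus of known prepared texts and flag close matches for human review. The corpus, threshold, and function name are hypothetical.

    # Minimal sketch (Python) of similarity screening for nonauthentic texts.
    # Hypothetical illustration only -- not AutoESD. The reference corpus and
    # the 0.85 threshold are invented for the example.
    import difflib

    PREPARED_TEXTS = [
        "Hard work is the key to success because ...",             # hypothetical
        "Technology has changed the way people communicate ...",   # hypothetical
    ]

    def flag_nonauthentic(submission: str, threshold: float = 0.85) -> bool:
        """Return True when the submission's best character-level match to
        any known prepared text meets or exceeds the threshold."""
        best = max(
            difflib.SequenceMatcher(None, submission.lower(), text.lower()).ratio()
            for text in PREPARED_TEXTS
        )
        return best >= threshold

In practice, flagged submissions would go to human reviewers rather than being rejected automatically, consistent with the human-in-the-loop design the authors emphasize.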
Jessie S. Barrot – Education and Information Technologies, 2024
This bibliometric analysis attempts to map the scientific literature on automated writing evaluation (AWE) systems for teaching, learning, and assessment. A total of 170 documents published between 2002 and 2021 in Social Sciences Citation Index journals were reviewed along four dimensions, namely size (productivity and citations), time…
Descriptors: Educational Trends, Automation, Computer Assisted Testing, Writing Tests
Huawei, Shi; Aryadoust, Vahid – Education and Information Technologies, 2023
Automated writing evaluation (AWE) systems are built on interdisciplinary research and technological advances such as natural language processing, computer science, and latent semantic analysis. Despite a steady increase in research publications in this area, the results of AWE investigations are often mixed, and their validity may be…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Automation
Beseiso, Majdi; Alzubi, Omar A.; Rashaideh, Hasan – Journal of Computing in Higher Education, 2021
E-learning is gradually gaining prominence in higher education, with universities expanding provision and more students enrolling. The effectiveness of automated essay scoring (AES) thus holds strong appeal for universities seeking to manage growing learning demand and reduce the costs associated with human raters. The growth in…
Descriptors: Automation, Scoring, Essays, Writing Tests
Implications of Bias in Automated Writing Quality Scores for Fair and Equitable Assessment Decisions
Matta, Michael; Mercer, Sterett H.; Keller-Margulis, Milena A. – School Psychology, 2023
Recent advances in automated writing evaluation have enabled educators to use automated writing quality scores to improve assessment feasibility. However, there has been limited investigation of bias in automated writing quality scores for students from diverse racial or ethnic backgrounds. The use of biased scores could contribute to…
Descriptors: Bias, Automation, Writing Evaluation, Scoring
Implications of Bias in Automated Writing Quality Scores for Fair and Equitable Assessment Decisions
Michael Matta; Sterett H. Mercer; Milena A. Keller-Margulis – Grantee Submission, 2023
Recent advances in automated writing evaluation have enabled educators to use automated writing quality scores to improve assessment feasibility. However, there has been limited investigation of bias in automated writing quality scores for students from diverse racial or ethnic backgrounds. The use of biased scores could contribute to…
Descriptors: Bias, Automation, Writing Evaluation, Scoring
Li, Rui – British Journal of Educational Technology, 2023
Despite the popularity of automated writing evaluation (AWE), which has provoked increased scholarly interest, synthesized research that comprehensively examines its pedagogical effects remains scarce. To fill this gap, this study aims to meta-analyse the overall effect of AWE on learners' writing skill development and whether the effect…
Descriptors: Writing Evaluation, Writing Tests, Automation, Writing Skills
Michael Matta; Milena A. Keller-Margulis; Sterett H. Mercer – Grantee Submission, 2022
Although researchers have investigated the technical adequacy and usability of written-expression curriculum-based measures (WE-CBM), the economic implications of different scoring approaches have largely been ignored. The absence of such knowledge can undermine the effective allocation of resources and lead to the adoption of suboptimal measures for…
Descriptors: Cost Effectiveness, Scoring, Automation, Writing Tests
Keller-Margulis, Milena A.; Mercer, Sterett H.; Matta, Michael – Reading and Writing: An Interdisciplinary Journal, 2021
Existing approaches to measuring writing performance are insufficient in terms of both technical adequacy and feasibility for use as a screening measure. This study examined the validity and diagnostic accuracy of several approaches to automated text evaluation as well as written expression curriculum-based measurement (WE-CBM) to determine…
Descriptors: Writing Evaluation, Validity, Automation, Curriculum Based Assessment
Mercer, Sterett H.; Cannon, Joanna E.; Squires, Bonita; Guo, Yue; Pinco, Ella – Canadian Journal of School Psychology, 2021
We examined the extent to which automated written expression curriculum-based measurement (aWE-CBM) can accurately computer-score student writing samples for screening and progress monitoring. Students (n = 174) with learning difficulties in Grades 1 to 12 who received 1:1 academic tutoring through a community-based organization…
Descriptors: Curriculum Based Assessment, Automation, Scoring, Writing Tests
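The excerpt does not list the specific metrics aWE-CBM computes; as a hedged illustration only, written-expression CBM scoring typically automates surface counts such as total words written (TWW) and words spelled correctly (WSC). The tiny dictionary below is a hypothetical stand-in for a real spell-checking lexicon.

    # Toy sketch (Python) of two classic WE-CBM surface metrics.
    # Illustrative only; not the aWE-CBM system from the study above.
    DICTIONARY = {"the", "dog", "ran", "fast", "and", "barked"}  # hypothetical lexicon

    def total_words_written(sample: str) -> int:
        """Count whitespace-delimited words (TWW)."""
        return len(sample.split())

    def words_spelled_correctly(sample: str) -> int:
        """Count words found in the lexicon after stripping punctuation (WSC)."""
        return sum(
            1 for w in sample.split() if w.strip(".,!?").lower() in DICTIONARY
        )

    sample = "The dog ran fazt and barked."
    print(total_words_written(sample), words_spelled_correctly(sample))  # 6 5

Automating counts like these is what makes frequent screening and progress monitoring feasible at scale, which is the use case the study evaluates.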
Keller-Margulis, Milena A.; Mercer, Sterett H.; Matta, Michael – Grantee Submission, 2021
Existing approaches to measuring writing performance are insufficient in terms of both technical adequacy and feasibility for use as a screening measure. This study examined the validity and diagnostic accuracy of several approaches to automated text evaluation as well as written expression curriculum-based measurement (WE-CBM) to determine…
Descriptors: Writing Evaluation, Validity, Automation, Curriculum Based Assessment
Mercer, Sterett H.; Cannon, Joanna E.; Squires, Bonita; Guo, Yue; Pinco, Ella – Grantee Submission, 2021
We examined the extent to which automated written expression curriculum-based measurement (aWE-CBM) can accurately computer-score student writing samples for screening and progress monitoring. Students (n = 174) with learning difficulties in Grades 1-12 who received 1:1 academic tutoring through a community-based organization completed…
Descriptors: Curriculum Based Assessment, Automation, Scoring, Writing Tests
Joshua Kloppers – International Journal of Computer-Assisted Language Learning and Teaching, 2023
Automated writing evaluation (AWE) software is an increasingly popular tool for learners of English as a second language. However, research on the accuracy of such software has been scarce and limited in scope. This article therefore broadens the field of research on AWE accuracy by using a mixed design to holistically evaluate the…
Descriptors: Grammar, Automation, Writing Evaluation, Computer Assisted Instruction
Wilson, Joshua; Rodrigues, Jessica – Grantee Submission, 2020
The present study leveraged advances in automated essay scoring (AES) technology to explore a proof of concept for a writing screener using the "Project Essay Grade" (PEG) program. First, the study investigated the extent to which an AES-scored multi-prompt writing screener accurately classified students as at risk of failing a Common…
Descriptors: Writing Tests, Screening Tests, Classification, Accuracy
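The excerpt reports how accurately the screener classified students as at risk but does not give its cut scores; the sketch below shows, with invented data and a hypothetical cut score, how such classification accuracy is commonly summarized as sensitivity and specificity.

    # Sketch (Python) of diagnostic accuracy for a score-based screener.
    # All scores, labels, and the cut score are hypothetical.
    def sensitivity_specificity(scores, at_risk, cut):
        """Predict 'at risk' when score < cut, then compare to true labels."""
        tp = sum(s < cut and r for s, r in zip(scores, at_risk))
        fn = sum(s >= cut and r for s, r in zip(scores, at_risk))
        tn = sum(s >= cut and not r for s, r in zip(scores, at_risk))
        fp = sum(s < cut and not r for s, r in zip(scores, at_risk))
        return tp / (tp + fn), tn / (tn + fp)

    scores = [12, 18, 9, 22, 13, 16]                   # hypothetical screener scores
    at_risk = [True, False, True, False, False, True]  # hypothetical true status
    print(sensitivity_specificity(scores, at_risk, cut=14))  # ~(0.667, 0.667)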
Bateson, Gordon – International Journal of Computer-Assisted Language Learning and Teaching, 2021
As a result of the Japanese Ministry of Education's recent edict that students' written and spoken English should be assessed in university entrance exams, there is an urgent need for tools to help teachers and students prepare for these exams. Although some commercial tools already exist, they are generally expensive and inflexible. To address…
Descriptors: Test Construction, Computer Assisted Testing, Internet, Writing Tests