Falhasiri, Mohammad – Canadian Journal of Applied Linguistics / Revue canadienne de linguistique appliquée, 2021
An underexplored question, and one with potentially far-reaching implications for the practice of written corrective feedback (WCF), is whether to mark a wide range of errors (comprehensive feedback) or to focus on a few error types (focused feedback) in learners' L2 writing. Despite limited evidence, it is argued that comprehensive WCF is…
Descriptors: Writing Evaluation, Feedback (Response), Error Correction, Second Language Learning
Deng, Yaochen; Lei, Lei; Liu, Dilin – Applied Linguistics, 2021
In the past two decades, syntactic complexity measures (e.g., the length or number of words per clause/t-unit/sentence, the number of clauses per t-unit/sentence, and the types of clauses used) have been widely used to determine and benchmark language proficiency development in speaking and writing (Norris and Ortega 2009; Lu 2011). However, the…
Descriptors: Syntax, Phrase Structure, Second Language Learning, Second Language Instruction
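As a concrete illustration of the measures the Deng, Lei, and Liu abstract enumerates, the minimal Python sketch below computes the standard length- and ratio-based indices from pre-tallied counts. It assumes the counts come from an external parser; the class and function names are illustrative, not from the study.

```python
# Minimal sketch of the syntactic complexity ratios named in the
# entry above. Counts are assumed to be supplied by a parser;
# all names here are illustrative.

from dataclasses import dataclass

@dataclass
class TextCounts:
    words: int      # total words in the sample
    clauses: int    # total clauses
    t_units: int    # total t-units (main clause plus attached subordinates)
    sentences: int  # total sentences

def complexity_ratios(c: TextCounts) -> dict:
    """Return the length- and ratio-based measures the abstract lists."""
    return {
        "mean_len_clause": c.words / c.clauses,       # words per clause
        "mean_len_t_unit": c.words / c.t_units,       # words per t-unit
        "mean_len_sentence": c.words / c.sentences,   # words per sentence
        "clauses_per_t_unit": c.clauses / c.t_units,
        "clauses_per_sentence": c.clauses / c.sentences,
    }

# Example: a 120-word sample with 14 clauses, 9 t-units, 8 sentences.
print(complexity_ratios(TextCounts(words=120, clauses=14, t_units=9, sentences=8)))
```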
Keller-Margulis, Milena A.; Mercer, Sterett H.; Matta, Michael – Reading and Writing: An Interdisciplinary Journal, 2021
Existing approaches to measuring writing performance are insufficient in terms of both technical adequacy and feasibility for use as a screening measure. This study examined the validity and diagnostic accuracy of several approaches to automated text evaluation as well as written expression curriculum-based measurement (WE-CBM) to determine…
Descriptors: Writing Evaluation, Validity, Automation, Curriculum Based Assessment
Romig, John Elwood; Olsen, Amanda A. – Reading & Writing Quarterly, 2021
Compared to other content areas, there is a dearth of research examining curriculum-based measurement of writing (CBM-W). This study conducted a conceptual replication examining the reliability, stability, and sensitivity to growth of slopes produced from CBM-W. Eighty-nine (N = 89) eighth-grade students responded to one CBM-W probe weekly for 11…
Descriptors: Curriculum Based Assessment, Writing Evaluation, Middle School Students, Grade 8
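The "slopes" in the Romig and Olsen entry above are growth rates fitted to repeated weekly probe scores. The sketch below shows one plausible reading: an ordinary least-squares slope over eleven weekly CBM-W scores. The scores are invented for illustration, and the study's actual fitting procedure is not shown here.

```python
# Hedged sketch: growth slope from weekly CBM-W probes via ordinary
# least squares. Data below are hypothetical.

def ols_slope(weeks: list[float], scores: list[float]) -> float:
    """Least-squares growth rate (score units gained per week)."""
    n = len(weeks)
    mean_x = sum(weeks) / n
    mean_y = sum(scores) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, scores))
    var = sum((x - mean_x) ** 2 for x in weeks)
    return cov / var

# Eleven weekly probes (matching the study's design), made-up scores:
weeks = list(range(1, 12))
scores = [21, 22, 24, 23, 26, 27, 27, 29, 30, 31, 33]
print(f"growth: {ols_slope(weeks, scores):.2f} points/week")
```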
Uto, Masaki; Okano, Masashi – IEEE Transactions on Learning Technologies, 2021
In automated essay scoring (AES), scores are automatically assigned to essays as an alternative to grading by humans. Traditional AES typically relies on handcrafted features, whereas recent studies have proposed AES models based on deep neural networks to obviate the need for feature engineering. Those AES models generally require training on a…
Descriptors: Essays, Scoring, Writing Evaluation, Item Response Theory
Keller-Margulis, Milena A.; Mercer, Sterett H.; Matta, Michael – Grantee Submission, 2021
Existing approaches to measuring writing performance are insufficient in terms of both technical adequacy and feasibility for use as a screening measure. This study examined the validity and diagnostic accuracy of several approaches to automated text evaluation as well as written expression curriculum-based measurement (WE-CBM) to determine…
Descriptors: Writing Evaluation, Validity, Automation, Curriculum Based Assessment
Shahzad, Areeba; Wali, Aamir – Education and Information Technologies, 2022
Checking essays written by students is a very time-consuming task. Besides spelling and grammar, essays also need to be evaluated on semantic content such as cohesion, coherence, etc. In this study we focus on one such aspect of semantic content: the topic of the essay. Put formally, given a prompt or essay statement and an…
Descriptors: Computer Uses in Education, Essays, Writing Evaluation, Semantics
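The Shahzad and Wali entry above poses the task formally: given a prompt and an essay, decide whether the essay is on topic. The sketch below is a generic bag-of-words cosine-similarity baseline for that task, not the authors' method; the threshold value is an arbitrary placeholder.

```python
# Generic on-topic baseline: cosine similarity between bag-of-words
# vectors for prompt and essay. Not the authors' model; the 0.2
# threshold is a hypothetical placeholder.

from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def on_topic(prompt: str, essay: str, threshold: float = 0.2) -> bool:
    bow = lambda s: Counter(s.lower().split())
    return cosine(bow(prompt), bow(essay)) >= threshold

print(on_topic("Discuss the impact of social media on teenagers",
               "Social media shapes how teenagers form friendships"))
```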
Polat, Murat; Turhan, Nihan S.; Toraman, Cetin – Pegem Journal of Education and Instruction, 2022
Testing English writing skills can be multidimensional; thus, this study aimed to compare students' writing scores calculated according to Classical Test Theory (CTT) and the Multi-Facet Rasch Model (MFRM). The research was carried out in 2019 with 100 university students studying in a foreign language preparatory class and four experienced…
Descriptors: Comparative Analysis, Test Theory, Item Response Theory, Student Evaluation
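For readers unfamiliar with the two frameworks the Polat, Turhan, and Toraman entry compares, the standard textbook forms are shown below; the study's exact parameterization may differ. Under CTT an observed score decomposes additively into a true score and error, while the many-facet Rasch model places examinees, criteria, raters, and rating-scale thresholds on a common log-odds scale.

```latex
% Classical Test Theory: observed score = true score + error.
\[
  X = T + E
\]
% Many-facet Rasch model (standard form): log-odds that examinee n
% receives category k rather than k-1 from rater j on criterion i.
\[
  \ln\frac{P_{nijk}}{P_{nij(k-1)}} = \theta_n - \delta_i - \alpha_j - \tau_k
\]
% \theta_n: examinee ability;  \delta_i: criterion difficulty;
% \alpha_j: rater severity;    \tau_k: category threshold.
```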
Runge, Timothy J. – Communique, 2022
Reading, writing, and mathematics are widely regarded as foundational academic skills upon which many other academic skills depend. Consequently, each receives a considerable allocation of resources for instruction, assessment, and intervention in K-12 education (Hooper, 2002). An additional indicator of the importance of these skills is the…
Descriptors: High School Students, Writing Skills, Written Language, Writing Evaluation
Jacobson, Brad; Bach, Amy – Journal of Literacy Research, 2022
This article adds to a growing body of research tracing the influence of neoliberal education reforms on policy and practice by showing the ways in which student writers are positioned within market-oriented discourses and values through Texas state exam writing prompts. As a genre-in-use, the writing prompts are seemingly mundane texts that…
Descriptors: Neoliberalism, Educational Change, Standardized Tests, Achievement Tests
Quinn, Margaret F.; Bingham, Gary E. – Early Education and Development, 2022
Early writing is a foundational component of emergent literacy. Despite recent increases in early writing research, studies often focus narrowly on transcription (i.e., letter and/or name writing, spelling) to the exclusion of children's ability to compose, that is, to generate ideas and translate them into writing. Research investigating composing approaches it in…
Descriptors: Emergent Literacy, Writing (Composition), Writing Skills, Writing Evaluation
Kong, Yunjun; Molnár, Edit Katalin; Xu, Nan – SAGE Open, 2022
In research on EFL writing, much attention has been paid to teachers' practices in assessment or feedback, but little is known about teachers' behaviors in these two domains as a whole. There also seems to be a paucity of research on how teachers' reactions to student writing develop from pre-service through in-service. The current study, using a…
Descriptors: Preservice Teachers, Language Teachers, Writing Evaluation, Feedback (Response)
Tremblay, Kathryn A.; Binder, Katherine S.; Chuy, Anneli – COABE Journal: The Resource for Adult Education, 2022
This study sought to examine the pre-task planning and post-task revising practices of adults with low literacy and how those practices affect overall writing quality. Seventy-six adults with low literacy composed essays in response to a prompt and were given time for pre-task planning and post-task revising. Results showed that participants with…
Descriptors: Adult Students, Adult Basic Education, Adult Literacy, Writing Skills
McCaffrey, Daniel F.; Zhang, Mo; Burstein, Jill – Grantee Submission, 2022
Background: This exploratory writing analytics study uses argumentative writing samples from two performance contexts--standardized writing assessments and university English course writing assignments--to compare: (1) linguistic features in argumentative writing; and (2) relationships between linguistic characteristics and academic performance…
Descriptors: Persuasive Discourse, Academic Language, Writing (Composition), Academic Achievement
Tong Li; Sarah D. Creer; Tracy Arner; Rod D. Roscoe; Laura K. Allen; Danielle S. McNamara – Grantee Submission, 2022
Automated writing evaluation (AWE) tools can facilitate teachers' analysis of and feedback on students' writing. However, increasing evidence indicates that writing instructors experience challenges in implementing AWE tools successfully. For this reason, our development of the Writing Analytics Tool (WAT) has employed a participatory approach…
Descriptors: Automation, Writing Evaluation, Learning Analytics, Participatory Research