Publication Date
| Period | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 161 |
| Since 2022 (last 5 years) | 772 |
| Since 2017 (last 10 years) | 1633 |
| Since 2007 (last 20 years) | 2443 |
Audience
| Audience | Results |
| --- | --- |
| Practitioners | 585 |
| Teachers | 484 |
| Researchers | 103 |
| Students | 48 |
| Administrators | 43 |
| Policymakers | 13 |
| Parents | 8 |
| Community | 1 |
| Counselors | 1 |
| Media Staff | 1 |
Location
| Location | Results |
| --- | --- |
| Canada | 146 |
| China | 128 |
| Turkey | 72 |
| Iran | 70 |
| Australia | 68 |
| California | 49 |
| United Kingdom | 45 |
| Indonesia | 44 |
| Japan | 44 |
| Thailand | 38 |
| Saudi Arabia | 37 |
What Works Clearinghouse Rating
| Rating | Results |
| --- | --- |
| Meets WWC Standards without Reservations | 1 |
| Meets WWC Standards with or without Reservations | 5 |
| Does not meet standards | 4 |
Myers, Aaron J.; Ames, Allison J.; Leventhal, Brian C.; Holzman, Madison A. – Applied Measurement in Education, 2020
When rating performance assessments, raters may assign different scores to the same performance when their application of the rubric does not align with the intended application of the scoring criteria. Given that the interpretation of performance assessment scores assumes raters apply rubrics as the rubric developers intended, misalignment between raters' scoring processes…
Descriptors: Scoring Rubrics, Validity, Item Response Theory, Interrater Reliability
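The interrater-reliability concern in the entry above can be made concrete with quadratic-weighted kappa, a standard agreement statistic for ordinal rubric scores. The sketch below is illustrative only, not the authors' IRT model, and the rater score arrays are hypothetical.

```python
import numpy as np

def quadratic_weighted_kappa(r1, r2, n_levels):
    """Agreement between two raters on an ordinal rubric scale 0..n_levels-1."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    # Observed joint frequency matrix, normalized to proportions
    obs = np.zeros((n_levels, n_levels))
    for a, b in zip(r1, r2):
        obs[a, b] += 1
    obs /= obs.sum()
    # Expected matrix under rater independence (outer product of marginals)
    exp = np.outer(np.bincount(r1, minlength=n_levels),
                   np.bincount(r2, minlength=n_levels)).astype(float)
    exp /= exp.sum()
    # Quadratic disagreement weights: larger penalty for larger score gaps
    i, j = np.indices((n_levels, n_levels))
    w = ((i - j) ** 2) / (n_levels - 1) ** 2
    return 1 - (w * obs).sum() / (w * exp).sum()

# Hypothetical rubric scores (0-4) from two raters on ten performances
rater_a = [3, 2, 4, 1, 0, 3, 2, 4, 3, 1]
rater_b = [3, 3, 4, 1, 1, 2, 2, 4, 3, 0]
print(round(quadratic_weighted_kappa(rater_a, rater_b, n_levels=5), 3))
```

Values near 1 indicate raters are applying the rubric consistently; values near 0 indicate agreement no better than chance.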
Wan, Qian; Crossley, Scott; Allen, Laura; McNamara, Danielle – Grantee Submission, 2020
In this paper, we extracted content-based and structure-based features of text to predict human annotations for claims and nonclaims in argumentative essays. We compared Logistic Regression, Bernoulli Naive Bayes, Gaussian Naive Bayes, Linear Support Vector Classification, Random Forest, and Neural Networks to train classification models. Random…
Descriptors: Persuasive Discourse, Essays, Writing Evaluation, Natural Language Processing
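Wan et al.'s model comparison maps onto a standard scikit-learn loop. The sketch below is a minimal reconstruction under assumptions: synthetic data stands in for their content-based and structure-based essay features, and the claim/non-claim labels are hypothetical.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import BernoulliNB, GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import LinearSVC

# Stand-in for content- and structure-based essay features (hypothetical)
X, y = make_classification(n_samples=500, n_features=40, random_state=0)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Bernoulli Naive Bayes": BernoulliNB(),
    "Gaussian Naive Bayes": GaussianNB(),
    "Linear SVC": LinearSVC(),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Neural Network": MLPClassifier(max_iter=1000, random_state=0),
}

for name, model in models.items():
    # 5-fold cross-validated accuracy for each candidate classifier
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:>22}: {scores.mean():.3f}")
```

Cross-validation gives a like-for-like comparison across the six model families named in the abstract; the paper's actual features, metric, and results are not reproduced here.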
Zhang, Haoran; Litman, Diane – Grantee Submission, 2020
While automated essay scoring (AES) can reliably grade essays at scale, automated writing evaluation (AWE) additionally provides formative feedback to guide essay revision. However, neural AES models typically do not provide feature representations that are useful for supporting AWE. This paper presents a method for linking AWE and neural AES, by extracting…
Descriptors: Computer Assisted Testing, Scoring, Essay Tests, Writing Evaluation
Elena Shvidko – Journal on Empowering Teaching Excellence, 2020
Providing feedback on student work is a fundamental aspect of instruction and an important part of the learning process. A considerable amount of literature describes the pedagogical value of different types of feedback--explicit vs. implicit, comprehensive vs. selective, direct vs. indirect, and feedback on content vs. feedback on form--thus…
Descriptors: Feedback (Response), Writing (Composition), Teacher Response, Teacher Student Relationship
Husemann, Charlotte – History Education Research Journal, 2023
The purpose of this study was to examine the writing skills of seventh- and eighth-grade students, a high proportion of whom have a migration background, in North Rhine-Westphalia, Germany. The study was part of the SchriFT project (2017-20), funded by the Federal Ministry of Education and Research. A writing task was given on the topic: "Why can we only…
Descriptors: Content Area Writing, Writing Skills, Thinking Skills, History Instruction
Beck, Sarah W.; del Calvo, Andrew O. – Literacy, 2023
Though discipline-specific approaches to literacy instruction can support adolescents' academic literacy and identity development, scant attention has been paid to ways of targeting such instruction to address individual student needs. Dialogic writing assessment is an approach to conducting writing conferences that foregrounds students' composing…
Descriptors: Writing Evaluation, Dialogs (Language), Social Studies, History Instruction
Qianqian Zhang-Wu; Alison Stephens; Neal Lerner – Composition Studies, 2023
Our research explores the meaningful writing experiences of 325 undergraduate students who self-identify as multilingual. Through qualitative coding of open-ended survey data, we found that respondents considered their writing meaningful when it allowed them to make personal and relevant connections and learn new skills and strategies. Our…
Descriptors: Undergraduate Students, Multilingualism, Writing Instruction, Writing Assignments
Carla Wood; Miguel Garcia-Salas; Christopher Schatschneider – Grantee Submission, 2023
Purpose: The aim of this study was to advance the analysis of written language transcripts by validating an automated scoring procedure that uses an open-access tool to calculate morphological complexity (MC). Method: The MC of words in 146 written responses from fifth-grade students was assessed using two…
Descriptors: Automation, Computer Assisted Testing, Scoring, Computation
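To give a sense of what automated morphological-complexity scoring involves, the toy function below computes a crude proxy: the proportion of words in a transcript carrying a recognizable derivational suffix. It is a sketch under assumed definitions, not the open-access tool the authors validated, and the suffix list is hypothetical.

```python
import re

# Hypothetical derivational suffix list; a real tool would use a proper
# morphological parser rather than surface string matching.
SUFFIXES = ("ization", "ation", "ness", "ment", "able", "ible",
            "ful", "less", "ity", "ous", "ive")

def morphological_complexity(transcript: str) -> float:
    """Proportion of words with a recognizable derivational suffix (crude proxy)."""
    words = re.findall(r"[a-z]+", transcript.lower())
    if not words:
        return 0.0
    complex_words = sum(
        any(w.endswith(s) and len(w) > len(s) + 2 for s in SUFFIXES)
        for w in words
    )
    return complex_words / len(words)

print(morphological_complexity(
    "The careless organization of the statement was noticeable."))
```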
Deborah K. Reed; Kelly Binning; Emily A. Jemison; Nicole DeSalle – Learning Disabilities Research & Practice, 2023
Increased expectations for writing performance have created a need for formative writing assessments that will help middle school teachers better understand adolescents' grade-appropriate writing skills and monitor the progress of students with or at risk for writing disabilities. In this practice piece, we first explain research-based…
Descriptors: Formative Evaluation, Writing Evaluation, Middle School Students, Prompting
Megumi E. Takada; Christopher J. Lemons; Lakshmi Balasubramanian; Bonnie T. Hallman; Stephanie Al Otaiba; Cynthia S. Puranik – Grantee Submission, 2023
There have been a handful of studies on kindergarteners' motivational beliefs about writing, yet measuring these beliefs in young children continues to pose challenges. The purpose of this exploratory, mixed-methods study was to examine how kindergarteners understand and respond to different assessment formats designed to capture their…
Descriptors: Kindergarten, Young Children, Student Attitudes, Student Motivation
Rebecca Hallman Martini – Writing Center Journal, 2023
Despite their history of marginalization, writing centers need to be spaces where consultants, writers, and administrators act with agency. This requires knowing both when and how to act and when to yield. In challenging policies of seeming neutrality, I argue in this manuscript that writing center practitioners can center the…
Descriptors: Writing Instruction, Writing (Composition), Laboratories, Writing Teachers
Zhang, Haoran; Litman, Diane – Grantee Submission, 2018
This paper presents an investigation of using a co-attention based neural network for source-dependent essay scoring. We use a co-attention mechanism to help the model learn the importance of each part of the essay more accurately. Also, this paper shows that the co-attention based neural network model provides reliable score prediction of…
Descriptors: Essays, Scoring, Automation, Artificial Intelligence
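The co-attention mechanism named above can be sketched generically in NumPy: an affinity matrix between essay and source-article token embeddings drives attention in both directions. This is a minimal sketch under assumed shapes with random weights, not Zhang and Litman's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical token embeddings: 12 essay tokens, 8 source tokens, dim 16
essay = rng.standard_normal((12, 16))
source = rng.standard_normal((8, 16))
W = rng.standard_normal((16, 16))            # bilinear weights (learned in practice)

affinity = essay @ W @ source.T              # (12, 8) essay-source affinity scores
essay_to_source = softmax(affinity, axis=1)  # each essay token attends over source
source_to_essay = softmax(affinity.T, axis=1)

# Context vectors: source info summarized per essay token, and vice versa
source_ctx = essay_to_source @ source        # (12, 16)
essay_ctx = source_to_essay @ essay          # (8, 16)
print(source_ctx.shape, essay_ctx.shape)
```

In a trained model, W would be learned and the context vectors would feed the scoring layers, letting the scorer weight each part of the essay by its relevance to the source.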
Pruchnic, Jeff; Barton, Ellen; Primeau, Sarah; Trimble, Thomas; Varty, Nicole; Foster, Tanina – Composition Forum, 2021
Over the past two decades, reflective writing has occupied an increasingly prominent position in composition theory, pedagogy, and assessment as researchers have described the value of reflection and reflective writing in college students' development of higher-order writing skills, such as genre conventions (Yancey, "Reflection";…
Descriptors: Reflection, Correlation, Essays, Freshman Composition
Sipitanos, Konstantinos – International Journal of Education and Literacy Studies, 2021
Critical literacy practices have shifted their interest from Freirean binary analyses (e.g., oppressor versus oppressed) to more complex perspectives, in which the author/speaker of a text is (dis)aligned with different discourse communities. Although these teaching practices based in multiple discourses are gaining attention,…
Descriptors: Foreign Countries, Junior High School Students, Student Evaluation, Critical Literacy
Shin, Jinnie; Gierl, Mark J. – Language Testing, 2021
Automated essay scoring (AES) has emerged as a secondary or sole marker for many high-stakes educational assessments, in both native and non-native testing, owing to remarkable advances in feature engineering using natural language processing, machine learning, and deep neural algorithms. The purpose of this study is to compare the effectiveness…
Descriptors: Scoring, Essays, Writing Evaluation, Computer Software

