Deane, Paul – Assessing Writing, 2013
This paper examines the construct measured by automated essay scoring (AES) systems. AES systems measure features of the text structure, linguistic structure, and conventional print form of essays; as such, the systems primarily measure text production skills. In the current state of the art, AES systems provide little direct evidence about such matters…
Descriptors: Scoring, Essays, Text Structure, Writing (Composition)
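Deane's claim that AES systems primarily measure text production skills can be made concrete with a toy feature extractor. The Python sketch below is a minimal illustration, not any vendor's actual scoring model; the feature names, regexes, and the paragraph heuristic are all assumptions chosen for clarity.

```python
import re
from statistics import mean

def text_production_features(essay: str) -> dict:
    """Toy versions of the surface features AES systems tend to measure:
    text length, sentence structure, and print conventions (Deane's
    "text production" construct). Illustrative only."""
    # Crude sentence/word segmentation; real engines use trained parsers.
    sentences = [s for s in re.split(r"[.!?]+\s+", essay.strip()) if s]
    words = re.findall(r"[A-Za-z']+", essay)
    return {
        "word_count": len(words),                        # fluency/length
        "mean_sentence_length": mean(
            len(re.findall(r"[A-Za-z']+", s)) for s in sentences
        ) if sentences else 0.0,
        "mean_word_length": mean(len(w) for w in words) if words else 0.0,
        "paragraph_count": essay.count("\n\n") + 1,      # organization proxy
        "capitalized_sentence_ratio": (                  # print conventions
            sum(s[0].isupper() for s in sentences) / len(sentences)
        ) if sentences else 0.0,
    }

print(text_production_features("Writing is hard. it takes practice and time."))
```

Features like these are typically regressed against human scores, which is why the resulting construct sits closer to text production than to the quality of ideas or argument.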
Ramineni, Chaitanya – Assessing Writing, 2013
In this paper, I describe the design and evaluation of automated essay scoring (AES) models for an institution's writing placement program. Information was gathered on admitted student writing performance at a science and technology research university in the northeastern United States. Under timed conditions, first-year students (N = 879) were…
Descriptors: Validity, Comparative Analysis, Internet, Student Placement
Klobucar, Andrew; Elliot, Norbert; Deess, Perry; Rudniy, Oleksandr; Joshi, Kamal – Assessing Writing, 2013
This study investigated the use of automated essay scoring (AES) to identify at-risk students enrolled in a first-year university writing course. An application of AES, the "Criterion"® Online Writing Evaluation Service, was evaluated through a methodology focusing on construct modelling, response processes, disaggregation, extrapolation,…
Descriptors: Writing Evaluation, Scoring, Writing Instruction, Essays
Condon, William – Assessing Writing, 2013
Automated Essay Scoring (AES) has garnered a great deal of attention from the rhetoric and composition/writing studies community since the Educational Testing Service began using e-rater® and the "Criterion"® Online Writing Evaluation Service as products in scoring writing tests, and most of the responses have been negative. While the…
Descriptors: Measurement, Psychometrics, Evaluation Methods, Educational Testing
Weigle, Sara Cushing – Assessing Writing, 2013
This article presents considerations for using automated scoring systems to evaluate second language writing. A distinction is made between English language learners in English-medium educational systems and those studying English in their own countries for a variety of purposes, and between learning-to-write and writing-to-learn in a second…
Descriptors: Scoring, Second Language Learning, Second Languages, English Language Learners
Fritz, Erik; Ruegg, Rachael – Assessing Writing, 2013
Although raters can be trained to evaluate the lexical qualities of student essays, the question remains to what extent raters follow the "lexis" scale descriptors in the rating scale when evaluating, or instead rate according to their own criteria. The current study examines the extent to which 27 trained university EFL raters take various lexical…
Descriptors: Accuracy, Rating Scales, English (Second Language), Essays
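As a concrete stand-in for the "lexis" descriptors at issue, the sketch below computes two common lexical indices: type-token ratio (diversity) and a long-word ratio (a crude sophistication proxy). Both indices and the 7-letter cutoff are assumptions for illustration, not the rating scale Fritz and Ruegg's raters actually used.

```python
import re

def lexical_indices(essay: str, long_word_len: int = 7) -> dict:
    """Two toy lexis measures of the kind rating-scale descriptors
    gesture at. The long-word cutoff is an arbitrary, assumed proxy
    for lexical sophistication."""
    tokens = [w.lower() for w in re.findall(r"[A-Za-z']+", essay)]
    if not tokens:
        return {"type_token_ratio": 0.0, "long_word_ratio": 0.0}
    return {
        "type_token_ratio": len(set(tokens)) / len(tokens),   # diversity
        "long_word_ratio": sum(
            len(w) >= long_word_len for w in tokens
        ) / len(tokens),
    }

print(lexical_indices("The considerable advantages considerably outweigh the costs."))
```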
Bridgeman, Brent; Trapani, Catherine; Bivens-Tatum, Jennifer – Assessing Writing, 2011
Writing task variants can increase test security in high-stakes essay assessments by substantially increasing the pool of available writing stimuli and by making the specific writing task less predictable. A given prompt (parent) may be used as the basis for one or more different variants. Six variant types based on argument essay prompts from a…
Descriptors: Writing Evaluation, Writing Tests, Tests, Writing Instruction
Esfandiari, Rajab; Myford, Carol M. – Assessing Writing, 2013
We compared three assessor types (self-assessors, peer-assessors, and teacher-assessors) to determine whether they differed in the levels of severity they exercised when rating essays. We analyzed the ratings of 194 assessors who evaluated 188 essays written by students enrolled in two state-run universities in Iran. The assessors employed a…
Descriptors: Foreign Countries, Severity (of Disability), Essays, Gender Differences
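The severity comparison in Esfandiari and Myford's study rests on many-facet Rasch measurement; as a much simpler descriptive stand-in (not the Rasch model they fit), the sketch below indexes each rater's severity as the mean deviation of their scores from the per-essay average. The rater and essay identifiers are hypothetical.

```python
from collections import defaultdict
from statistics import mean

def severity_indices(ratings):
    """ratings: iterable of (rater_id, essay_id, score) tuples.
    Returns each rater's mean deviation from the per-essay average
    score; more negative means more severe. A crude descriptive
    index, not many-facet Rasch measurement."""
    by_essay = defaultdict(list)
    for _, essay, score in ratings:
        by_essay[essay].append(score)
    essay_mean = {e: mean(scores) for e, scores in by_essay.items()}

    deviations = defaultdict(list)
    for rater, essay, score in ratings:
        deviations[rater].append(score - essay_mean[essay])
    return {rater: mean(devs) for rater, devs in deviations.items()}

# Hypothetical data: teacher T1 consistently scores below peer P1 and self S1.
data = [("T1", "e1", 2), ("P1", "e1", 4), ("S1", "e1", 4),
        ("T1", "e2", 3), ("P1", "e2", 5), ("S1", "e2", 4)]
print(severity_indices(data))  # T1 receives the most negative (most severe) index
```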
Johnson, David; VanBrackle, Lewis – Assessing Writing, 2012
Raters of Georgia's (USA) state-mandated college-level writing exam, which is intended to ensure a minimal university-level writing competency, are trained to grade holistically when assessing these exams. A guiding principle in holistic grading is not to focus exclusively on any one aspect of writing but rather to give equal weight to style,…
Descriptors: Writing Evaluation, Linguistics, Writing Tests, English (Second Language)
Evans, Donna – Assessing Writing, 2009
This is the story of a research journey that follows the trail of a novel evaluand: "place." I examine place as mentioned by rising juniors in timed exams. Using a hybridized methodology, combining the qualitative approach of a hermeneutic dialectic process as described by Guba and Lincoln (1989) with the quantitative evidence of place mention, I query…
Descriptors: Student Motivation, Student Experience, Writing Evaluation, Writing Tests
Benevento, Cathleen; Storch, Neomy – Assessing Writing, 2011
Much of second language (L2) class time, particularly in school and university classes, is devoted to the teaching of writing, and written assignments form an important component of assessed work. We assume that learners' L2 writing develops over time, in response to instruction, feedback, and practice. However, to date there has been very little…
Descriptors: Feedback (Response), Assignments, Writing (Composition), Intervals
Wiseman, Cynthia S. – Assessing Writing, 2012
The decision-making behaviors of 8 raters when scoring 39 persuasive and 39 narrative essays written by second language learners were examined, first using Rasch analysis and then through think-aloud protocols. Results based on Rasch analysis and think-aloud protocols recorded by raters as they were scoring holistically and analytically suggested…
Descriptors: Self Concept, Protocol Analysis, Scoring, Item Response Theory
Diab, Nuwar Mawlawi – Assessing Writing, 2011
This paper reports on a quasi-experimental study comparing the effects of peer-editing to self-editing on improving students' revised drafts. The study involved two intact classes (experimental and control groups) of an English course. The experimental group practiced peer-editing while the control group engaged in self-editing. After receiving…
Descriptors: Feedback (Response), Experimental Groups, Control Groups, Learning Strategies
Worden, Dorothy L. – Assessing Writing, 2009
It is widely assumed that the constraints of timed essay exams will make it virtually impossible for students to engage in the major hallmarks of the writing process, especially revision, in testing situations. This paper presents the results of a study conducted at Washington State University in the Spring of 2008. The study examined the…
Descriptors: Timed Tests, Writing Evaluation, Writing Tests, Educational Assessment
Anthony, Jared Judd – Assessing Writing, 2009
In testing the hypotheses that reflective timed-essay prompts should elicit memories of meaningful experiences in students' undergraduate education, and that computer-mediated classroom experiences should be salient among those memories, a combination of quantitative and qualitative research methods paints a richer, more complex picture than either…
Descriptors: Undergraduate Study, Qualitative Research, Research Methodology, Reflection