Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 0
Since 2016 (last 10 years) | 0
Since 2006 (last 20 years) | 13
Descriptor
Validity | 15
Writing Evaluation | 11
Scoring | 7
Computer Assisted Testing | 5
Essays | 5
Models | 5
Essay Tests | 4
Psychometrics | 4
Writing (Composition) | 4
Writing Instruction | 4
Comparative Analysis | 3
Source
Assessing Writing | 15
Author
Knoch, Ute | 2
Aull, Laura | 1
Callahan, Susan | 1
Colombini, Crystal Broch | 1
Condon, William | 1
Deane, Paul | 1
Erling, Elizabeth J. | 1
Gearhart, Maryl | 1
Gere, Anne Ruggles | 1
Green, Timothy | 1
Harsch, Claudia | 1
Publication Type
Journal Articles | 15
Reports - Evaluative | 7
Reports - Research | 6
Reports - Descriptive | 2
Education Level
Higher Education | 6
Postsecondary Education | 6
Elementary Secondary Education | 2
Assessments and Surveys
Test of English as a Foreign… | 1
Harsch, Claudia; Martin, Guido – Assessing Writing, 2012
We explore how a local rating scale can be based on the Common European Framework (CEF) proficiency scales. As part of the scale validation (Alderson, 1991; Lumley, 2002), we examine which adaptations are needed to turn CEF proficiency descriptors into a rating scale for a local context, and to establish a practicable method to revise the initial…
Descriptors: Rating Scales, Validity, Media Adaptation, Feedback (Response)
Colombini, Crystal Broch; McBride, Maureen – Assessing Writing, 2012
Composition assessment scholars have exhibited uneasiness with the language of norming, grounded in distaste for the psychometric assumption that achieving consensus in a communal assessment setting is desirable even at the cost of individual pedagogical values. Responding to the problems of a "reliability" defined by homogeneous agreement,…
Descriptors: Writing Evaluation, Conflict, Test Norms, Reliability
Deane, Paul – Assessing Writing, 2013
This paper examines the construct measured by automated essay scoring (AES) systems. AES systems measure features of the text structure, linguistic structure, and conventional print form of essays; as such, the systems primarily measure text production skills. In the current state of the art, AES systems provide little direct evidence about such matters…
Descriptors: Scoring, Essays, Text Structure, Writing (Composition)
Ramineni, Chaitanya – Assessing Writing, 2013
In this paper, I describe the design and evaluation of automated essay scoring (AES) models for an institution's writing placement program. Information was gathered on admitted student writing performance at a science and technology research university in the northeastern United States. Under timed conditions, first-year students (N = 879) were…
Descriptors: Validity, Comparative Analysis, Internet, Student Placement
Knoch, Ute – Assessing Writing, 2011
Rating scales act as the de facto test construct in a writing assessment, although inevitably as a simplification of the construct (North, 2003). However, it is often not reported how rating scales are constructed. Unless the underlying framework of a rating scale takes some account of linguistic theory and research in the definition of…
Descriptors: Writing Evaluation, Writing Tests, Rating Scales, Linguistic Theory
Condon, William – Assessing Writing, 2013
Automated Essay Scoring (AES) has garnered a great deal of attention from the rhetoric and composition/writing studies community since the Educational Testing Service began using e-rater® and the "Criterion"® Online Writing Evaluation Service as products in scoring writing tests, and most of the responses have been negative. While the…
Descriptors: Measurement, Psychometrics, Evaluation Methods, Educational Testing
Weigle, Sara Cushing – Assessing Writing, 2013
This article presents considerations for using automated scoring systems to evaluate second language writing. A distinction is made between English language learners in English-medium educational systems and those studying English in their own countries for a variety of purposes, and between learning-to-write and writing-to-learn in a second…
Descriptors: Scoring, Second Language Learning, Second Languages, English Language Learners
Gere, Anne Ruggles; Aull, Laura; Green, Timothy; Porter, Anne – Assessing Writing, 2010
Following Messick's definition of validity as a multi-faceted construct that includes contextual, substantive, structural, generalizable, external, and consequential dimensions, this study examined an established directed self-placement (DSP) system that had been functioning for ten years at a large university. The goal was to determine the extent…
Descriptors: Freshman Composition, Validity, Student Placement, Developmental Studies Programs
Rezaei, Ali Reza; Lovorn, Michael – Assessing Writing, 2010
This experimental project investigated the reliability and validity of rubrics in assessment of students' written responses to a social science "writing prompt". The participants were asked to grade one of the two samples of writing assuming it was written by a graduate student. In fact both samples were prepared by the authors. The…
Descriptors: Spelling, Sentence Structure, Punctuation, Social Sciences
Reinheimer, David A. – Assessing Writing, 2007
This study outlines a response to composition placement review based on the understanding that demonstrating validity is an argumentative act. The response defines the validity argument through principles of sound assessment: using multiple, local methods in order to improve program performance. The study thus eschews the traditional course-grade…
Descriptors: Placement, Validity, Program Evaluation
Erling, Elizabeth J.; Richardson, John T. E. – Assessing Writing, 2010
Measuring the Academic Skills of University Students is a procedure developed in the 1990s at the University of Sydney's Language Centre to identify students in need of academic writing development by assessing examples of their written work against five criteria. This paper reviews the literature relating to the development of the procedure with…
Descriptors: Foreign Countries, Writing Evaluation, Assignments, Psychometrics
James, Cindy L. – Assessing Writing, 2006
How do scores from writing samples generated by computerized essay scorers compare to those generated by "untrained" human scorers, and what combination of scores, if any, is more accurate at placing students in composition courses? This study endeavored to answer this two-part question by evaluating the correspondence between writing sample…
Descriptors: Writing (Composition), Predictive Validity, Scoring, Validity
Knoch, Ute – Assessing Writing, 2007
The category of coherence in rating scales has often been criticized for being vague. Typical descriptors might describe students' writing as having "a clear progression of ideas" or "lacking logical sequencing." These descriptors inevitably require subjective interpretation on the part of the raters. A number of researchers (Connor & Farmer,…
Descriptors: Scripts, Rhetoric, Rating Scales, Writing (Composition)

Gearhart, Maryl; Herman, Joan L.; Novak, John R.; Wolf, Shelby A. – Assessing Writing, 1995
Discusses the possible disjunct between what is good for large-scale assessment and what is good for teaching and learning. Represents one attempt to "marry" large-scale and classroom perspectives. Presents background and rationale for a new narrative rubric that was designed to support classroom instruction. Presents evidence for the…
Descriptors: Higher Education, Instructional Effectiveness, Models, Scoring

Callahan, Susan – Assessing Writing, 1999
Examines the response of one high school to three of the explicit aims of the Kentucky writing portfolio assessment. Suggests limitations to the presumed validity of the assessment by revealing some of the intended and unintended consequences of the state's attempt to use the assessment to shape school writing programs, to encourage classroom…
Descriptors: High Schools, Portfolio Assessment, Portfolios (Background Materials), Program Development