Showing all 5 results
Peer reviewed
Colombini, Crystal Broch; McBride, Maureen – Assessing Writing, 2012
Composition assessment scholars have exhibited uneasiness with the language of norming, grounded in a distaste for the psychometric assumption that achieving consensus in a communal assessment setting is desirable even at the cost of individual pedagogical values. Responding to the problems of a "reliability" defined by homogeneous agreement,…
Descriptors: Writing Evaluation, Conflict, Test Norms, Reliability
Peer reviewed
Ramineni, Chaitanya – Assessing Writing, 2013
In this paper, I describe the design and evaluation of automated essay scoring (AES) models for an institution's writing placement program. Information was gathered on admitted student writing performance at a science and technology research university in the northeastern United States. Under timed conditions, first-year students (N = 879) were…
Descriptors: Validity, Comparative Analysis, Internet, Student Placement
Peer reviewed
Knoch, Ute – Assessing Writing, 2011
Rating scales act as the de facto test construct in a writing assessment, although inevitably as a simplification of that construct (North, 2003). However, how rating scales are constructed is often not reported. Unless the underlying framework of a rating scale takes some account of linguistic theory and research in the definition of…
Descriptors: Writing Evaluation, Writing Tests, Rating Scales, Linguistic Theory
Peer reviewed
James, Cindy L. – Assessing Writing, 2006
How do scores from writing samples generated by computerized essay scorers compare to those generated by "untrained" human scorers, and what combination of scores, if any, is more accurate at placing students in composition courses? This study endeavored to answer this two-part question by evaluating the correspondence between writing sample…
Descriptors: Writing (Composition), Predictive Validity, Scoring, Validity
Peer reviewed
Gearhart, Maryl; Herman, Joan L.; Novak, John R.; Wolf, Shelby A. – Assessing Writing, 1995
Discusses the possible disjunct between what is good for large-scale assessment and what is good for teaching and learning. Represents one attempt to "marry" large-scale and classroom perspectives. Presents background and rationale for a new narrative rubric that was designed to support classroom instruction. Presents evidence for the…
Descriptors: Higher Education, Instructional Effectiveness, Models, Scoring