Showing all 6 results
Peer reviewed
Huang, Jinyan – Assessing Writing, 2012
Using generalizability (G-) theory, this study examined the accuracy and validity of the writing scores assigned to secondary school ESL students in the provincial English examinations in Canada. The major research question that guided this study was: Are there any differences between the accuracy and construct validity of the analytic scores…
Descriptors: Foreign Countries, Generalizability Theory, Writing Evaluation, Writing Tests
Peer reviewed
Ramineni, Chaitanya – Assessing Writing, 2013
In this paper, I describe the design and evaluation of automated essay scoring (AES) models for an institution's writing placement program. Information was gathered on admitted student writing performance at a science and technology research university in the northeastern United States. Under timed conditions, first-year students (N = 879) were…
Descriptors: Validity, Comparative Analysis, Internet, Student Placement
Peer reviewed
Condon, William – Assessing Writing, 2013
Automated Essay Scoring (AES) has garnered a great deal of attention from the rhetoric and composition/writing studies community since the Educational Testing Service began using e-rater® and the "Criterion"® Online Writing Evaluation Service as products in scoring writing tests, and most of the responses have been negative. While the…
Descriptors: Measurement, Psychometrics, Evaluation Methods, Educational Testing
Peer reviewed
Knoch, Ute – Assessing Writing, 2007
The category of coherence in rating scales has often been criticized for being vague. Typical descriptors might describe students' writing as having "a clear progression of ideas" or "lacking logical sequencing." These descriptors inevitably require subjective interpretation on the part of the raters. A number of researchers (Connor & Farmer,…
Descriptors: Scripts, Rhetoric, Rating Scales, Writing (Composition)
Peer reviewed
Lee, Young-Ju – Assessing Writing, 2002
Explores plausible differences in composing processes when English as a Second Language (ESL) students write timed essays on paper and on the computer. Examines the way in which the quality of the written products differs across paper and computer modes. Reveals that individual participants are engaged in different ways and to differing degrees by…
Descriptors: Comparative Analysis, Computer Assisted Testing, Construct Validity, English (Second Language)
Peer reviewed
Moss, Pamela A. – Assessing Writing, 1994
Discusses the tension between testing and literacy education, the deleterious influences of testing on the curriculum, and the lack of a firm relationship between improved test scores and improved educational experiences for students. Presents two alternative systems for evaluation of writing skills at the local and state level. Demonstrates the…
Descriptors: Comparative Analysis, Elementary Secondary Education, Literature Reviews, Outcomes of Education