Showing 1 to 15 of 25 results
Peer reviewed
Direct link
Katherine L. Buchanan; Milena Keller-Margulis; Amanda Hut; Weihua Fan; Sarah S. Mire; G. Thomas Schanding Jr. – Early Childhood Education Journal, 2025
There is considerable research on measures of early reading but much less on measures of early writing. Nevertheless, writing is a critical skill for success in school, and early difficulties in writing are likely to persist without intervention. A necessary step toward identifying those students who need additional support is the use of screening…
Descriptors: Writing Evaluation, Evaluation Methods, Emergent Literacy, Beginning Writing
Peer reviewed
Direct link
Ping-Lin Chuang – Language Testing, 2025
This experimental study explores how source use features impact raters' judgment of argumentation in a second language (L2) integrated writing test. One hundred four experienced and novice raters were recruited to complete a rating task that simulated the scoring assignment of a local English Placement Test (EPT). Sixty written responses were…
Descriptors: Interrater Reliability, Evaluators, Information Sources, Primary Sources
Peer reviewed
Direct link
Yue Huang; Joshua Wilson – Journal of Computer Assisted Learning, 2025
Background: Automated writing evaluation (AWE) systems, used as formative assessment tools in writing classrooms, are promising for enhancing instruction and improving student performance. Although meta-analytic evidence supports AWE's effectiveness in various contexts, research on its effectiveness in the U.S. K-12 setting has lagged behind its…
Descriptors: Writing Evaluation, Writing Skills, Writing Tests, Writing Instruction
Peer reviewed
PDF on ERIC | Download full text
Abdullah Alshakhi – Educational Process: International Journal, 2025
Background/purpose: Writing is an essential skill for EFL learners, and flawless writing proficiency in English is the intended learner outcome of every writing course in an EFL program. Such proficiency involves perfection in orthography (spelling, punctuation, capitalization), grammaticality and syntax,…
Descriptors: Foreign Countries, Language Teachers, English (Second Language), Second Language Learning
Peer reviewed
Direct link
Gioia, Anthony R.; Ahmed, Yusra; Woods, Steven P.; Cirino, Paul T. – Reading and Writing: An Interdisciplinary Journal, 2023
There is significant overlap between reading and writing, but no known standardized measure assesses these jointly. The goal of the present study is to evaluate the properties of a novel measure, the Assessment of Writing, Self-Monitoring, and Reading (AWSM Reader), that simultaneously evaluates both reading comprehension and writing. In doing so,…
Descriptors: Reading Writing Relationship, Writing Evaluation, Self Evaluation (Individuals), Executive Function
Peer reviewed
PDF on ERIC | Download full text
Sumner, Josh – Research-publishing.net, 2021
Comparative Judgement (CJ) has emerged as a technique that typically makes use of holistic judgement to assess difficult-to-specify constructs such as production (speaking and writing) in Modern Foreign Languages (MFL). In traditional approaches, markers assess candidates' work one-by-one in an absolute manner, assigning scores to different…
Descriptors: Holistic Approach, Student Evaluation, Comparative Analysis, Decision Making
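For readers unfamiliar with how comparative judgement turns pairwise decisions into scores: the judgements are typically fitted with a Bradley-Terry-type model, in which the probability that script i is preferred to script j is p_i / (p_i + p_j). The sketch below is a minimal illustration of that general approach in Python; the function name and the toy data are invented for illustration and are not drawn from Sumner (2021).

# Minimal sketch of comparative judgement scoring with the Bradley-Terry
# model, the usual statistical basis for CJ. Illustrative only; names and
# toy data are assumptions, not taken from the study above.
from collections import defaultdict

def bradley_terry(judgements, n_iter=100):
    """Estimate a quality score per script from pairwise wins.

    judgements: list of (winner, loser) pairs from judges.
    Returns a dict mapping script id -> strength (higher = judged better),
    fitted with the standard MM algorithm.
    """
    wins = defaultdict(int)      # total wins per script
    pairs = defaultdict(int)     # number of comparisons per unordered pair
    scripts = set()
    for winner, loser in judgements:
        wins[winner] += 1
        pairs[frozenset((winner, loser))] += 1
        scripts.update((winner, loser))

    p = {s: 1.0 for s in scripts}          # start with equal strengths
    for _ in range(n_iter):
        new_p = {}
        for i in scripts:
            denom = sum(
                pairs[frozenset((i, j))] / (p[i] + p[j])
                for j in scripts
                if j != i and frozenset((i, j)) in pairs
            )
            # Scripts with no wins collapse to 0 in this simple version.
            new_p[i] = wins[i] / denom if denom > 0 else p[i]
        total = sum(new_p.values())
        p = {s: v * len(scripts) / total for s, v in new_p.items()}
    return p

# Toy usage: three scripts; judges preferred A over B twice, etc.
scores = bradley_terry([("A", "B"), ("A", "B"), ("B", "C"), ("A", "C")])
print(sorted(scores.items(), key=lambda kv: -kv[1]))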
Peer reviewed
Direct link
McGrane, Joshua Aaron; Humphry, Stephen Mark; Heldsinger, Sandra – Applied Measurement in Education, 2018
National standardized assessment programs have increasingly included extended written performances, amplifying the need for reliable, valid, and efficient methods of assessment. This article examines a two-stage method using comparative judgments and calibrated exemplars as a complement and alternative to existing methods of assessing writing.…
Descriptors: Standardized Tests, Foreign Countries, Writing Tests, Writing Evaluation
Peer reviewed
Direct link
Humphry, Stephen M.; McGrane, Joshua A. – Australian Educational Researcher, 2015
This paper presents a method for equating writing assessments using pairwise comparisons that does not depend upon conventional common-person or common-item equating designs. Pairwise comparisons have been applied successfully in the assessment of open-ended tasks in English and in other areas such as visual art and philosophy. In this paper,…
Descriptors: Writing Evaluation, Evaluation Methods, Comparative Analysis, Writing Tests
Peer reviewed
Direct link
Hampton, David D.; Lembke, Erica S. – Reading & Writing Quarterly, 2016
The purpose of this study was to examine 4 early writing measures used to monitor the early writing progress of 1st-grade students. We administered the measures to 23 1st-grade students biweekly for a total of 16 weeks. We obtained 3-min samples and conducted analyses for each 1-min increment. We scored samples using 2 different methods: correct…
Descriptors: Progress Monitoring, Curriculum Based Assessment, Writing Tests, Outcome Measures
Peer reviewed
Direct link
Barkaoui, Khaled – Assessment in Education: Principles, Policy & Practice, 2011
This study examined the effects of marking method and rater experience on ESL (English as a Second Language) essay test scores and rater performance. Each of 31 novice and 29 experienced raters rated a sample of ESL essays both holistically and analytically. Essay scores were analysed using a multi-faceted Rasch model to compare test-takers'…
Descriptors: Writing Evaluation, Writing Tests, Essay Tests, Interrater Reliability
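For context on the analysis mentioned above: in the many-facet Rasch model commonly used in rater studies of this kind, the log-odds of a response being rated in category k rather than k-1 is modelled as additive contributions of examinee ability, criterion difficulty, and rater severity (the exact facets in Barkaoui's analysis may differ):

    \log \frac{P_{nijk}}{P_{nij(k-1)}} = \theta_n - \delta_i - \alpha_j - \tau_k

where \theta_n is the ability of test-taker n, \delta_i the difficulty of criterion i, \alpha_j the severity of rater j, and \tau_k the threshold for category k. Rater severity and consistency can then be compared on the same scale as test-taker ability.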
Peer reviewed
PDF on ERIC | Download full text
Wang, Ping – English Language Teaching, 2009
This paper studies rater reliability in scoring compositions in tests of English as a foreign language (EFL), focusing on inter-rater reliability as well as several interactions between raters and the other facets involved (that is, examinees, rating criteria, and rating methods). Results showed that raters were fairly…
Descriptors: Interrater Reliability, Scoring, Writing (Composition), English (Second Language)
Crehan, Kevin D. – 1997
Writing fits well within the realm of outcomes suitable for observation by performance assessments. Studies of the reliability of performance assessments have suggested that interrater reliability can be consistently high. Scoring consistency, however, is only one aspect of quality in decisions based on assessment results. Another is…
Descriptors: Evaluation Methods, Feedback, Generalizability Theory, Interrater Reliability
Peer reviewed
Direct link
Coniam, David – ReCALL, 2009
This paper describes a study of the computer essay-scoring program BETSY. While the use of computers in rating written scripts has been criticised in some quarters for lacking transparency or for fitting poorly with how human raters work, a number of essay-rating programs are available commercially, many of which claim to offer comparable…
Descriptors: Writing Tests, Scoring, Foreign Countries, Interrater Reliability
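BETSY (the Bayesian Essay Test Scoring sYstem) scores essays by Bayesian text classification. The fragment below is a minimal sketch of that general idea, a naive Bayes classifier over word counts trained on essays labelled with human-assigned score bands; it is not BETSY's implementation, and the tiny training set is an invented assumption.

# Minimal sketch of Bayesian essay scoring in the spirit of BETSY.
# Illustrative only: the training essays and bands below are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_essays = [
    "The argument is developed clearly with relevant examples.",
    "Ideas are presented but support is thin and organisation loose.",
    "Little control of grammar; ideas are hard to follow.",
]
train_bands = ["high", "mid", "low"]  # human-assigned score bands

# Bag-of-words features feeding a multinomial naive Bayes classifier.
scorer = make_pipeline(CountVectorizer(), MultinomialNB())
scorer.fit(train_essays, train_bands)

# Predict the most probable score band for a new script.
print(scorer.predict(["The essay states a clear thesis with examples."]))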
Peer reviewed
Direct link
Lee, Yong-Won; Kantor, Robert – International Journal of Testing, 2007
Possible integrated and independent tasks were pilot tested for the writing section of a new generation of the TOEFL® (Test of English as a Foreign Language™). This study examines the impact of various rating designs and of the number of tasks and raters on the reliability of writing scores based on integrated and independent tasks from the…
Descriptors: Generalizability Theory, Writing Tests, English (Second Language), Second Language Learning
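As background to the reliability question above (not a detail stated in the abstract): in a fully crossed persons x tasks x raters generalizability study, a common form of the score reliability (generalizability) coefficient is

    E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{pt}/n_t + \sigma^2_{pr}/n_r + \sigma^2_{ptr,e}/(n_t n_r)}

so adding tasks (n_t) or raters (n_r) shrinks the error terms and raises reliability, which is the trade-off such rating-design studies quantify.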
Wolfe, Edward W.; Kao, Chi-Wen – 1996
This paper reports the results of an analysis of the relationship between scorer behaviors and score variability. Thirty-six essay scorers were interviewed and asked to perform a think-aloud task as they scored 24 essays. Each comment made by a scorer was coded according to its content focus (i.e. appearance, assignment, mechanics, communication,…
Descriptors: Content Analysis, Educational Assessment, Essays, Evaluation Methods