Showing 1 to 15 of 18 results
Peer reviewed
PDF on ERIC Download full text
Ali Al-Barakat; Rommel AlAli; Omayya Al-Hassan; Khaled Al-Saud – Educational Process: International Journal, 2025
Background/purpose: The study investigates how predictive thinking can be incorporated into writing activities to help students develop their creative skills in writing learning environments. Through this study, teachers will be able to adopt a new teaching method that helps transform the way creative writing is taught in language…
Descriptors: Thinking Skills, Creative Writing, Writing Instruction, Validity
Peer reviewed
Direct link
Sari, Elif; Han, Turgay – Reading Matrix: An International Online Journal, 2021
Providing effective feedback and reliable assessment practices are two central issues in ESL/EFL writing instruction contexts. Giving individual feedback is very difficult in crowded classes, as it requires a great amount of time and effort from instructors. Moreover, instructors are likely to employ inconsistent assessment procedures,…
Descriptors: Automation, Writing Evaluation, Artificial Intelligence, Natural Language Processing
Peer reviewed
Direct link
McGrane, Joshua Aaron; Humphry, Stephen Mark; Heldsinger, Sandra – Applied Measurement in Education, 2018
National standardized assessment programs have increasingly included extended written performances, amplifying the need for reliable, valid, and efficient methods of assessment. This article examines a two-stage method using comparative judgments and calibrated exemplars as a complement and alternative to existing methods of assessing writing.…
Descriptors: Standardized Tests, Foreign Countries, Writing Tests, Writing Evaluation
Peer reviewed
Direct link
Steedle, Jeffrey T.; Ferrara, Steve – Applied Measurement in Education, 2016
As an alternative to rubric scoring, comparative judgment generates essay scores by aggregating decisions about the relative quality of the essays. Comparative judgment eliminates certain scorer biases and potentially reduces training requirements, thereby allowing a large number of judges, including teachers, to participate in essay evaluation.…
Descriptors: Essays, Scoring, Comparative Analysis, Evaluators
Peer reviewed
PDF on ERIC Download full text
Kural, Faruk – Journal of Language and Linguistic Studies, 2018
The present paper, a study based on the midterm exam results of 53 university English prep-school students, examines the correlation between a direct writing test, measured holistically by multiple-trait scoring, and two indirect writing tests used in a competence exam, one of which is a multiple-choice cloze test and the other a rewrite test…
Descriptors: Writing Evaluation, Cloze Procedure, Comparative Analysis, Essays
Peer reviewed
PDF on ERIC Download full text
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M. – ETS Research Report Series, 2015
Automated scoring models were trained and evaluated for the essay task in the "Praxis I"® writing test. Prompt-specific and generic "e-rater"® scoring models were built, and evaluation statistics, such as quadratic weighted kappa, Pearson correlation, and standardized differences in mean scores, were examined to evaluate the…
Descriptors: Writing Tests, Licensing Examinations (Professions), Teacher Competency Testing, Scoring
Peer reviewed
Direct link
Humphry, Stephen M.; McGrane, Joshua A. – Australian Educational Researcher, 2015
This paper presents a method for equating writing assessments using pairwise comparisons which does not depend upon conventional common-person or common-item equating designs. Pairwise comparisons have been successfully applied in the assessment of open-ended tasks in English and other areas such as visual art and philosophy. In this paper,…
Descriptors: Writing Evaluation, Evaluation Methods, Comparative Analysis, Writing Tests
Peer reviewed
Direct link
Wyse, Adam E.; Bunch, Michael B.; Deville, Craig; Viger, Steven G. – Educational and Psychological Measurement, 2014
This article describes a novel variation of the Body of Work method that uses construct maps to overcome problems of transparency, rater inconsistency, and score gaps commonly occurring with the Body of Work method. The Body of Work method with construct maps was implemented to set cut-scores for two separate K-12 assessment programs in a large…
Descriptors: Standard Setting (Scoring), Educational Assessment, Elementary Secondary Education, Measurement
Peer reviewed
PDF on ERIC Download full text
Bejar, Isaac I.; VanWinkle, Waverely; Madnani, Nitin; Lewis, William; Steier, Michael – ETS Research Report Series, 2013
The paper applies a natural language computational tool to study a potential construct-irrelevant response strategy, namely the use of "shell language." Although the study is motivated by the impending increase in the volume of scoring of student responses from assessments to be developed in response to the Race to the Top initiative,…
Descriptors: Responses, Language Usage, Natural Language Processing, Computational Linguistics
Peer reviewed
Direct link
Huot, Brian; O'Neill, Peggy; Moore, Cindy – College English, 2010
Writing program administrators and other composition specialists need to know the history of writing assessment in order to create a rich and responsible culture of it today. In its first fifty years, the field of writing assessment followed educational measurement in general by focusing on issues of reliability, whereas in its next fifty years,…
Descriptors: Writing (Composition), Writing Evaluation, Writing Tests, Validity
Peer reviewed
Direct link
Espin, Christine; Wallace, Teri; Campbell, Heather; Lembke, Erica S.; Long, Jeffrey D.; Ticha, Renata – Exceptional Children, 2008
We examined the technical adequacy of writing progress measures as indicators of success on state standards tests. Tenth-grade students wrote for 10 min, marking their samples at 3, 5, and 7 min. Samples were scored for words written, words spelled correctly, and correct and correct minus incorrect word sequences. The number of correct minus…
Descriptors: Curriculum Based Assessment, State Standards, Second Language Learning, Writing (Composition)
Peer reviewed
Direct link
Shermis, Mark D.; Long, Susanne K. – Journal of Psychoeducational Assessment, 2009
This study investigated the convergent and discriminant validity of the high-stakes Florida Comprehensive Assessment Test (FCAT) in both reading and writing at grade levels 4, 8, and 10. The data from the 2006 FCAT administration were analyzed via traditional multitrait-multimethod (MTMM) analysis to identify the factor structure and structural…
Descriptors: Structural Equation Models, Multitrait Multimethod Techniques, Writing Tests, Validity
Peer reviewed
Direct link
Massa, Jacqueline; Gomes, Hilary; Tartter, Vivien; Wolfson, Virginia; Halperin, Jeffrey M. – International Journal of Language & Communication Disorders, 2008
Background: Research has shown that early identification of children with language issues is critical for effective intervention, and yet many children are not identified until school age. The use of parent-completed rating scales, especially in urban, minority populations, might improve early identification if parent ratings are found to be…
Descriptors: Speech Communication, Language Impairments, Reading Tests, Validity
Lee, Yong-Won; Kantor, Robert; Mollaun, Pam – 2002
This paper reports the results of generalizability theory (G theory) analyses conducted for new writing and speaking tasks for the Test of English as a Foreign Language (TOEFL). For writing, a special focus was placed on evaluating the impact on reliability of the number of raters (or ratings) per essay (one or two) and the number of tasks (one, two, or…
Descriptors: English (Second Language), Generalizability Theory, Reliability, Scores
Taylor, Catherine S. – 1999
This document contains the technical information for the 1998 Washington Assessment of Student Learning (WASL), Grade 4 Assessment for Reading, Mathematics, Listening, and Writing. It documents the technical quality of the assessment, including the evidence for the reliability and validity of test scores. The manual's chapters are: (1)…
Descriptors: Grade 4, Intermediate Grades, Listening Skills, Mathematics Tests