Showing 1 to 15 of 49 results
Peer reviewed
Yue Huang; Joshua Wilson – Journal of Computer Assisted Learning, 2025
Background: Automated writing evaluation (AWE) systems, used as formative assessment tools in writing classrooms, are promising for enhancing instruction and improving student performance. Although meta-analytic evidence supports AWE's effectiveness in various contexts, research on its effectiveness in the U.S. K-12 setting has lagged behind its…
Descriptors: Writing Evaluation, Writing Skills, Writing Tests, Writing Instruction
Yi Gui – ProQuest LLC, 2024
This study explores using transfer learning in machine learning for natural language processing (NLP) to create generic automated essay scoring (AES) models, providing instant online scoring for statewide writing assessments in K-12 education. The goal is to develop an instant online scorer that is generalizable to any prompt, addressing the…
Descriptors: Writing Tests, Natural Language Processing, Writing Evaluation, Scoring
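To make the transfer-learning approach described in this abstract concrete, the sketch below fine-tunes a pretrained transformer with a single regression output over essay text. It is a minimal illustration only; the checkpoint name, data, and hyperparameters are assumptions, not details from the study.

# Minimal sketch of transfer learning for automated essay scoring (AES):
# fine-tune a pretrained transformer with one regression output.
# Model name, data, and hyperparameters are illustrative assumptions,
# not those used in the study above.
import torch
from transformers import (AutoTokenizer,
                          AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=1,                 # single output -> regression (MSE loss)
    problem_type="regression",
)

essays = ["The essay text ...", "Another essay ..."]  # placeholder data
scores = [3.0, 4.5]                                   # human holistic scores
enc = tokenizer(essays, truncation=True, padding=True, return_tensors="pt")

class EssayDataset(torch.utils.data.Dataset):
    def __init__(self, enc, scores):
        self.enc, self.scores = enc, scores
    def __len__(self):
        return len(self.scores)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.scores[i], dtype=torch.float)
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="aes_model", num_train_epochs=1),
    train_dataset=EssayDataset(enc, scores),
)
trainer.train()  # predicted scores for new essays come from model(...).logits

Because only the final layer is trained from scratch, a model like this can in principle be applied to prompts it was not trained on, which is the "generic" scoring the abstract aims for.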
Peer reviewed
T.H.E. Journal, 2013
The West Virginia Department of Education's auto-grading initiative dates back to 2004--a time when school districts were making their first forays into automation. The Charleston-based WVDE had instituted a statewide writing assessment in 1984 for students in fourth, seventh, and 10th grades and was looking to expand that program without having…
Descriptors: Automation, Grading, Scoring, Computer Uses in Education
Peer reviewed
Condon, William – Assessing Writing, 2013
Automated Essay Scoring (AES) has garnered a great deal of attention from the rhetoric and composition/writing studies community since the Educational Testing Service began using e-rater® and the Criterion® Online Writing Evaluation Service as products in scoring writing tests, and most of the responses have been negative. While the…
Descriptors: Measurement, Psychometrics, Evaluation Methods, Educational Testing
Peer reviewed
Olinghouse, Natalie G.; Zheng, Jinjie; Morlock, Larissa – Reading & Writing Quarterly, 2012
This study evaluated large-scale state writing assessments for the inclusion of motivational characteristics in the writing task and written prompt. We identified 6 motivational variables from the authentic activity literature: time allocation, audience specification, audience intimacy, definition of task, allowance for multiple perspectives, and…
Descriptors: Writing Evaluation, Writing Tests, Writing Achievement, Audiences
Peer reviewed
Huang, Jinyan – Assessing Writing, 2012
Using generalizability (G-) theory, this study examined the accuracy and validity of the writing scores assigned to secondary school ESL students in the provincial English examinations in Canada. The major research question that guided this study was: Are there any differences between the accuracy and construct validity of the analytic scores…
Descriptors: Foreign Countries, Generalizability Theory, Writing Evaluation, Writing Tests
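For context on the generalizability (G-) theory analysis described above: in a persons x raters design, the relative G coefficient is σ²_p / (σ²_p + σ²_pr,e / n_r), where σ²_p is true-score variance across persons and σ²_pr,e is the person-by-rater interaction confounded with error. A minimal numeric sketch follows, using invented scores rather than data from the study.

# Sketch of a generalizability (G-) coefficient for a persons x raters
# design. The score matrix is invented illustration data.
import numpy as np

scores = np.array([   # rows = examinees, columns = raters
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
], dtype=float)
n_p, n_r = scores.shape
grand = scores.mean()

# Mean squares from the two-way ANOVA without replication.
ms_p = n_r * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
ms_r = n_p * ((scores.mean(axis=0) - grand) ** 2).sum() / (n_r - 1)
ss_res = (((scores - grand) ** 2).sum()
          - ms_p * (n_p - 1) - ms_r * (n_r - 1))
ms_res = ss_res / ((n_p - 1) * (n_r - 1))

# Variance components and the relative G coefficient.
var_p = max((ms_p - ms_res) / n_r, 0.0)   # persons (true-score variance)
var_res = ms_res                          # person-by-rater + error
g_coeff = var_p / (var_p + var_res / n_r)
print(f"G coefficient with {n_r} raters: {g_coeff:.3f}")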
Peer reviewed
Gallagher, Chris W. – College Composition and Communication, 2011
I use Burkean analysis to show how neoliberalism undermines faculty assessment expertise and underwrites testing industry expertise in the current assessment scene. Contending that we cannot extricate ourselves from our limited agency in this scene until we abandon the familiar "stakeholder" theory of power, I propose a rewriting of the…
Descriptors: Writing Evaluation, Writing Tests, College Faculty, Political Attitudes
Peer reviewed
Liao, Chen-Huei; Kuo, Bor-Chen; Pai, Kai-Chih – Turkish Online Journal of Educational Technology - TOJET, 2012
Automated scoring by means of Latent Semantic Analysis (LSA) has been introduced lately to improve the traditional human scoring system. The purposes of the present study were to develop a LSA-based assessment system to evaluate children's Chinese sentence construction skills and to examine the effectiveness of LSA-based automated scoring function…
Descriptors: Foreign Countries, Program Effectiveness, Scoring, Personality
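As background on LSA-based scoring of the kind described above: a response is projected into a latent semantic space and scored by its similarity to benchmark answers. The sketch below uses scikit-learn; the corpus, dimensionality, and scoring rule are illustrative assumptions, not the study's actual system.

# Sketch of LSA-based automated scoring: project responses into a latent
# semantic space (TF-IDF + truncated SVD) and score by cosine similarity
# to reference answers judged correct by humans.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

references = [   # benchmark sentences (invented examples)
    "The cat sat quietly on the warm mat.",
    "A dog ran quickly across the green field.",
]
response = "The cat rested on the mat."

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(references + [response])

svd = TruncatedSVD(n_components=2)   # latent semantic dimensions
Z = svd.fit_transform(X)

# Score = highest cosine similarity to any benchmark in LSA space.
sims = cosine_similarity(Z[-1:], Z[:-1])
print(f"LSA similarity score: {sims.max():.3f}")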
Peer reviewed
McCurry, Doug – Assessing Writing, 2010
This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. These claims about the reliability of machine scoring of writing are usually based on specific and constrained writing tasks, and there is reason for asking whether machine scoring of writing requires…
Descriptors: Writing Tests, Scoring, Interrater Reliability, Computer Assisted Testing
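The agreement claim this abstract interrogates is typically operationalized as quadratic weighted kappa (QWK) between pairs of score vectors. A minimal sketch of the human-human versus machine-human comparison, with invented scores:

# Compare human-human agreement to machine-human agreement using
# quadratic weighted kappa. Score vectors are invented illustration data.
from sklearn.metrics import cohen_kappa_score

human_1 = [3, 4, 2, 5, 3, 4, 1, 2]   # first human reader
human_2 = [3, 4, 3, 5, 2, 4, 1, 2]   # second human reader
machine = [3, 4, 2, 4, 3, 4, 2, 2]   # machine-assigned scores

hh = cohen_kappa_score(human_1, human_2, weights="quadratic")
mh = cohen_kappa_score(human_1, machine, weights="quadratic")
print(f"human-human QWK:   {hh:.3f}")
print(f"machine-human QWK: {mh:.3f}")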
Deane, Paul – Educational Testing Service, 2011
This paper presents a socio-cognitive framework for connecting writing pedagogy and writing assessment with modern social and cognitive theories of writing. It focuses on providing a general framework that highlights the connections between writing competency and other literacy skills; identifies key connections between literacy instruction,…
Descriptors: Writing (Composition), Writing Evaluation, Writing Tests, Cognitive Ability
Peer reviewed
Parr, Judy M.; Timperley, Helen S. – Assessing Writing, 2010
Traditionally, feedback on writing is written on drafts or given orally in roving or more formal conferences and is considered a significant part of instruction. This paper locates written response within an assessment for learning framework in the writing classroom. Within this framework, quality of response was defined in terms of providing…
Descriptors: Feedback (Response), Pedagogical Content Knowledge, Writing Evaluation, Writing Instruction
Peer reviewed
Peterson, Shelley Stagg; McClay, Jill – Assessing Writing, 2010
This paper reports on the feedback and assessment practices of Canadian grades 4-8 teachers; the data are drawn from a national study of the teaching of writing at the middle grades in all ten Canadian provinces and two (of three) territories. Respondents were 216 grades 4-8 teachers from rural and urban schools. Data sources were audio-recorded…
Descriptors: Feedback (Response), Urban Schools, Writing Instruction, Elementary School Teachers
Peer reviewed
Xu, Yun; Wu, Zunmin – Assessing Writing, 2012
This paper reports on a qualitative research study into the test-taking strategies employed in completing two picture prompt writing tasks--Situational Writing and Interpretational Writing in the Beijing Matriculation English Test. Think-aloud and retrospective interview protocols were collected from twelve Chinese students representing two key…
Descriptors: Foreign Countries, High School Students, Secondary School Teachers, Test Wiseness
National Assessment Governing Board, 2010
The purpose of the 2011 NAEP (National Assessment of Educational Progress) Writing Framework is to describe how the new NAEP Writing Assessment is designed to measure students' writing at grades 4, 8, and 12. As the ongoing national indicator of the academic achievement of students in the United States, NAEP regularly collects information on…
Descriptors: Writing Achievement, Writing Skills, Writing Evaluation, National Competency Tests
Dunn, David E. – ProQuest LLC, 2011
Many national reports indicate that more attention needs to be placed on writing and the teaching of writing in schools. The purpose of this quantitative study was, first, to examine the structure of the DWA and, second, to use scores from the DWA to examine the relationship between ELL status and writing proficiency. Five major research…
Descriptors: Ethnicity, Writing Evaluation, Socioeconomic Status, Income