Showing all 10 results
Peer reviewed
PDF on ERIC
Ebru Öztürk; Erol Duran – Educational Policy Analysis and Strategic Research, 2024
This study aimed to develop a rubric for evaluating the creative story writing skill levels of seventh-grade secondary school students. The research used a quantitative method and survey model. A convenience sampling technique was employed, with 270 students studying at the seventh grade level of secondary school…
Descriptors: Scoring Rubrics, Writing Evaluation, Creative Writing, Middle School Students
Peer reviewed
Direct link
Matta, Michael; Mercer, Sterett H.; Keller-Margulis, Milena A. – School Psychology, 2023
Recent advances in automated writing evaluation have enabled educators to use automated writing quality scores to improve assessment feasibility. However, there has been limited investigation of bias for automated writing quality scores with students from diverse racial or ethnic backgrounds. The use of biased scores could contribute to…
Descriptors: Bias, Automation, Writing Evaluation, Scoring
Michael Matta; Sterett H. Mercer; Milena A. Keller-Margulis – Grantee Submission, 2023
Recent advances in automated writing evaluation have enabled educators to use automated writing quality scores to improve assessment feasibility. However, there has been limited investigation of bias for automated writing quality scores with students from diverse racial or ethnic backgrounds. The use of biased scores could contribute to…
Descriptors: Bias, Automation, Writing Evaluation, Scoring
Carla Wood; Miguel Garcia-Salas; Christopher Schatschneider – Grantee Submission, 2023
Purpose: The aim of this study was to advance the analysis of written language transcripts by validating an automated scoring procedure using an automated open-access tool for calculating morphological complexity (MC) from written transcripts. Method: The MC of words in 146 written responses of students in fifth grade was assessed using two…
Descriptors: Automation, Computer Assisted Testing, Scoring, Computation
Michael Matta; Sterett H. Mercer; Milena A. Keller-Margulis – Grantee Submission, 2022
Written expression curriculum-based measurement (WE-CBM) is a formative assessment approach for screening and progress monitoring. To extend evaluation of WE-CBM, we compared hand-calculated and automated scoring approaches in relation to the number of screening samples needed per student for valid scores, the long-term predictive validity and…
Descriptors: Writing Evaluation, Writing Tests, Predictive Validity, Formative Evaluation
Peer reviewed
Direct link
Michael Matta; Sterett H. Mercer; Milena A. Keller-Margulis – Assessment in Education: Principles, Policy & Practice, 2022
Written expression curriculum-based measurement (WE-CBM) is a formative assessment approach for screening and progress monitoring. To extend evaluation of WE-CBM, we compared hand-calculated and automated scoring approaches in relation to the number of screening samples needed per student for valid scores, the long-term predictive validity and…
Descriptors: Writing Evaluation, Writing Tests, Predictive Validity, Formative Evaluation
Gary A. Troia; Frank R. Lawrence; Julie S. Brehmer; Kaitlin Glause; Heather L. Reichmuth – Grantee Submission, 2023
Much of the research that has examined the writing knowledge of school-age students has relied on interviews to ascertain this information, which is problematic because interviews may underestimate breadth and depth of writing knowledge, require lengthy interactions with participants, and do not permit a direct evaluation of a prescribed array of…
Descriptors: Writing Tests, Writing Evaluation, Knowledge Level, Elementary School Students
Peer reviewed
Direct link
Rahimi, Zahra; Litman, Diane; Correnti, Richard; Wang, Elaine; Matsumura, Lindsay Clare – International Journal of Artificial Intelligence in Education, 2017
This paper presents an investigation of score prediction based on natural language processing for two targeted constructs within analytic text-based writing: 1) students' effective use of evidence and, 2) their organization of ideas and evidence in support of their claim. With the long-term goal of producing feedback for students and teachers, we…
Descriptors: Scoring, Automation, Scoring Rubrics, Natural Language Processing
Puranik, Cynthia S.; Patchan, Melissa M.; Sears, Mary M.; McMaster, Kristen L. – Assessment for Effective Intervention, 2017
Curriculum-based measures (CBMs) are necessary for educators to quickly assess student skill levels and monitor progress. This study examined the use of the alphabet writing fluency task, a CBM of writing, to assess handwriting fluency--that is, how well children access, retrieve, and write letter forms automatically. In the current study, the…
Descriptors: Kindergarten, Alphabets, Writing Skills, Time on Task
Puranik, Cynthia S.; Patchan, Melissa M.; Sears, Mary M.; McMaster, Kristen L. – Grantee Submission, 2017
Curriculum-based measures (CBMs) are necessary for educators to quickly assess student skill levels and monitor progress. This study examined the use of the alphabet writing fluency task, a CBM of writing, to assess handwriting fluency--that is, how well children access, retrieve, and write letter forms automatically. In the current study, the…
Descriptors: Kindergarten, Alphabets, Writing Skills, Time on Task