Showing 316 to 330 of 5,691 results
Peer reviewed
Keller-Margulis, Milena A.; Mercer, Sterett H.; Matta, Michael – Reading and Writing: An Interdisciplinary Journal, 2021
Existing approaches to measuring writing performance are insufficient in terms of both technical adequacy and feasibility for use as a screening measure. This study examined the validity and diagnostic accuracy of several approaches to automated text evaluation as well as written expression curriculum-based measurement (WE-CBM) to determine…
Descriptors: Writing Evaluation, Validity, Automation, Curriculum Based Assessment
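The diagnostic-accuracy framing in the Keller-Margulis, Mercer, and Matta abstract above boils down to asking how well a screening score separates at-risk writers from the rest. Below is a minimal Python sketch of that kind of analysis, using entirely hypothetical scores and an arbitrary cut point; the study's own measures and thresholds are not reproduced here.

```python
# A minimal sketch of a screening diagnostic-accuracy analysis.
# All data and the cut score are hypothetical, not the study's.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
at_risk = rng.binomial(1, 0.2, size=200)             # 1 = at risk on the criterion measure
screen = 50 - 10 * at_risk + rng.normal(0, 8, 200)   # lower screening score for at-risk students

cut = 45.0
flagged = screen < cut
sensitivity = (flagged & (at_risk == 1)).sum() / (at_risk == 1).sum()
specificity = (~flagged & (at_risk == 0)).sum() / (at_risk == 0).sum()
auc = roc_auc_score(at_risk, -screen)  # negate: lower score = higher risk

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} AUC={auc:.2f}")
```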
Peer reviewed
Romig, John Elwood; Olsen, Amanda A. – Reading & Writing Quarterly, 2021
Compared with other content areas, research examining curriculum-based measurement of writing (CBM-W) is scarce. This study conducted a conceptual replication examining the reliability, stability, and sensitivity to growth of slopes produced from CBM-W. Eighty-nine (N = 89) eighth-grade students responded to one CBM-W probe weekly for 11…
Descriptors: Curriculum Based Assessment, Writing Evaluation, Middle School Students, Grade 8
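The Romig and Olsen design above rests on fitting a growth slope to each student's 11 weekly CBM-W probe scores. A minimal sketch of that slope estimation follows, with hypothetical scores for a single student; ordinary least squares is one common choice, and the study's exact slope-estimation method may differ.

```python
# A minimal sketch of estimating a growth slope from weekly CBM-W probes.
# Scores are hypothetical; real probes are scored for metrics such as
# total words written or correct word sequences.
import numpy as np

weeks = np.arange(1, 12)  # 11 weekly probes, matching the study's schedule
scores = np.array([18, 20, 19, 23, 22, 25, 24, 27, 26, 29, 30])

slope, intercept = np.polyfit(weeks, scores, deg=1)  # OLS fit, degree 1
print(f"growth: {slope:.2f} score units per week (intercept {intercept:.1f})")
```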
Peer reviewed
Uto, Masaki; Okano, Masashi – IEEE Transactions on Learning Technologies, 2021
In automated essay scoring (AES), scores are automatically assigned to essays as an alternative to grading by humans. Traditional AES typically relies on handcrafted features, whereas recent studies have proposed AES models based on deep neural networks to obviate the need for feature engineering. Those AES models generally require training on a…
Descriptors: Essays, Scoring, Writing Evaluation, Item Response Theory
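The handcrafted-feature baseline that Uto and Okano contrast with neural AES models can be illustrated with a few surface features fed to a simple regressor. This is a toy sketch under that assumption; the essays, scores, and feature set below are invented for illustration and are not the paper's.

```python
# A minimal sketch of feature-based AES: surface features plus a regressor.
# Essays and scores are hypothetical toy data, not from the study.
import numpy as np
from sklearn.linear_model import Ridge

essays = [
    "The quick brown fox jumps over the lazy dog.",
    "Writing well takes practice, feedback, and careful revision over time.",
    "Short essay.",
]
scores = np.array([2.0, 4.0, 1.0])

def features(text):
    words = text.split()
    return [
        len(words),                          # essay length
        np.mean([len(w) for w in words]),    # mean word length
        len(set(words)) / len(words),        # type-token ratio
    ]

X = np.array([features(e) for e in essays])
model = Ridge(alpha=1.0).fit(X, scores)
print(model.predict(np.array([features("A new unseen essay about writing practice.")])))
```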
Peer reviewed
Wenting Chen; Meixiu Zhang – Language Awareness, 2024
While much research supports the benefits of computer-mediated collaborative writing (CW) in second language (L2) classrooms, the assessment of CW has received scant attention. This study proposed an assessment scheme that considers both products and processes in assessing online synchronous CW, and explored its effects on learners'…
Descriptors: Foreign Countries, College Students, English (Second Language), English Language Learners
Peer reviewed
Ray J. T. Liao; Renka Ohta; Kwangmin Lee – Language Testing, 2024
As integrated writing tasks in large-scale and classroom-based writing assessments have risen in popularity, research has increasingly concentrated on providing validity evidence. Given that most of these studies focus on adult second language learners rather than younger ones, this study examined the relationship between written…
Descriptors: Writing (Composition), Writing Evaluation, English Language Learners, Discourse Analysis
Peer reviewed
Linqian Ding; Di Zou – Education and Information Technologies, 2024
With the burgeoning popularity and swift advancements of automated writing evaluation (AWE) systems in language classrooms, scholarly and practical interest in this area has noticeably increased. This systematic review aims to comprehensively investigate current research on three prominent AWE systems: Grammarly, Pigai, and Criterion. Objectives…
Descriptors: Automation, Writing Evaluation, Literature Reviews, Computer Software
Peer reviewed
Xiaolong Cheng; Lawrence Jun Zhang – Asia-Pacific Education Researcher, 2024
While studies on teacher written feedback and automated writing evaluation (AWE) feedback have proliferated in recent decades, little attention has been paid to how AWE-teacher integrated feedback would influence students' engagement and their writing performance in second language (L2) writing. Against this backdrop, a quasi-experimental design…
Descriptors: Foreign Countries, English (Second Language), Second Language Learning, Second Language Instruction
Sayed Ali Reza Ahmadi – ProQuest LLC, 2024
The purpose of this study is to investigate how US-based First-Year Composition (FYC) instructors understand and facilitate metacognition in their classes and assess students' metacognition through exploratory, mixed-methods approaches. I argue that even if we understand the importance of metacognition generally for student populations, we…
Descriptors: Metacognition, Writing Instruction, Teaching Methods, Writing Teachers
Peer reviewed
Hamdollah Ravand; Farshad Effatpanah; Wenchao Ma; Jimmy de la Torre; Purya Baghaei; Olga Kunina-Habenicht – Applied Measurement in Education, 2024
The purpose of this study was to explore the nature of interactions among second/foreign language (L2) writing subskills. Two types of relationships were investigated: subskill-item and subskill-subskill relationships. To achieve the first purpose, using writing data obtained from the essays of 500 English as a foreign language (EFL)…
Descriptors: Second Language Learning, Writing Instruction, Writing Skills, Writing Tests
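The subskill-item relationship Ravand et al. investigate is conventionally encoded in cognitive diagnosis models as a Q-matrix: a binary map from scored items to the subskills they tap. Here is a small illustration with hypothetical subskills and entries, not the study's actual Q-matrix.

```python
# A minimal sketch of a Q-matrix linking writing items to subskills.
# Subskills and entries are hypothetical.
import numpy as np

subskills = ["content", "organization", "grammar", "vocabulary"]
# rows = rating items, 1 = the item requires that subskill
Q = np.array([
    [1, 1, 0, 0],   # item 1: content + organization
    [0, 0, 1, 1],   # item 2: grammar + vocabulary
    [1, 0, 1, 0],   # item 3: content + grammar
])

for i, row in enumerate(Q, start=1):
    needed = [s for s, q in zip(subskills, row) if q]
    print(f"item {i} taps: {', '.join(needed)}")
```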
Peer reviewed
Yishen Song; Qianta Zhu; Huaibo Wang; Qinhua Zheng – IEEE Transactions on Learning Technologies, 2024
Manually scoring and revising student essays has long been a time-consuming task for educators. With the rise of natural language processing techniques, automated essay scoring (AES) and automated essay revising (AER) have emerged to alleviate this burden. However, current AES and AER models require large amounts of training data and lack…
Descriptors: Scoring, Essays, Writing Evaluation, Computer Software
Peer reviewed
Susan Lang; Clinton Morrison Jr.; Kathleen Brawley – Writing Center Journal, 2024
What do writers do with the feedback they receive? While the answer will vary depending on the writer's experience and the rhetorical situation, understanding what writers do can provide important information for course redesign and professional development of tutors and instructors. In this first of two manuscripts, the authors examine how…
Descriptors: Laboratories, Writing (Composition), Writing Instruction, Tutoring
Peer reviewed
Anastasia Tzirides; Gabriela Zapata; Patrick Bolger; Bill Cope; Mary Kalantzis; Duane Searsmith – International Journal on E-Learning, 2024
This paper explores the integration of Generative Artificial Intelligence (GenAI) feedback into higher education. Specifically, it examines the views of 11 experienced instructors on fine-tuned GenAI formative feedback on student work in an online graduate program in the United States. The participants assessed sample GenAI reviews, and their…
Descriptors: Artificial Intelligence, Computer Software, Learning Experience, Feedback (Response)
Peer reviewed
Shahzad, Areeba; Wali, Aamir – Education and Information Technologies, 2022
Checking essays written by students is a very time-consuming task. Besides spelling and grammar, essays also need to be evaluated on semantic content, such as cohesion and coherence. In this study, we focus on one such aspect of semantic content: the topic of the essay. Putting it formally, given a prompt or essay-statement and an…
Descriptors: Computer Uses in Education, Essays, Writing Evaluation, Semantics
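The task Shahzad and Wali formalize, judging whether an essay is on the topic of its prompt, can be sketched with a simple baseline such as TF-IDF cosine similarity. The texts below are invented, and the paper's own method may well differ.

```python
# A minimal sketch of prompt-essay topic relevance via TF-IDF cosine
# similarity. Prompt and essay are hypothetical examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

prompt = "Discuss the impact of technology on education."
essay = "Classrooms increasingly rely on laptops and software, changing how students learn."

tfidf = TfidfVectorizer().fit_transform([prompt, essay])
relevance = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
print(f"topic relevance: {relevance:.2f}")  # higher = more on-topic
```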
Peer reviewed
Polat, Murat; Turhan, Nihan S.; Toraman, Cetin – Pegem Journal of Education and Instruction, 2022
Testing English writing skills can be multi-dimensional; thus, the study aimed to compare students' writing scores calculated according to Classical Test Theory (CTT) and the Multi-Facet Rasch Model (MFRM). The research was carried out in 2019 with 100 university students studying in a foreign language preparatory class and four experienced…
Descriptors: Comparative Analysis, Test Theory, Item Response Theory, Student Evaluation
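The CTT-versus-MFRM comparison in Polat, Turhan, and Toraman can be illustrated by contrasting a raw mean across raters (the CTT score) with a score corrected for rater severity, the facet that MFRM models explicitly. The ratings below are hypothetical, and the crude mean-deviation correction only gestures at what a real many-facet Rasch analysis estimates jointly.

```python
# A minimal sketch contrasting CTT raw means with a severity-adjusted score.
# Ratings are hypothetical (5 students x 4 raters, 0-10 scale); a real MFRM
# analysis estimates student, rater, and task facets jointly.
import numpy as np

ratings = np.array([
    [7, 6, 8, 5],
    [5, 4, 6, 3],
    [9, 8, 9, 7],
    [6, 5, 7, 4],
    [8, 7, 8, 6],
], dtype=float)

ctt_scores = ratings.mean(axis=1)                 # CTT: raw mean per student
severity = ratings.mean(axis=0) - ratings.mean()  # + = lenient rater, - = severe
adjusted = (ratings - severity).mean(axis=1)      # severity-corrected mean

print("CTT:", ctt_scores)
print("severity-adjusted:", adjusted)
```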