Showing 1 to 15 of 18 results
Reese Butterfuss; Rod D. Roscoe; Laura K. Allen; Kathryn S. McCarthy; Danielle S. McNamara – Journal of Educational Computing Research, 2022
The present study examined the extent to which adaptive feedback and just-in-time writing strategy instruction improved the quality of high school students' persuasive essays in the context of the Writing Pal (W-Pal). W-Pal is a technology-based writing tool that integrates automated writing evaluation into an intelligent tutoring system. Students…
Descriptors: High School Students, Writing Evaluation, Writing Instruction, Feedback (Response)
Peer reviewed
Li, Xu; Ouyang, Fan; Liu, Jianwen; Wei, Chengkun; Chen, Wenzhi – Journal of Educational Computing Research, 2023
The computer-supported writing assessment (CSWA) has been widely used to reduce instructor workload and provide real-time feedback. Interpretability of CSWA draws extensive attention because it can benefit the validity, transparency, and knowledge-aware feedback of academic writing assessments. This study proposes a novel assessment tool,…
Descriptors: Computer Assisted Testing, Writing Evaluation, Feedback (Response), Natural Language Processing
Peer reviewed
Qian, Leyi; Zhao, Yali; Cheng, Yan – Journal of Educational Computing Research, 2020
Automated writing scoring can not only provide holistic scores but also instant and corrective feedback on L2 learners' writing quality. Its use has been increasing throughout China and internationally. Given these advantages, the past several years have witnessed the emergence and growth of writing evaluation products in China. To the best of our…
Descriptors: Foreign Countries, Automation, Scoring, Writing (Composition)
Peer reviewed
Mohsen, Mohammed Ali – Journal of Educational Computing Research, 2022
Written corrective feedback for improving L2 writing skills has been a debatable issue for more than two decades. The aims of this meta-analysis are to (1) provide a quantitative measure of the effect of computer-generated written feedback for improving L2 writing skills and (2) verify how moderators (i.e., adopted technology, task types, and…
Descriptors: Computer Assisted Instruction, Teaching Methods, Second Language Learning, Second Language Instruction
Peer reviewed
Zhai, Na; Ma, Xiaomei – Journal of Educational Computing Research, 2023
Automated writing evaluation (AWE) has been frequently used to provide feedback on student writing. Many empirical studies have examined the effectiveness of AWE on writing quality, but the results were inconclusive. Thus, the magnitude of AWE's overall effect and factors influencing its effectiveness across studies remained unclear. This study…
Descriptors: Writing Evaluation, Feedback (Response), Meta Analysis, English (Second Language)
Peer reviewed
Wilson, Joshua; Roscoe, Rod D. – Journal of Educational Computing Research, 2020
The present study extended research on the effectiveness of automated writing evaluation (AWE) systems. Sixth graders were randomly assigned by classroom to an AWE condition that used "Project Essay Grade Writing" (n = 56) or a word-processing condition that used Google Docs (n = 58). Effectiveness was evaluated using multiple metrics:…
Descriptors: Automation, Writing Evaluation, Feedback (Response), Instructional Effectiveness
Peer reviewed
Seifried, Eva; Lenhard, Wolfgang; Spinath, Birgit – Journal of Educational Computing Research, 2017
Writing essays and receiving feedback can be useful for fostering students' learning and motivation. When faced with large class sizes, it is desirable to identify students who might particularly benefit from feedback. In this article, we tested the potential of Latent Semantic Analysis (LSA) for identifying poor essays. A total of 14 teaching…
Descriptors: Computer Assisted Testing, Computer Software, Essays, Writing Evaluation
Peer reviewed
Park, Juhwa; Cho, Kwangsu – Journal of Educational Computing Research, 2017
Previous research has shown the effectiveness of peer reviewing on the improvement of writing quality. However, the fact that students themselves, arguably novices, judged the improvement leads to concerns about the validity of peer reviewing. We measured writing quality before and after peer reviewing using Coh-Metrix, which computationally…
Descriptors: Peer Evaluation, Computational Linguistics, Connected Discourse, Writing Improvement
Peer reviewed
Seifried, Eva; Lenhard, Wolfgang; Baier, Herbert; Spinath, Birgit – Journal of Educational Computing Research, 2012
This study investigates the potential of a software tool based on Latent Semantic Analysis (LSA; Landauer, McNamara, Dennis, & Kintsch, 2007) to automatically evaluate complex German texts. A sample of N = 94 German university students provided written answers to questions that involved a high amount of analytical reasoning and evaluation.…
Descriptors: Foreign Countries, Computer Software, Computer Software Evaluation, Computer Uses in Education
Peer reviewed
Ice, Phil; Swan, Karen; Diaz, Sebastian; Kupczynski, Lori; Swan-Dagen, Allison – Journal of Educational Computing Research, 2010
This article used work from the writing assessment literature to develop a framework for assessing the impact and perceived value of written, audio, and combined written and audio feedback strategies across four global and 22 discrete dimensions of feedback. Using a quasi-experimental research design, students at three U.S. universities were…
Descriptors: Feedback (Response), Writing Evaluation, Education Courses, Teacher Education Programs
Peer reviewed
Barker, Randolph T.; Pearce, C. Glenn – Journal of Educational Computing Research, 1995
Analyzed 17 personal attributes of 160 undergraduate students who wrote reports on a computer or by hand, and compared differences in the quality of computer and handwritten reports with each student's personal attributes. Concludes that some attributes do relate to computer writing quality. (JMV)
Descriptors: Comparative Analysis, Handwriting, Higher Education, Personality Traits
Peer reviewed
Kelly, P. Adam – Journal of Educational Computing Research, 2005
Powers, Burstein, Chodorow, Fowles, and Kukich (2002) suggested that automated essay scoring (AES) may benefit from the use of "general" scoring models designed to score essays irrespective of the prompt for which an essay was written. They reasoned that such models may enhance score credibility by signifying that an AES system measures the same…
Descriptors: Essays, Models, Writing Evaluation, Validity
Peer reviewed
Lemaire, Benoit; Dessus, Philippe – Journal of Educational Computing Research, 2001
Describes Apex (Assistant for Preparing Exams), a tool for evaluating student essays based on their content. By comparing an essay and the text of a given course on a semantic basis, the system can measure how well the essay matches the text. Various assessments are presented to the student regarding the topic, outline, and coherence of the essay.…
Descriptors: Computer Assisted Testing, Computer Oriented Programs, Computer Uses in Education, Educational Technology
Peer reviewed
Powers, Donald E.; Burstein, Jill C.; Chodorow, Martin S.; Fowles, Mary E.; Kukich, Karen – Journal of Educational Computing Research, 2002
Discusses the validity of automated, or computer-based, scoring for improving the cost effectiveness of performance assessments and describes a study that examined the relationship of scores from a graduate-level writing assessment to several independent, non-test indicators of examinees' writing skills, both for automated scores and for scores…
Descriptors: Computer Uses in Education, Cost Effectiveness, Graduate Study, Intermode Differences
Peer reviewed
Riedel, Eric; Dexter, Sara L.; Scharber, Cassandra; Doering, Aaron – Journal of Educational Computing Research, 2006
Research on computer-based writing evaluation has only recently focused on the potential for providing formative feedback rather than summative assessment. This study tests the impact of an automated essay scorer (AES) that provides formative feedback on essay drafts written as part of a series of online teacher education case studies. Seventy…
Descriptors: Preservice Teacher Education, Writing Evaluation, Case Studies, Formative Evaluation