Showing all 8 results
Reese Butterfuss; Rod D. Roscoe; Laura K. Allen; Kathryn S. McCarthy; Danielle S. McNamara – Journal of Educational Computing Research, 2022
The present study examined the extent to which adaptive feedback and just-in-time writing strategy instruction improved the quality of high school students' persuasive essays in the context of the Writing Pal (W-Pal). W-Pal is a technology-based writing tool that integrates automated writing evaluation into an intelligent tutoring system. Students…
Descriptors: High School Students, Writing Evaluation, Writing Instruction, Feedback (Response)
Peer reviewed
Qian, Leyi; Zhao, Yali; Cheng, Yan – Journal of Educational Computing Research, 2020
Automated writing scoring can provide not only holistic scores but also instant, corrective feedback on L2 learners' writing quality. Its use has been increasing throughout China and internationally. Given these advantages, the past several years have witnessed the emergence and growth of writing evaluation products in China. To the best of our…
Descriptors: Foreign Countries, Automation, Scoring, Writing (Composition)
Peer reviewed
Seifried, Eva; Lenhard, Wolfgang; Spinath, Birgit – Journal of Educational Computing Research, 2017
Writing essays and receiving feedback can be useful for fostering students' learning and motivation. When faced with large class sizes, it is desirable to identify students who might particularly benefit from feedback. In this article, we tested the potential of Latent Semantic Analysis (LSA) for identifying poor essays. A total of 14 teaching…
Descriptors: Computer Assisted Testing, Computer Software, Essays, Writing Evaluation
Peer reviewed
Seifried, Eva; Lenhard, Wolfgang; Baier, Herbert; Spinath, Birgit – Journal of Educational Computing Research, 2012
This study investigates the potential of a software tool based on Latent Semantic Analysis (LSA; Landauer, McNamara, Dennis, & Kintsch, 2007) to automatically evaluate complex German texts. A sample of N = 94 German university students provided written answers to questions that involved a high amount of analytical reasoning and evaluation.…
Descriptors: Foreign Countries, Computer Software, Computer Software Evaluation, Computer Uses in Education
Peer reviewed
Kelly, P. Adam – Journal of Educational Computing Research, 2005
Powers, Burstein, Chodorow, Fowles, and Kukich (2002) suggested that automated essay scoring (AES) may benefit from the use of "general" scoring models designed to score essays irrespective of the prompt for which an essay was written. They reasoned that such models may enhance score credibility by signifying that an AES system measures the same…
Descriptors: Essays, Models, Writing Evaluation, Validity
Peer reviewed
Lemaire, Benoit; Dessus, Philippe – Journal of Educational Computing Research, 2001
Describes Apex (Assistant for Preparing Exams), a tool for evaluating student essays based on their content. By comparing an essay and the text of a given course on a semantic basis, the system can measure how well the essay matches the text. Various assessments are presented to the student regarding the topic, outline, and coherence of the essay.…
Descriptors: Computer Assisted Testing, Computer Oriented Programs, Computer Uses in Education, Educational Technology
Peer reviewed
Wolfe, Edward W.; And Others – Journal of Educational Computing Research, 1996
Investigates how word processing experience influences tenth grade student performance on a writing assessment. Examines factors influencing a student's decision about using word processors for writing; relationship between experience with the technology and scores on word processed essays; and differences in length, neatness, mechanical…
Descriptors: Academic Achievement, Computer Attitudes, Essays, Grade 10
Peer reviewed
Miller, Tristan – Journal of Educational Computing Research, 2003
Latent semantic analysis (LSA) is an automated, statistical technique for comparing the semantic similarity of words or documents. In this article, I examine the application of LSA to automated essay scoring. I compare LSA methods to earlier statistical methods for assessing essay quality, and critically review contemporary essay-scoring systems…
Descriptors: Semantics, Test Scoring Machines, Essays, Semantic Differential
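Several of the abstracts above describe LSA-based essay evaluation: a term-document matrix is reduced via singular value decomposition, and document similarity is measured in the resulting latent space. A minimal sketch of that core step, using a hypothetical toy term-document matrix (the terms, counts, and dimension k=2 are illustrative assumptions, not taken from any of the cited systems):

```python
# Illustrative LSA sketch: truncated SVD plus cosine similarity in latent space.
import numpy as np

# Hypothetical toy term-document count matrix: rows = terms, columns = 4 short "essays".
X = np.array([
    [2, 1, 0, 0],   # term "essay"
    [1, 2, 0, 1],   # term "score"
    [0, 0, 3, 1],   # term "semantic"
    [0, 1, 2, 2],   # term "analysis"
], dtype=float)

# Truncated SVD keeps only the k strongest latent dimensions.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
docs = (np.diag(s[:k]) @ Vt[:k]).T   # each row: one document in latent space

def cosine(a, b):
    """Cosine similarity between two latent-space vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Compare document 0 against the other documents in the reduced space.
sims = [cosine(docs[0], docs[j]) for j in range(1, 4)]
print(sims)
```

In an essay-scoring setting, the same comparison would typically be made between a student essay and reference texts (e.g., course material or pre-scored essays), with high latent-space similarity taken as evidence of topical coverage.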