Showing all 5 results
Peer reviewed
Zhai, Na; Ma, Xiaomei – Journal of Educational Computing Research, 2023
Automated writing evaluation (AWE) is frequently used to provide feedback on student writing. Many empirical studies have examined the effectiveness of AWE on writing quality, but their results have been inconclusive, so the magnitude of AWE's overall effect and the factors influencing its effectiveness across studies remain unclear. This study…
Descriptors: Writing Evaluation, Feedback (Response), Meta Analysis, English (Second Language)
Peer reviewed
Seifried, Eva; Lenhard, Wolfgang; Baier, Herbert; Spinath, Birgit – Journal of Educational Computing Research, 2012
This study investigates the potential of a software tool based on Latent Semantic Analysis (LSA; Landauer, McNamara, Dennis, & Kintsch, 2007) to automatically evaluate complex German texts. A sample of N = 94 German university students provided written answers to questions that involved a high degree of analytical reasoning and evaluation…
Descriptors: Foreign Countries, Computer Software, Computer Software Evaluation, Computer Uses in Education
Peer reviewed
Barker, Randolph T.; Pearce, C. Glenn – Journal of Educational Computing Research, 1995
Analyzed 17 personal attributes of 160 undergraduate students who wrote reports on a computer or by hand, and compared differences in the quality of computer-written and handwritten reports against each student's personal attributes. Concludes that some attributes do relate to computer writing quality. (JMV)
Descriptors: Comparative Analysis, Handwriting, Higher Education, Personality Traits
Peer reviewed
Kelly, P. Adam – Journal of Educational Computing Research, 2005
Powers, Burstein, Chodorow, Fowles, and Kukich (2002) suggested that automated essay scoring (AES) may benefit from the use of "general" scoring models designed to score essays irrespective of the prompt for which an essay was written. They reasoned that such models may enhance score credibility by signifying that an AES system measures the same…
Descriptors: Essays, Models, Writing Evaluation, Validity
Peer reviewed
Miller, Tristan – Journal of Educational Computing Research, 2003
Latent semantic analysis (LSA) is an automated, statistical technique for comparing the semantic similarity of words or documents. In this article, I examine the application of LSA to automated essay scoring. I compare LSA methods to earlier statistical methods for assessing essay quality, and critically review contemporary essay-scoring systems…
Descriptors: Semantics, Test Scoring Machines, Essays, Semantic Differential
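To make the technique Miller describes concrete, here is a minimal LSA sketch, assuming a toy corpus and NumPy (illustrative only; it does not reproduce any of the scoring systems he reviews): build a term-document matrix, truncate its SVD to k latent dimensions, and compare documents by cosine similarity in the reduced space.

    import numpy as np

    # Toy documents -- illustrative only.
    docs = [
        "the essay argues the thesis clearly",
        "the essay states its thesis and argues it well",
        "latent semantic analysis compares document similarity",
    ]

    # Term-document count matrix.
    vocab = sorted({w for d in docs for w in d.split()})
    index = {w: i for i, w in enumerate(vocab)}
    A = np.zeros((len(vocab), len(docs)))
    for j, d in enumerate(docs):
        for w in d.split():
            A[index[w], j] += 1

    # Truncated SVD: keep k latent dimensions; each document becomes
    # a row of (S_k @ Vt_k).T in the reduced semantic space.
    k = 2
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Documents 0 and 1 share vocabulary and meaning, so they should
    # score higher against each other than against document 2.
    print(cos(doc_vecs[0], doc_vecs[1]))
    print(cos(doc_vecs[0], doc_vecs[2]))

In essay scoring, the same idea is typically applied by projecting a student essay into the latent space and comparing it with pre-scored essays or model texts.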