Showing all 4 results
Peer reviewed
Seifried, Eva; Lenhard, Wolfgang; Baier, Herbert; Spinath, Birgit – Journal of Educational Computing Research, 2012
This study investigates the potential of a software tool based on Latent Semantic Analysis (LSA; Landauer, McNamara, Dennis, & Kintsch, 2007) to automatically evaluate complex German texts. A sample of N = 94 German university students provided written answers to questions that involved a high amount of analytical reasoning and evaluation.…
Descriptors: Foreign Countries, Computer Software, Computer Software Evaluation, Computer Uses in Education
Peer reviewed
Ice, Phil; Swan, Karen; Diaz, Sebastian; Kupczynski, Lori; Swan-Dagen, Allison – Journal of Educational Computing Research, 2010
This article used work from the writing assessment literature to develop a framework for assessing the impact and perceived value of written, audio, and combined written and audio feedback strategies across four global and 22 discrete dimensions of feedback. Using a quasi-experimental research design, students at three U.S. universities were…
Descriptors: Feedback (Response), Writing Evaluation, Education Courses, Teacher Education Programs
Peer reviewed
Lemaire, Benoit; Dessus, Philippe – Journal of Educational Computing Research, 2001
Describes Apex (Assistant for Preparing Exams), a tool for evaluating student essays based on their content. By comparing an essay and the text of a given course on a semantic basis, the system can measure how well the essay matches the text. Various assessments are presented to the student regarding the topic, outline, and coherence of the essay.…
Descriptors: Computer Assisted Testing, Computer Oriented Programs, Computer Uses in Education, Educational Technology
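The Seifried et al. and Lemaire & Dessus entries above both rest on the same general technique: Latent Semantic Analysis projects an essay and a body of reference text into a reduced term-document space, and the essay is scored by its semantic proximity to that text. A minimal sketch of that idea follows, assuming scikit-learn is available; it is an illustration only, not the Apex or Seifried et al. implementation, and the toy corpus and component count are placeholders.

# Hypothetical sketch of LSA-based essay-to-course-text comparison.
# Not the Apex system; corpus, dimensionality, and scoring are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

course_paragraphs = [
    "Latent semantic analysis builds a reduced term-document space.",
    "Essays are scored by their semantic proximity to course material.",
    "Cosine similarity in the reduced space approximates topical overlap.",
]
essay = "The essay is compared to the course text in a reduced semantic space."

# Term-document matrix over the course text plus the essay.
vectorizer = TfidfVectorizer()
tdm = vectorizer.fit_transform(course_paragraphs + [essay])

# LSA step: truncated SVD projects documents into a low-rank semantic space.
# Two components is purely illustrative; real systems train a few hundred
# dimensions on a large corpus.
lsa = TruncatedSVD(n_components=2, random_state=0)
vectors = lsa.fit_transform(tdm)

# Score the essay by its best cosine match against the course paragraphs.
scores = cosine_similarity(vectors[-1:], vectors[:-1])[0]
print(f"Best semantic match with course material: {scores.max():.2f}")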
Peer reviewed
Powers, Donald E.; Burstein, Jill C.; Chodorow, Martin S.; Fowles, Mary E.; Kukich, Karen – Journal of Educational Computing Research, 2002
Discusses the validity of automated, or computer-based, scoring for improving the cost effectiveness of performance assessments and describes a study that examined the relationship of scores from a graduate level writing assessment to several independent, non-test indicators of examinees' writing skills, both for automated scores and for scores…
Descriptors: Computer Uses in Education, Cost Effectiveness, Graduate Study, Intermode Differences