Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 1 |
Since 2016 (last 10 years) | 1 |
Since 2006 (last 20 years) | 4 |
Descriptor
Computer Assisted Testing | 8 |
Comparative Analysis | 4 |
Evaluation Methods | 4 |
Scoring | 4 |
Writing Evaluation | 4 |
Scoring Rubrics | 3 |
Computer Software | 2 |
Concept Mapping | 2 |
Educational Technology | 2 |
Essay Tests | 2 |
Essays | 2 |
Source
Journal of Educational Computing Research | 8 |
Author
Clariana, Roy B. | 2 |
Chen, Wenzhi | 1 |
Dexter, Sara L. | 1 |
Doering, Aaron | 1 |
Graham, Charles R. | 1 |
Harasym, Peter H. | 1 |
Kelly, P. Adam | 1 |
Koul, Ravinder | 1 |
Li, Xu | 1 |
Liu, Jianwen | 1 |
Miller, Tristan | 1 |
Publication Type
Journal Articles | 8 |
Reports - Descriptive | 3 |
Reports - Evaluative | 3 |
Reports - Research | 3 |
Tests/Questionnaires | 1 |
Education Level
Higher Education | 4 |
Postsecondary Education | 3 |
Li, Xu; Ouyang, Fan; Liu, Jianwen; Wei, Chengkun; Chen, Wenzhi – Journal of Educational Computing Research, 2023
Computer-supported writing assessment (CSWA) has been widely used to reduce instructor workload and provide real-time feedback. The interpretability of CSWA has drawn extensive attention because it can benefit the validity, transparency, and knowledge-aware feedback of academic writing assessment. This study proposes a novel assessment tool,…
Descriptors: Computer Assisted Testing, Writing Evaluation, Feedback (Response), Natural Language Processing
Koul, Ravinder; Clariana, Roy B.; Salehi, Roya – Journal of Educational Computing Research, 2005
This article reports the results of an investigation of the convergent criterion-related validity of two computer-based tools for scoring concept maps and essays as part of the ongoing formative evaluation of these tools. In pairs, participants researched a science topic online and created a concept map of the topic. Later, participants…
Descriptors: Scoring, Essay Tests, Test Validity, Formative Evaluation
Harasym, Peter H.; And Others – Journal of Educational Computing Research, 1993
Discusses the use of human markers to score write-in questions, focusing on a study that determined the feasibility of using a computer program to mark write-in responses on the Medical Council of Canada Qualifying Examination. The computer's performance was compared with that of physician markers.
Descriptors: Comparative Analysis, Computer Assisted Testing, Computer Software Development, Computer Software Evaluation
Graham, Charles R.; Tripp, Tonya; Wentworth, Nancy – Journal of Educational Computing Research, 2009
This study explores efforts at Brigham Young University to improve preservice candidates' technology integration using the Teacher Work Sample (TWS) as an assessment tool. Baseline data analyzed from 95 TWSs indicated that students predominantly used technology for productivity and information-presentation purposes even though…
Descriptors: Field Instruction, Work Sample Tests, Technology Integration, Educational Technology
Kelly, P. Adam – Journal of Educational Computing Research, 2005
Powers, Burstein, Chodorow, Fowles, and Kukich (2002) suggested that automated essay scoring (AES) may benefit from the use of "general" scoring models designed to score essays irrespective of the prompt for which an essay was written. They reasoned that such models may enhance score credibility by signifying that an AES system measures the same…
Descriptors: Essays, Models, Writing Evaluation, Validity
Riedel, Eric; Dexter, Sara L.; Scharber, Cassandra; Doering, Aaron – Journal of Educational Computing Research, 2006
Research on computer-based writing evaluation has only recently focused on the potential for providing formative feedback rather than summative assessment. This study tests the impact of an automated essay scorer (AES) that provides formative feedback on essay drafts written as part of a series of online teacher education case studies. Seventy…
Descriptors: Preservice Teacher Education, Writing Evaluation, Case Studies, Formative Evaluation
Miller, Tristan – Journal of Educational Computing Research, 2003
Latent semantic analysis (LSA) is an automated, statistical technique for comparing the semantic similarity of words or documents. In this article, I examine the application of LSA to automated essay scoring. I compare LSA methods to earlier statistical methods for assessing essay quality, and critically review contemporary essay-scoring systems…
Descriptors: Semantics, Test Scoring Machines, Essays, Semantic Differential
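For context, the general LSA technique Miller describes can be illustrated with a minimal sketch: build a term-document count matrix, reduce it with a truncated SVD, and compare documents by cosine similarity in the reduced space. The toy documents and the choice of k below are invented for illustration only and are not drawn from the article or from any particular essay-scoring system.

import numpy as np

docs = [
    "the cat sat on the mat",
    "a cat lay on a rug",
    "stock prices fell sharply today",
]

# Build a term-document count matrix.
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}
A = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        A[index[w], j] += 1

# Truncated SVD: keep k latent semantic dimensions.
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # one k-dimensional vector per document

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The two cat sentences score as more similar to each other
# than either does to the unrelated finance sentence.
print(cosine(doc_vecs[0], doc_vecs[1]))
print(cosine(doc_vecs[0], doc_vecs[2]))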
Clariana, Roy B.; Wallace, Patricia – Journal of Educational Computing Research, 2007
This proof-of-concept investigation describes a computer-based approach for deriving the knowledge structure of individuals and of groups from their written essays, and considers the convergent criterion-related validity of the computer-based scores relative to human rater essay scores and multiple-choice test scores. After completing a…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Construct Validity, Cognitive Structures