Publication Date
In 2025 | 0 |
Since 2024 | 1 |
Since 2021 (last 5 years) | 2 |
Since 2016 (last 10 years) | 4 |
Since 2006 (last 20 years) | 6 |
Source
Journal of Educational… | 10 |
Author
Clariana, Roy B. | 2 |
Bao, Lina | 1 |
Bejar, Isaac I. | 1 |
Burstein, Jill C. | 1 |
Chen, Binbin | 1 |
Chen, Wenzhi | 1 |
Cheng, Yan | 1 |
Chodorow, Martin S. | 1 |
Craig, Scotty D. | 1 |
Fors, Uno G. H. | 1 |
Fowles, Mary E. | 1 |
Publication Type
Journal Articles | 10 |
Reports - Research | 6 |
Reports - Descriptive | 4 |
Reports - Evaluative | 1 |
Tests/Questionnaires | 1 |
Education Level
Higher Education | 3 |
Postsecondary Education | 3 |
Assessments and Surveys
Test of English as a Foreign… | 1 |
Chen, Binbin; Bao, Lina; Zhang, Rui; Zhang, Jingyu; Liu, Feng; Wang, Shuai; Li, Mingjiang – Journal of Educational Computing Research, 2024
Language learning has increasingly benefited from Computer-Assisted Language Learning (CALL) technologies, especially as Artificial Intelligence has been incorporated in recent years. Writing, widely acknowledged as a core component of language learning, is increasingly supported by CALL technologies such as Automated Writing Evaluation (AWE) and Automated Essay Scoring…
Descriptors: Computer Assisted Instruction, English (Second Language), Second Language Learning, Writing Instruction
Li, Xu; Ouyang, Fan; Liu, Jianwen; Wei, Chengkun; Chen, Wenzhi – Journal of Educational Computing Research, 2023
Computer-supported writing assessment (CSWA) is widely used to reduce instructor workload and provide real-time feedback. The interpretability of CSWA has drawn extensive attention because it can improve the validity, transparency, and knowledge-aware feedback of academic writing assessment. This study proposes a novel assessment tool,…
Descriptors: Computer Assisted Testing, Writing Evaluation, Feedback (Response), Natural Language Processing
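The interpretability concern in the entry above can be pictured with a small, hedged sketch: a transparent writing score built from a handful of human-readable text features whose contributions can be reported back to the writer. The features and weights below are illustrative assumptions, not the tool proposed in the study.

```python
# Minimal sketch of an interpretable, feature-based writing score.
# The features and weights are illustrative assumptions, not the
# assessment tool described in the study above.
import re

def extract_features(essay: str) -> dict:
    """Compute a few human-readable surface features of an essay."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "word_count": len(words),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "type_token_ratio": len({w.lower() for w in words}) / max(len(words), 1),
    }

# Hypothetical weights; a real system would estimate these from rated essays.
WEIGHTS = {"word_count": 0.01, "avg_sentence_length": 0.05, "type_token_ratio": 2.0}

def score(essay: str) -> tuple[float, dict]:
    """Return a score plus the per-feature contributions that explain it."""
    feats = extract_features(essay)
    contributions = {k: WEIGHTS[k] * v for k, v in feats.items()}
    return sum(contributions.values()), contributions

if __name__ == "__main__":
    total, parts = score("Automated feedback helps students revise. "
                         "It also reduces the grading workload for instructors.")
    print(f"score = {total:.2f}")
    for name, value in parts.items():
        print(f"  {name}: {value:.2f}")
```

Because every feature contribution is visible, the score can be explained to the writer, which is the kind of transparency the interpretability literature targets.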
Qian, Leyi; Zhao, Yali; Cheng, Yan – Journal of Educational Computing Research, 2020
Automated writing scoring can provide not only holistic scores but also instant, corrective feedback on L2 learners' writing quality, and its use has been increasing throughout China and internationally. Given these advantages, the past several years have witnessed the emergence and growth of writing evaluation products in China. To the best of our…
Descriptors: Foreign Countries, Automation, Scoring, Writing (Composition)
Fors, Uno G. H.; Gunning, William T. – Journal of Educational Computing Research, 2014
Virtual patient cases (VPs) are used for healthcare education and assessment. Most VP systems track user interactions so that they can be used for assessment, yet few studies have investigated how virtual exam cases should be scored and graded. We applied eight different scoring models to a data set from 154 students. Issues studied included the impact of…
Descriptors: Scoring Rubrics, Health Education, Evaluation Methods, Case Method (Teaching Technique)
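The abstract above does not detail the eight scoring models, so the sketch below only illustrates the general idea of applying alternative scoring rules to the same logged virtual-patient interactions. The two rules and the sample logs are assumptions for illustration.

```python
# Sketch of applying alternative scoring models to the same logged
# virtual-patient interactions. The two rules and the sample data are
# illustrative assumptions, not the eight models from the study above.

# Each student log: actions taken, flagged as relevant or irrelevant.
student_logs = {
    "student_a": {"relevant": 9, "irrelevant": 1, "total_relevant_in_case": 10},
    "student_b": {"relevant": 7, "irrelevant": 6, "total_relevant_in_case": 10},
}

def coverage_score(log: dict) -> float:
    """Reward only coverage of the relevant actions in the case."""
    return log["relevant"] / log["total_relevant_in_case"]

def penalty_score(log: dict) -> float:
    """Also penalize irrelevant actions (e.g., unnecessary tests)."""
    raw = (log["relevant"] - 0.5 * log["irrelevant"]) / log["total_relevant_in_case"]
    return max(raw, 0.0)

for student, log in student_logs.items():
    print(student,
          f"coverage={coverage_score(log):.2f}",
          f"with_penalty={penalty_score(log):.2f}")
```

Comparing how the same log ranks under different rules is one way to study the scoring-model effects the entry describes.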
Twyford, Jessica; Craig, Scotty D. – Journal of Educational Computing Research, 2017
Observational tutoring, which reuses dialogue from previous successful tutoring sessions, has been found to be an effective method for teaching a variety of subjects. While it has been shown that content can be learned through observational tutoring, it has yet to be examined whether a secondary behavior such as goal setting can be influenced. The present…
Descriptors: Pretests Posttests, Physics, Science Instruction, Teaching Methods
Koul, Ravinder; Clariana, Roy B.; Salehi, Roya – Journal of Educational Computing Research, 2005
This article reports the results of an investigation of the convergent criterion-related validity of two computer-based tools for scoring concept maps and essays as part of the ongoing formative evaluation of these tools. In pairs, participants researched a science topic online and created a concept map of the topic. Later, participants…
Descriptors: Scoring, Essay Tests, Test Validity, Formative Evaluation
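Convergent criterion-related validity, as investigated in the entry above, is typically summarized by correlating the tool's scores with an established criterion such as human ratings. A minimal sketch follows, with made-up scores standing in for the concept-map and essay data.

```python
# Minimal sketch of convergent criterion-related validity: correlate
# computer-based scores with a criterion measure (e.g., human ratings).
# The score lists below are made-up illustrations, not data from the study.
from math import sqrt

def pearson(x: list[float], y: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)

computer_concept_map = [0.62, 0.71, 0.55, 0.80, 0.66, 0.90]   # tool's scores
criterion_ratings    = [3.0, 3.5, 2.5, 4.0, 3.0, 4.5]         # human criterion

print(f"convergent validity coefficient r = "
      f"{pearson(computer_concept_map, criterion_ratings):.2f}")
```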

Harasym, Peter H.; And Others – Journal of Educational Computing Research, 1993
Discusses the use of human markers to score responses to write-in questions, focusing on a study that assessed the feasibility of using a computer program to mark write-in responses for the Medical Council of Canada Qualifying Examination. The computer's performance was compared with that of physician markers. (seven references) (LRW)
Descriptors: Comparative Analysis, Computer Assisted Testing, Computer Software Development, Computer Software Evaluation
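Feasibility in a comparison like the one above is commonly judged by how often the program's marks agree with the human markers'. The sketch below computes simple percent agreement and Cohen's kappa on made-up mark pairs; the data and the binary correct/wrong coding are assumptions, not the study's procedure.

```python
# Sketch of comparing a computer marker with a human marker on the same
# write-in responses, using percent agreement and Cohen's kappa.
# The mark sequences are made-up illustrations, not study data.
from collections import Counter

computer_marks = ["correct", "correct", "wrong", "correct", "wrong", "correct"]
human_marks    = ["correct", "wrong",   "wrong", "correct", "wrong", "correct"]

def percent_agreement(a: list[str], b: list[str]) -> float:
    """Fraction of responses on which the two markers give the same mark."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a: list[str], b: list[str]) -> float:
    """Agreement corrected for the agreement expected by chance."""
    n = len(a)
    p_observed = percent_agreement(a, b)
    freq_a, freq_b = Counter(a), Counter(b)
    p_expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(a) | set(b))
    return (p_observed - p_expected) / (1 - p_expected)

print(f"percent agreement = {percent_agreement(computer_marks, human_marks):.2f}")
print(f"Cohen's kappa     = {cohens_kappa(computer_marks, human_marks):.2f}")
```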

Whalen, Sean J.; Bejar, Isaac I. – Journal of Educational Computing Research, 1998
Discusses the design of online scoring software for computer-based education supporting high psychometric and fairness standards. Describes a system from the point of view of graders and supervisors; the design of the underlying database; data integrity and confidentiality; and the advantages of database design and online grading. (PEN)
Descriptors: Computer Assisted Instruction, Computer Software Development, Confidentiality, Databases

Powers, Donald E.; Burstein, Jill C.; Chodorow, Martin S.; Fowles, Mary E.; Kukich, Karen – Journal of Educational Computing Research, 2002
Discusses the validity of automated, or computer-based, scoring for improving the cost effectiveness of performance assessments and describes a study that examined the relationship of scores from a graduate-level writing assessment to several independent, non-test indicators of examinees' writing skills, both for automated scores and for scores…
Descriptors: Computer Uses in Education, Cost Effectiveness, Graduate Study, Intermode Differences
Clariana, Roy B.; Wallace, Patricia – Journal of Educational Computing Research, 2007
This proof-of-concept investigation describes a computer-based approach for deriving the knowledge structure of individuals and of groups from their written essays, and considers the convergent criterion-related validity of the computer-based scores relative to human rater essay scores and multiple-choice test scores. After completing a…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Construct Validity, Cognitive Structures
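The abstract above does not spell out the derivation procedure, so the sketch below only gestures at the general approach: build a term-by-term proximity structure from an essay (here, simple sentence co-occurrence over a fixed key-term list) and compare it with an expert referent. The key terms, texts, and cosine comparison are assumptions for illustration, not the study's method.

```python
# Sketch of deriving a simple "knowledge structure" from essay text:
# count how often pairs of key terms co-occur in the same sentence,
# then compare the resulting counts with an expert referent.
# The key terms, texts, and cosine comparison are illustrative assumptions.
import re
from itertools import combinations
from math import sqrt

KEY_TERMS = ["heat", "temperature", "energy", "molecule"]

def cooccurrence(text: str) -> dict:
    """Pairwise sentence co-occurrence counts over the fixed key-term list."""
    counts = {pair: 0 for pair in combinations(KEY_TERMS, 2)}
    for sentence in re.split(r"[.!?]+", text.lower()):
        present = [t for t in KEY_TERMS if t in sentence]
        for pair in combinations(present, 2):
            counts[pair] += 1
    return counts

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two co-occurrence count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

expert_text = ("Heat is the transfer of energy. Temperature reflects the average "
               "energy of each molecule. A molecule gains energy as heat flows in.")
student_text = ("Heat makes the temperature go up. Energy moves between molecules "
                "when objects touch.")

similarity = cosine(cooccurrence(student_text), cooccurrence(expert_text))
print(f"structural similarity to expert referent = {similarity:.2f}")
```

A convergent-validity check in this spirit would then correlate such structural similarity scores with human rater essay scores and multiple-choice test scores, as the entry describes.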