Marking Essays on Screen: An Investigation into the Reliability of Marking Extended Subjective Texts
Johnson, Martin; Nadas, Rita; Bell, John F. – British Journal of Educational Technology, 2010
There is a growing body of research literature that considers how the mode of assessment, either computer-based or paper-based, might affect candidates' performances. Despite this, a fairly narrow literature shifts the focus of attention to those making assessment judgements and considers issues of assessor consistency when…
Descriptors: English Literature, Examiners, Evaluation Research, Evaluators
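Assessor consistency in studies like this is usually reported as a chance-corrected agreement statistic such as Cohen's kappa. As a minimal sketch (with invented marks, not the paper's data), agreement between two examiners over the same scripts can be computed as:

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        # Observed agreement: share of scripts given the same mark.
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Expected agreement: chance overlap of the two marginal distributions.
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (observed - expected) / (1 - expected)

    # Two examiners marking the same ten essays on a 0-4 scale (made-up data).
    print(cohens_kappa([3, 2, 4, 1, 3, 2, 0, 4, 3, 2],
                       [3, 2, 3, 1, 3, 2, 1, 4, 3, 2]))

Values near 1 indicate agreement well beyond chance; values near 0 indicate agreement no better than chance.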
Naude, Kevin A.; Greyling, Jean H.; Vogts, Dieter – Computers & Education, 2010
We present a novel approach to the automated marking of student programming assignments. Our technique quantifies the structural similarity between unmarked student submissions and marked solutions, and is the basis by which we assign marks. This is accomplished through an efficient novel graph similarity measure ("AssignSim"). Our experiments…
Descriptors: Grading, Assignments, Correlation, Interrater Reliability
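The AssignSim measure itself is not spelled out in the abstract; a toy illustration of the underlying idea, scoring an unmarked submission by its structural overlap with already-marked solutions, might look like the following, with Jaccard overlap of edge sets standing in for the paper's graph similarity measure:

    def graph_similarity(edges_a, edges_b):
        # Jaccard overlap of edge sets -- a crude stand-in for AssignSim.
        a, b = set(edges_a), set(edges_b)
        return len(a & b) / len(a | b) if a | b else 1.0

    def estimate_mark(submission, marked_solutions):
        # Take the mark of the most similar marked solution, scaled by similarity.
        sim, mark = max((graph_similarity(submission, g), m)
                        for g, m in marked_solutions)
        return sim * mark

    # Toy program graphs as caller/callee edge lists (hypothetical data).
    marked = [([("main", "read"), ("main", "sort"), ("sort", "swap")], 10),
              ([("main", "read"), ("main", "print")], 5)]
    print(estimate_mark([("main", "read"), ("main", "sort"), ("sort", "swap")],
                        marked))  # -> 10.0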
Coniam, David – ReCALL, 2009
This paper describes a study of the computer essay-scoring program BETSY. While the use of computers in rating written scripts has been criticised in some quarters for its lack of transparency and its poor fit with how human raters rate written scripts, a number of essay rating programs are available commercially, many of which claim to offer comparable…
Descriptors: Writing Tests, Scoring, Foreign Countries, Interrater Reliability
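BETSY is the Bayesian Essay Test Scoring sYstem, which assigns essays to score bands with Bayesian text classification. A minimal multinomial naive Bayes sketch of that broad approach (toy data, not BETSY's actual features or training set):

    import math
    from collections import Counter

    def train(essays_by_band):
        # One bag-of-words model per score band.
        models = {band: Counter(w for e in essays for w in e.split())
                  for band, essays in essays_by_band.items()}
        vocab = {w for counts in models.values() for w in counts}
        return models, vocab

    def score_band(essay, models, vocab):
        # Pick the band with the highest Laplace-smoothed log-likelihood.
        def loglik(counts):
            total = sum(counts.values())
            return sum(math.log((counts[w] + 1) / (total + len(vocab)))
                       for w in essay.split())
        return max(models, key=lambda band: loglik(models[band]))

    bands = {"high": ["clear thesis strong evidence"],
             "low": ["short vague answer"]}
    models, vocab = train(bands)
    print(score_band("a clear thesis with evidence", models, vocab))  # -> high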
Wang, Hao-Chuan; Chang, Chun-Yen; Li, Tsai-Yen – Computers & Education, 2008
The work aims to improve the assessment of creative problem-solving in science education by employing language technologies and computational-statistical machine learning methods to grade students' natural language responses automatically. To evaluate constructs like creative problem-solving with validity, open-ended questions that elicit…
Descriptors: Interrater Reliability, Earth Science, Problem Solving, Grading
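A simple computational-statistical baseline for this kind of automatic grading (an illustrative assumption, not the paper's actual models) is to score a response by its lexical similarity to reference answers already graded by human raters:

    import math
    from collections import Counter

    def cosine(u, v):
        # Cosine similarity between two bag-of-words vectors.
        dot = sum(u[w] * v[w] for w in u)
        norm = lambda c: math.sqrt(sum(x * x for x in c.values()))
        return dot / (norm(u) * norm(v)) if u and v else 0.0

    def grade(response, references):
        # Inherit the human grade of the most similar reference answer.
        vec = Counter(response.lower().split())
        return max((cosine(vec, Counter(r.lower().split())), g)
                   for r, g in references)[1]

    refs = [("erosion moves sediment downhill over time", 4),
            ("rocks are hard", 1)]
    print(grade("sediment is moved downhill by erosion", refs))  # -> 4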
Cordier, Deborah – ProQuest LLC, 2009
A renewed focus on foreign language (FL) learning and speech for communication has resulted in computer-assisted language learning (CALL) software developed with Automatic Speech Recognition (ASR). ASR features for FL pronunciation (Lafford, 2004) are functional components of CALL designs used for FL teaching and learning. The ASR features…
Descriptors: Feedback (Response), Computer Assisted Instruction, Validity, Computer Software
Pare, D. E.; Joordens, S. – Journal of Computer Assisted Learning, 2008
As class sizes increase, methods of assessment shift from costly traditional approaches (e.g. expert-graded writing assignments) to more economic and logistically feasible methods (e.g. multiple-choice testing, computer-automated scoring, or peer assessment). While each method of assessment has its merits, it is peer assessment in particular,…
Descriptors: Writing Assignments, Undergraduate Students, Teaching Assistants, Peer Evaluation
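When peer assessment stands in for expert grading at scale, the practical question is how to turn several noisy peer marks into one defensible grade. One common guard, assumed here for illustration rather than taken from the paper, is to trim the extremes before taking a central value:

    from statistics import median

    def aggregate_peer_marks(marks, trim=1):
        # Drop the highest and lowest peer marks, then take the median,
        # so a single careless or hostile rater cannot move the grade much.
        ordered = sorted(marks)
        core = ordered[trim:-trim] if len(ordered) > 2 * trim else ordered
        return median(core)

    print(aggregate_peer_marks([6, 7, 7, 8, 2]))  # -> 7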
Wen, Meichun Lydia; Tsai, Chin-Chung – Teaching in Higher Education, 2008
Online or web-based peer assessment is a valuable and effective way to help the learner to examine his or her learning progress, and teachers need to be familiar with the practice before they use it in their classrooms. Therefore, the purpose of our study was to design an online peer assessment activity for 37 inservice science and mathematics…
Descriptors: Teacher Education Curriculum, Education Courses, Peer Evaluation, Research Methodology
Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine – Journal of Technology, Learning, and Assessment, 2006
This report provides a two-part evaluation of the IntelliMetric automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test (GMAT). The IntelliMetric system's performance is first compared to that of individual human raters, a Bayesian system…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
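Comparisons of an automated scorer against human raters are conventionally summarised with exact and adjacent agreement rates. A minimal sketch with invented scores (not GMAT data):

    def agreement_rates(machine, human, window=1):
        # Exact: identical scores. Adjacent: within `window` points --
        # the usual headline statistics in automated essay scoring reports.
        pairs = list(zip(machine, human))
        exact = sum(m == h for m, h in pairs) / len(pairs)
        adjacent = sum(abs(m - h) <= window for m, h in pairs) / len(pairs)
        return exact, adjacent

    print(agreement_rates([4, 5, 3, 6, 4], [4, 6, 3, 4, 4]))  # -> (0.6, 0.8)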
Clariana, Roy B.; Wallace, Patricia – Journal of Educational Computing Research, 2007
This proof-of-concept investigation describes a computer-based approach for deriving the knowledge structure of individuals and of groups from their written essays, and considers the convergent criterion-related validity of the computer-based scores relative to human rater essay scores and multiple-choice test scores. After completing a…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Construct Validity, Cognitive Structures
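One simple way to derive a knowledge structure from free text, sketched here under assumptions since the paper's own algorithm is not reproduced in the abstract, is to link key concepts that co-occur within a short word window and score the essay's network by its overlap with an expert's:

    def concept_links(text, concepts, window=8):
        # Link key concepts that co-occur within `window` words of each other.
        words = text.lower().split()
        links = set()
        for i, w in enumerate(words):
            if w in concepts:
                for v in words[i + 1:i + window]:
                    if v in concepts and v != w:
                        links.add(frozenset((w, v)))
        return links

    def structure_score(essay_links, expert_links):
        # Overlap with the expert network as a convergent score.
        return len(essay_links & expert_links) / len(expert_links)

    concepts = {"erosion", "sediment", "river", "delta"}
    expert = concept_links(
        "the river carries sediment and erosion builds the delta", concepts)
    essay = concept_links("erosion lets the river move sediment", concepts)
    print(round(structure_score(essay, expert), 2))  # -> 0.5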
Solano-Flores, Guillermo; Raymond, Bruce; Schneider, Steven A. – 1997
The need for effective ways of monitoring the quality of scoring of portfolios resulted in the development of a software package that provides scoring leaders with updated information on their assessors' scoring quality. Assessors with computers enter data as they score, and this information is analyzed and reported to scoring leaders. The…
Descriptors: Art Teachers, Computer Assisted Testing, Computer Software, Computer Software Evaluation
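A monitoring system of this kind reduces to comparing each assessor's live scores with the leader's check scores on the same portfolios and flagging drift. A minimal sketch (the data layout and tolerance are illustrative assumptions, not details of the package described):

    def flag_drift(assessor_scores, check_scores, tol=0.5):
        # assessor_scores: {assessor: [(portfolio_id, score), ...]}
        # check_scores:    {portfolio_id: scoring leader's score}
        flagged = []
        for assessor, marks in assessor_scores.items():
            diffs = [s - check_scores[pid] for pid, s in marks]
            bias = sum(diffs) / len(diffs)
            if abs(bias) > tol:
                flagged.append((assessor, round(bias, 2)))
        return flagged

    scores = {"A01": [("p1", 4), ("p2", 5)],
              "A02": [("p1", 2), ("p2", 3)]}
    print(flag_drift(scores, {"p1": 3, "p2": 4}))  # -> [('A01', 1.0), ('A02', -1.0)]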
Lee, H. K. – Assessing Writing, 2004
This study aimed to comprehensively investigate the impact of a word-processor on an ESL writing assessment, comparing inter-rater reliability, the quality of written products, and the writing process across testing occasions with different writing media, as well as students' perception of a computer-delivered test. Writing samples of…
Descriptors: Writing Evaluation, Student Attitudes, Writing Tests, Testing
Yang, Yongwei; Buckendahl, Chad W.; Juszkiewicz, Piotr J.; Bhola, Dennison S. – Journal of Applied Testing Technology, 2005
With the continual progress of computer technologies, computer automated scoring (CAS) has become a popular tool for evaluating writing assessments. Research on applying these methodologies to new types of performance assessments is still emerging. While research has generally shown high agreement between CAS system generated scores and those…
Descriptors: Scoring, Validity, Interrater Reliability, Comparative Analysis
International Association for Development of the Information Society, 2012
The intention of the IADIS CELDA 2012 Conference was to address the main issues of evolving learning processes and supporting pedagogies and applications in the digital age. Advances in both cognitive psychology and computing have affected the educational arena. The convergence of these two disciplines is increasing at a…
Descriptors: Academic Achievement, Academic Persistence, Academic Support Services, Access to Computers