Showing all 10 results
Peer reviewed
Cleophas, Catherine; Hönnige, Christoph; Meisel, Frank; Meyer, Philipp – INFORMS Transactions on Education, 2023
As the COVID-19 pandemic motivated a shift to virtual teaching, exams have increasingly moved online too. Such online at-home exams may tempt students to collude and share materials and answers, and detecting such collusion is not easy when tech-savvy students take exams at home on their own devices. However, online exams'…
Descriptors: Computer Assisted Testing, Cheating, Identification, Essay Tests
Peer reviewed
Rupp, André A. – Applied Measurement in Education, 2018
This article discusses critical methodological design decisions for collecting, interpreting, and synthesizing empirical evidence during the design, deployment, and operational quality-control phases for automated scoring systems. The discussion is inspired by work on operational large-scale systems for automated essay scoring but many of the…
Descriptors: Design, Automation, Scoring, Test Scoring Machines
Westhuizen, Duan vd – Commonwealth of Learning, 2016
This work starts with a brief overview of education in developing countries to contextualise the use of the guidelines. Although this document is intended to be a practical tool, it is necessary to include some theoretical analysis of the concept of online assessment. This is given in Sections 3 and 4, together with the identification and…
Descriptors: Guidelines, Student Evaluation, Computer Assisted Testing, Evaluation Methods
Peer reviewed
Wolf, Kenneth; Dunlap, Joanna; Stevens, Ellen – Journal of Effective Teaching, 2012
This article describes ten key assessment practices for advancing student learning that all professors should be familiar with and strategically incorporate into their classrooms and programs. Each practice or concept is explained with examples and guidance for putting it into practice. The ten are: learning outcomes, performance assessments,…
Descriptors: Educational Assessment, Student Evaluation, Educational Practices, Outcomes of Education
Peer reviewed
Koul, Ravinder; Clariana, Roy B.; Salehi, Roya – Journal of Educational Computing Research, 2005
This article reports the results of an investigation of the convergent criterion-related validity of two computer-based tools for scoring concept maps and essays as part of the ongoing formative evaluation of these tools. In pairs, participants researched a science topic online and created a concept map of the topic. Later, participants…
Descriptors: Scoring, Essay Tests, Test Validity, Formative Evaluation
Lee, Yong-Won – 2001
An essay test is now an integral part of the computer-based Test of English as a Foreign Language (TOEFL-CBT). This paper provides a brief overview of the current TOEFL-CBT essay test, describes the operational procedures for essay scoring, including the Online Scoring Network (OSN) of the Educational Testing Service (ETS), and discusses major…
Descriptors: Computer Assisted Testing, English (Second Language), Essay Tests, Interrater Reliability
Peer reviewed
Clariana, Roy B.; Wallace, Patricia – Journal of Educational Computing Research, 2007
This proof-of-concept investigation describes a computer-based approach for deriving the knowledge structure of individuals and of groups from their written essays, and considers the convergent criterion-related validity of the computer-based scores relative to human rater essay scores and multiple-choice test scores. After completing a…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Construct Validity, Cognitive Structures
Peer reviewed
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as the computer technology that evaluates and scores the written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests
Peer reviewed
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater® has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation
Peer reviewed
Bray, Dorothy, Ed.; Belcher, Marcia J., Ed. – New Directions for Community Colleges, 1987
Three aspects of student assessment are addressed in this collection of essays: accountability issues and the political tensions that they reflect; assessment practices, the use and misuse of testing, and emerging directions; and the impact of assessment. The collection includes: (1) "Expansion, Quality, and Testing in American…
Descriptors: Access to Education, Community Colleges, Computer Assisted Testing, Educational Technology