Showing all 8 results
Peer reviewed
Stowell, Jeffrey R.; Bennett, Dan – Journal of Educational Computing Research, 2010
Increased use of course management software to administer course exams online for face-to-face classes raises the question of how well test anxiety and other emotions generalize from the classroom to an online setting. We hypothesized that administering regular course exams in an online format would reduce test anxiety experienced at the time of…
Descriptors: Test Anxiety, Computer Assisted Testing, Computer Uses in Education, Educational Technology
Peer reviewed
McNulty, John; Chandrasekhar, Arcot; Hoyt, Amy; Gruener, Gregory; Espiritu, Baltazar; Price, Ron, Jr. – Journal of Educational Computing Research, 2011
This report summarizes more than a decade of experiences with implementing computer-based testing across a 4-year medical curriculum. Practical considerations are given to the fields incorporated within an item database and their use in the creation and analysis of examinations, security issues in the delivery and integrity of examinations,…
Descriptors: Educational Research, Testing, Integrity, Computer Assisted Testing
Peer reviewed
Chang, Chi-Cheng – Journal of Educational Computing Research, 2009
The purpose of this study was to explore the self-evaluated effects of a web-based portfolio assessment system on various categories of student motivation. The subjects for this study were students in two computer classes at a junior high school. The experimental group used the web-based portfolio assessment system whereas the control…
Descriptors: Portfolios (Background Materials), Experimental Groups, Control Groups, Portfolio Assessment
Peer reviewed
Bodmann, Shawn M.; Robinson, Daniel H. – Journal of Educational Computing Research, 2004
This study investigated the effect of several different modes of test administration on scores and completion times. In Experiment 1, paper-based assessment was compared to computer-based assessment. Undergraduates completed the computer-based assessment faster than the paper-based assessment, with no difference in scores. Experiment 2 assessed…
Descriptors: Computer Assisted Testing, Higher Education, Undergraduate Students, Evaluation Methods
Peer reviewed
Pomplun, Mark; Ritchie, Timothy – Journal of Educational Computing Research, 2004
This study investigated the statistical and practical significance of context effects for items randomized within testlets for administration during a series of computerized non-adaptive tests. One hundred and twenty-five items from four primary school reading tests were studied. Logistic regression analyses identified from one to four items for…
Descriptors: Psychometrics, Context Effect, Effect Size, Primary Education
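The Pomplun and Ritchie abstract above mentions using logistic regression to identify items showing context effects. As a purely hypothetical sketch (the data, variable names, and model below are invented for illustration and are not taken from the article), one common way to check a single item is to regress item correctness on an ability proxy plus an item-position indicator and inspect the position coefficient:

```python
# Hypothetical sketch of a context-effect check for one item using
# logistic regression; examinee data here are simulated, not real.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500                                   # simulated examinees
ability = rng.normal(size=n)              # proxy for total test score
position = rng.integers(0, 2, size=n)     # 0 = early in testlet, 1 = late

# Simulate responses where item position shifts difficulty slightly.
logit = 0.8 * ability - 0.4 * position
p_correct = 1 / (1 + np.exp(-logit))
correct = rng.binomial(1, p_correct)

# Model: P(correct) ~ ability + position; a significant coefficient on
# position would suggest a context (ordering) effect for this item.
X = sm.add_constant(np.column_stack([ability, position]))
fit = sm.Logit(correct, X).fit(disp=False)
print(fit.summary2())
```

Repeating such a fit item by item, and flagging items whose position term is both statistically significant and practically large, mirrors the kind of screening the abstract describes.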
Peer reviewed
Riedel, Eric; Dexter, Sara L.; Scharber, Cassandra; Doering, Aaron – Journal of Educational Computing Research, 2006
Research on computer-based writing evaluation has only recently focused on the potential for providing formative feedback rather than summative assessment. This study tests the impact of an automated essay scorer (AES) that provides formative feedback on essay drafts written as part of a series of online teacher education case studies. Seventy…
Descriptors: Preservice Teacher Education, Writing Evaluation, Case Studies, Formative Evaluation
Peer reviewed
Harasym, Peter H.; And Others – Journal of Educational Computing Research, 1993
Discussion of the use of human markers to mark responses to write-in questions focuses on a study that determined the feasibility of using a computer program to mark write-in responses for the Medical Council of Canada Qualifying Examination. The computer's performance was compared with that of physician markers. (seven references) (LRW)
Descriptors: Comparative Analysis, Computer Assisted Testing, Computer Software Development, Computer Software Evaluation
Peer reviewed
Miller, Tristan – Journal of Educational Computing Research, 2003
Latent semantic analysis (LSA) is an automated, statistical technique for comparing the semantic similarity of words or documents. In this article, I examine the application of LSA to automated essay scoring. I compare LSA methods to earlier statistical methods for assessing essay quality, and critically review contemporary essay-scoring systems…
Descriptors: Semantics, Test Scoring Machines, Essays, Semantic Differential
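Miller's abstract above describes latent semantic analysis as a statistical technique for comparing the semantic similarity of words or documents. The following is a minimal, hypothetical sketch of that idea, not the article's implementation: the example texts and component count are invented, and scikit-learn's TruncatedSVD stands in for whatever LSA software the article reviews. It builds a term-document matrix, reduces it with a truncated SVD, and compares documents by cosine similarity in the reduced space.

```python
# Hypothetical LSA similarity sketch: TF-IDF term-document matrix,
# truncated SVD to a low-dimensional semantic space, cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

reference_essays = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Plants use sunlight, water, and carbon dioxide to make glucose.",
    "The water cycle moves water between the oceans, air, and land.",
]
student_essay = ["Plants turn sunlight and carbon dioxide into sugar."]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(reference_essays + student_essay)

# Reduce to a small number of latent semantic dimensions.
lsa = TruncatedSVD(n_components=2, random_state=0)
X_lsa = lsa.fit_transform(X)

# Similarity of the student essay to each reference essay.
print(cosine_similarity(X_lsa[-1:], X_lsa[:-1]))
```

In an essay-scoring setting, the similarity of a new essay to previously scored reference essays (or to source material) in this reduced space is the signal that LSA-based scorers typically exploit.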