Showing 1 to 15 of 16 results
Peer reviewed
Wang, Zhen; Cao, Yang; Gong, Shaoying – Journal of Educational Computing Research, 2023
Although learner characteristics have been identified as important moderator variables for feedback effectiveness, the question of why learners benefit differently from feedback has only received limited attention. In this study, we investigated: (1) whether learners' dominant goal orientation moderated the effects of computer-based elaborated…
Descriptors: Goal Orientation, Feedback (Response), Cues, Student Characteristics
Peer reviewed
Rosen, Yigal; Tager, Maryam – Journal of Educational Computing Research, 2014
Major educational initiatives in the world place great emphasis on fostering rich computer-based environments of assessment that make student thinking and reasoning visible. Using thinking tools engages students in a variety of critical and complex thinking, such as evaluating, analyzing, and decision making. The aim of this study was to explore…
Descriptors: Critical Thinking, Concept Mapping, Computer Assisted Testing, Foreign Countries
Peer reviewed
Kurby, Christopher A.; Magliano, Joseph P.; Dandotkar, Srikanth; Woehrle, James; Gilliam, Sara; McNamara, Danielle S. – Journal of Educational Computing Research, 2012
This study assessed whether and how self-explanation reading training, provided by iSTART (Interactive Strategy Training for Active Reading and Thinking), improves the effectiveness of comprehension processes. iSTART teaches students how to self-explain and which strategies will most effectively aid comprehension from moment to moment. We used…
Descriptors: Computer Assisted Testing, Federal Aid, Control Groups, Experimental Groups
Peer reviewed
Powers, Donald E. – Journal of Educational Computing Research, 2001
Tests the hypothesis that the introduction of computer-adaptive testing may help to alleviate test anxiety and diminish the relationship between test anxiety and test performance. Compares a sample of Graduate Record Examinations (GRE) General Test takers who took the computer-adaptive version of the test with another sample who took the…
Descriptors: Comparative Analysis, Computer Assisted Testing, Nonprint Media, Performance
Peer reviewed
Mason, B. Jean; Patry, Marc; Berstein, Daniel J. – Journal of Educational Computing Research, 2001
Discussion of adapting traditional paper-and-pencil tests to electronic formats focuses on a study of undergraduates that examined the equivalence between computer-based and traditional tests when the computer testing provided opportunities comparable to paper testing conditions. Results showed no difference between scores from the two test types.…
Descriptors: Comparative Analysis, Computer Assisted Testing, Higher Education, Intermode Differences
Peer reviewed
Kelly, P. Adam – Journal of Educational Computing Research, 2005
Powers, Burstein, Chodorow, Fowles, and Kukich (2002) suggested that automated essay scoring (AES) may benefit from the use of "general" scoring models designed to score essays irrespective of the prompt for which an essay was written. They reasoned that such models may enhance score credibility by signifying that an AES system measures the same…
Descriptors: Essays, Models, Writing Evaluation, Validity
Peer reviewed
Ward, Thomas J., Jr.; And Others – Journal of Educational Computing Research, 1989
Discussion of computer-assisted testing focuses on a study of college students that investigated whether a computerized test which incorporated traditional test taking interfaces had any effect on students' performance, anxiety level, or attitudes toward the computer. Results indicate no difference in performance but a significant difference in…
Descriptors: Academic Achievement, Comparative Analysis, Computer Assisted Testing, Higher Education
Peer reviewed
Pomplun, Mark; Custer, Michael – Journal of Educational Computing Research, 2005
This study investigated the equivalence of scores from computerized and paper-and-pencil formats of a series of K-3 reading screening tests. Concerns about score equivalence on the computerized formats were warranted because of the use of reading passages, computer unfamiliarity of primary school students, and teacher versus computer…
Descriptors: Screening Tests, Reading Tests, Family Income, Factor Analysis
Peer reviewed
Koul, Ravinder; Clariana, Roy B.; Salehi, Roya – Journal of Educational Computing Research, 2005
This article reports the results of an investigation of the convergent criterion-related validity of two computer-based tools for scoring concept maps and essays as part of the ongoing formative evaluation of these tools. In pairs, participants researched a science topic online and created a concept map of the topic. Later, participants…
Descriptors: Scoring, Essay Tests, Test Validity, Formative Evaluation
Peer reviewed
Vogel, Lora Ann – Journal of Educational Computing Research, 1994
Reports on a study conducted to evaluate how individual differences in anxiety levels affect performance on computer versus paper-and-pencil forms of verbal sections of the Graduate Record Examination. Contrary to the research hypothesis, analysis of scores revealed that extroverted and less computer-anxious subjects scored significantly lower on…
Descriptors: Comparative Analysis, Computer Anxiety, Computer Assisted Testing, Computer Attitudes
Peer reviewed
Applegate, Brooks – Journal of Educational Computing Research, 1993
Describes a study that was conducted to explore how kindergarten and second-grade students are able to structure and solve geometric analogy problems in a computer-based test and to compare the results to a paper-and-pencil test format. Use of the Test of Analogical Reasoning in Children is described. (18 references) (LRW)
Descriptors: Academically Gifted, Comparative Analysis, Computer Assisted Testing, Geometric Concepts
Peer reviewed
Frick, Theodore W. – Journal of Educational Computing Research, 1992
Discussion of expert systems and computerized adaptive tests describes two versions of EXSPRT, a new approach that combines uncertain inference in expert systems with sequential probability ratio test (SPRT) stopping rules. Results of two studies comparing EXSPRT to adaptive mastery testing based on item response theory and SPRT approaches are…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Expert Systems
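The SPRT stopping rule named in the abstract above is Wald's sequential probability ratio test: responses accumulate a log-likelihood ratio that triggers a mastery or nonmastery decision as soon as it crosses an error-rate-derived threshold. The following is a minimal sketch of that rule, not Frick's EXSPRT; the success probabilities and error rates are illustrative assumptions.

```python
import math

def sprt_mastery(responses, p_master=0.8, p_nonmaster=0.5,
                 alpha=0.05, beta=0.05):
    """Classify an examinee via Wald's SPRT on dichotomous item responses.

    p_master / p_nonmaster: assumed probabilities of a correct answer
    under the mastery / nonmastery hypotheses (illustrative values).
    alpha / beta: tolerated false-positive / false-negative rates.
    Returns (decision, number_of_items_used).
    """
    # Decision thresholds on the log-likelihood ratio
    upper = math.log((1 - beta) / alpha)   # cross it: declare "master"
    lower = math.log(beta / (1 - alpha))   # cross it: declare "nonmaster"

    llr = 0.0
    for i, correct in enumerate(responses, start=1):
        if correct:
            llr += math.log(p_master / p_nonmaster)
        else:
            llr += math.log((1 - p_master) / (1 - p_nonmaster))
        if llr >= upper:
            return "master", i       # stop early once evidence suffices
        if llr <= lower:
            return "nonmaster", i
    return "undecided", len(responses)
```

The appeal for adaptive mastery testing is the early stop: a run of clearly consistent responses ends the test after far fewer items than a fixed-length form would require.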
Peer reviewed
Lalley, James P. – Journal of Educational Computing Research, 1998
Compares the effectiveness of textual feedback to video feedback during two computer-assisted biology lessons administered to secondary students. Lessons consisted of a brief text introduction followed by multiple-choice questions with text or video feedback. Findings indicated that video feedback resulted in superior learning and comprehension,…
Descriptors: Comparative Analysis, Computer Assisted Instruction, Computer Assisted Testing, Feedback
Peer reviewed
Harasym, Peter H.; And Others – Journal of Educational Computing Research, 1993
Discussion of the use of human markers to mark responses on write-in questions focuses on a study that determined the feasibility of using a computer program to mark write-in responses for the Medical Council of Canada Qualifying Examination. The computer performance was compared with that of physician markers. (seven references) (LRW)
Descriptors: Comparative Analysis, Computer Assisted Testing, Computer Software Development, Computer Software Evaluation
Peer reviewed
Miller, Tristan – Journal of Educational Computing Research, 2003
Latent semantic analysis (LSA) is an automated, statistical technique for comparing the semantic similarity of words or documents. In this article, I examine the application of LSA to automated essay scoring. I compare LSA methods to earlier statistical methods for assessing essay quality, and critically review contemporary essay-scoring systems…
Descriptors: Semantics, Test Scoring Machines, Essays, Semantic Differential
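The LSA technique described in the abstract above reduces a term-by-document matrix to a low-rank semantic space and compares documents by cosine similarity there. Below is a minimal NumPy sketch of that core computation, not the scoring systems Miller reviews; the rank `k` and the count matrix are illustrative assumptions.

```python
import numpy as np

def lsa_doc_similarity(term_doc, k=2):
    """Project a term-by-document count matrix into a k-dimensional
    latent semantic space via truncated SVD, then return the matrix
    of pairwise cosine similarities between documents."""
    # Truncated SVD: keep only the k largest singular values/directions
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # one row per document
    # Cosine similarity between every pair of document vectors
    norms = np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    unit = doc_vecs / norms
    return unit @ unit.T

# Tiny illustration: docs 0 and 1 share vocabulary, doc 2 does not,
# so sim[0, 1] is high and sim[0, 2] is near zero.
counts = np.array([[2.0, 2.0, 0.0],
                   [1.0, 1.0, 0.0],
                   [0.0, 0.0, 3.0]])
sims = lsa_doc_similarity(counts)
```

In essay-scoring applications, the same projection is applied to a student essay and to reference texts, and the cosine in the latent space serves as a semantic-overlap score.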