Showing all 12 results
Peer reviewed
Direct link
Li, Xu; Ouyang, Fan; Liu, Jianwen; Wei, Chengkun; Chen, Wenzhi – Journal of Educational Computing Research, 2023
The computer-supported writing assessment (CSWA) has been widely used to reduce instructor workload and provide real-time feedback. Interpretability of CSWA draws extensive attention because it can benefit the validity, transparency, and knowledge-aware feedback of academic writing assessments. This study proposes a novel assessment tool,…
Descriptors: Computer Assisted Testing, Writing Evaluation, Feedback (Response), Natural Language Processing
Peer reviewed
Direct link
Thompson, Meredith Myra; Braude, Eric John – Journal of Educational Computing Research, 2016
The assessment of learning in large online courses requires tools that are valid, reliable, easy to administer, and can be automatically scored. We have evaluated an online assessment and learning tool called Knowledge Assembly, or Knowla. Knowla measures a student's knowledge in a particular subject by having the student assemble a set of…
Descriptors: Computer Assisted Testing, Teaching Methods, Online Courses, Critical Thinking
Peer reviewed
Direct link
Stowell, Jeffrey R.; Bennett, Dan – Journal of Educational Computing Research, 2010
Increased use of course management software to administer course exams online for face-to-face classes raises the question of how well test anxiety and other emotions generalize from the classroom to an online setting. We hypothesized that administering regular course exams in an online format would reduce test anxiety experienced at the time of…
Descriptors: Test Anxiety, Computer Assisted Testing, Computer Uses in Education, Educational Technology
Peer reviewed
Mason, B. Jean; Patry, Marc; Berstein, Daniel J. – Journal of Educational Computing Research, 2001
Discussion of adapting traditional paper and pencil tests to electronic formats focuses on a study of undergraduates that examined the equivalence between computer-based and traditional tests when the computer testing provided opportunities comparable to paper testing conditions. Results showed no difference between scores from the two test types.…
Descriptors: Comparative Analysis, Computer Assisted Testing, Higher Education, Intermode Differences
Peer reviewed
Direct link
Bodmann, Shawn M.; Robinson, Daniel H. – Journal of Educational Computing Research, 2004
This study investigated the effect of several different modes of test administration on scores and completion times. In Experiment 1, paper-based assessment was compared to computer-based assessment. Undergraduates completed the computer-based assessment faster than the paper-based assessment, with no difference in scores. Experiment 2 assessed…
Descriptors: Computer Assisted Testing, Higher Education, Undergraduate Students, Evaluation Methods
Peer reviewed
Jacobs, Ronald L.; And Others – Journal of Educational Computing Research, 1985
This study adapted the Hidden Figures Test for use on PLATO and determined the reliability of the computerized version compared to the paper and pencil version. Results indicate the test was successfully adapted with some modifications, and it was judged reliable although it may be measuring additional constructs. (MBR)
Descriptors: Computer Assisted Testing, Educational Research, Field Dependence Independence, Higher Education
Peer reviewed
Ward, Thomas J., Jr.; And Others – Journal of Educational Computing Research, 1989
Discussion of computer-assisted testing focuses on a study of college students that investigated whether a computerized test which incorporated traditional test taking interfaces had any effect on students' performance, anxiety level, or attitudes toward the computer. Results indicate no difference in performance but a significant difference in…
Descriptors: Academic Achievement, Comparative Analysis, Computer Assisted Testing, Higher Education
Peer reviewed
Vogel, Lora Ann – Journal of Educational Computing Research, 1994
Reports on a study conducted to evaluate how individual differences in anxiety levels affect performance on computer versus paper-and-pencil forms of verbal sections of the Graduate Record Examination. Contrary to the research hypothesis, analysis of scores revealed that extroverted and less computer anxious subjects scored significantly lower on…
Descriptors: Comparative Analysis, Computer Anxiety, Computer Assisted Testing, Computer Attitudes
Peer reviewed
Shermis, Mark D.; Mzumara, Howard R.; Bublitz, Scott T. – Journal of Educational Computing Research, 2001
This study of undergraduates examined differences between computer adaptive testing (CAT) and self-adaptive testing (SAT), including feedback conditions and gender differences. Results of the Test Anxiety Inventory, Computer Anxiety Rating Scale, and a Student Attitude Questionnaire showed measurement efficiency is differentially affected by test…
Descriptors: Adaptive Testing, Computer Anxiety, Computer Assisted Testing, Gender Issues
Peer reviewed
Frick, Theodore W. – Journal of Educational Computing Research, 1989
Demonstrates how Bayesian reasoning can be used to adjust the length of computer-guided practice exercises and computer-based tests to help make mastery or nonmastery decisions. Individualization of instruction is discussed, and the results of an empirical study that used the sequential probability ratio test (SPRT) are presented. (25 references)…
Descriptors: Adaptive Testing, Computer Assisted Instruction, Computer Assisted Testing, Higher Education
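The Frick (1989) abstract describes using the sequential probability ratio test (SPRT) to decide, item by item, when enough evidence has accumulated to classify an examinee as a master or nonmaster. A minimal sketch of a Wald-style SPRT stopping rule is below; the parameter values (success probabilities for masters and nonmasters, error rates) are illustrative assumptions, not figures taken from the article:

```python
import math

def sprt_mastery(responses, p_master=0.85, p_nonmaster=0.60,
                 alpha=0.05, beta=0.05):
    """Apply a Wald-style SPRT to a sequence of True/False item responses.

    p_master / p_nonmaster: assumed probability a master / nonmaster
    answers an item correctly (illustrative values).
    alpha / beta: tolerated error rates for the two misclassifications.
    Returns (decision, items_used), where decision is 'master',
    'nonmaster', or 'undecided' if the item pool runs out first.
    """
    upper = math.log((1 - beta) / alpha)   # crossing up -> nonmastery
    lower = math.log(beta / (1 - alpha))   # crossing down -> mastery
    llr = 0.0                              # log-likelihood ratio so far
    for n, correct in enumerate(responses, 1):
        if correct:
            llr += math.log(p_nonmaster / p_master)
        else:
            llr += math.log((1 - p_nonmaster) / (1 - p_master))
        if llr >= upper:
            return "nonmaster", n
        if llr <= lower:
            return "master", n
    return "undecided", len(responses)
```

With these settings a run of nine correct answers is enough to stop with a mastery decision, while four straight errors triggers nonmastery, which is the sense in which SPRT adjusts test length to the examinee rather than fixing it in advance.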
Peer reviewed
Frick, Theodore W. – Journal of Educational Computing Research, 1992
Discussion of expert systems and computerized adaptive tests describes two versions of EXSPRT, a new approach that combines uncertain inference in expert systems with sequential probability ratio test (SPRT) stopping rules. Results of two studies comparing EXSPRT to adaptive mastery testing based on item response theory and SPRT approaches are…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Expert Systems
Peer reviewed
Direct link
Clariana, Roy B.; Wallace, Patricia – Journal of Educational Computing Research, 2007
This proof-of-concept investigation describes a computer-based approach for deriving the knowledge structure of individuals and of groups from their written essays, and considers the convergent criterion-related validity of the computer-based scores relative to human rater essay scores and multiple-choice test scores. After completing a…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Construct Validity, Cognitive Structures