Showing 1 to 15 of 20 results
Peer reviewed
Ute Mertens; Marlit A. Lindner – Journal of Computer Assisted Learning, 2025
Background: Educational assessments increasingly shift towards computer-based formats. Many studies have explored how different types of automated feedback affect learning. However, few studies have investigated how digital performance feedback affects test takers' ratings of affective-motivational reactions during a testing session. Method: In…
Descriptors: Educational Assessment, Computer Assisted Testing, Automation, Feedback (Response)
Peer reviewed
Cui, Ying; Guo, Qi; Leighton, Jacqueline P.; Chu, Man-Wai – International Journal of Testing, 2020
This study explores the use of the Adaptive Neuro-Fuzzy Inference System (ANFIS), a neuro-fuzzy approach, to analyze log data from technology-based assessments, extract relevant features of student problem-solving processes, and develop and refine a set of fuzzy logic rules for interpreting student performance. The log data that…
Descriptors: Inferences, Artificial Intelligence, Data Analysis, Computer Assisted Testing
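The fuzzy-rule idea in the abstract above can be illustrated with a short Python sketch. This is not the authors' ANFIS model: the log features, membership ranges, and rule weights below are invented for illustration, and a Mamdani-style weighted average stands in for a trained neuro-fuzzy system.

    # Hypothetical fuzzy-rule scoring over two log-derived features.
    # Feature names, membership ranges, and rule weights are invented
    # for illustration; a real ANFIS would learn these from data.

    def tri(x, a, b, c):
        # Triangular membership function on [a, c], peaking at b.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzy_score(time_on_task, n_actions):
        # Fuzzify the two hypothetical features.
        slow = tri(time_on_task, 60, 180, 300)   # seconds
        fast = tri(time_on_task, 0, 30, 90)
        many = tri(n_actions, 10, 30, 60)
        few = tri(n_actions, 0, 5, 15)
        # Each rule: (antecedent strength via min-AND, consequent score).
        rules = [
            (min(fast, few), 0.2),    # rushed, little exploration -> low
            (min(slow, many), 0.9),   # deliberate, systematic -> high
            (min(fast, many), 0.6),   # quick but thorough -> medium-high
        ]
        num = sum(w * s for w, s in rules)
        den = sum(w for w, _ in rules)
        return num / den if den else 0.5  # weighted-average defuzzification

    print(fuzzy_score(time_on_task=150, n_actions=35))  # -> 0.9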
Peer reviewed
Kolski, Tammi; Weible, Jennifer – Online Journal of Distance Learning Administration, 2018
With increased pressure to maintain strong academic performance for future academic or occupational opportunities, students may experience test anxiety at some point in their higher education. For decades, empirical, observational research has been conducted to determine the psychological and physiological effects of test anxiety. This…
Descriptors: College Students, Test Anxiety, Video Technology, Supervision
Peer reviewed
Ha, Minsu; Nehm, Ross H. – Journal of Science Education and Technology, 2016
Automated computerized scoring systems (ACSSs) are being increasingly used to analyze text in many educational settings. Nevertheless, the impact of misspelled words (MSW) on scoring accuracy remains to be investigated in many domains, particularly jargon-rich disciplines such as the life sciences. Empirical studies confirm that MSW are a…
Descriptors: Spelling, Case Studies, Computer Uses in Education, Test Scoring Machines
Peer reviewed
PDF available on ERIC
Swiggett, Wanda D.; Kotloff, Laurie; Ezzo, Chelsea; Adler, Rachel; Oliveri, Maria Elena – ETS Research Report Series, 2014
The computer-based "Graduate Record Examinations"® ("GRE"®) revised General Test includes interactive item types and testing environment tools (e.g., test navigation, on-screen calculator, and help). How well do test takers understand these innovations? If test takers do not understand the new item types, these innovations may…
Descriptors: College Entrance Examinations, Graduate Study, Usability, Test Items
Peer reviewed
Thompson, James J.; Yang, Tong; Chauvin, Sheila W. – Applied Measurement in Education, 2009
In some professions, speed and accuracy are as important as requisite knowledge and skills. The availability of computer-based testing now facilitates examination of these two important aspects of student performance. We found that student response times in a conventional non-speeded multiple-choice test, at both the global and individual…
Descriptors: Reaction Time, Test Items, Student Reaction, Multiple Choice Tests
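As a hedged sketch of how response times and accuracy can be examined together at both levels mentioned in that abstract (not the authors' analysis; all data below are fabricated placeholders):

    # Fabricated data standing in for test logs: relate per-item
    # response times to correctness on a non-speeded multiple-choice test.
    import numpy as np

    rng = np.random.default_rng(0)
    n_students, n_items = 50, 20
    correct = rng.integers(0, 2, size=(n_students, n_items)).astype(float)
    # Assume incorrect answers take ~10 s longer on average, plus noise.
    times = 30 + 10 * (1 - correct) + rng.normal(0, 5, size=(n_students, n_items))

    # Global level: a student's mean response time vs. total score.
    r_global = np.corrcoef(times.mean(axis=1), correct.sum(axis=1))[0, 1]

    # Item level: point-biserial correlation of time with correctness.
    r_items = [np.corrcoef(times[:, j], correct[:, j])[0, 1]
               for j in range(n_items)]
    print(round(r_global, 3), np.round(r_items[:3], 3))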
Scharber, Cassandra; Dexter, Sara; Riedel, Eric – Journal of Technology, Learning, and Assessment, 2008
The purpose of this research is to analyze preservice teachers' use of and reactions to an automated essay scorer used within an online, case-based learning environment called ETIPS. Data analyzed include post-assignment surveys, a user log of students' actions within the cases, instructor-assigned scores on final essays, and interviews with four…
Descriptors: Test Scoring Machines, Essays, Student Experience, Automation
Peer reviewed
PDF available on ERIC
Kay, Robin H.; Knaack, Liesel – Canadian Journal of Learning and Technology, 2009
The purpose of this study was to examine individual differences in attitudes toward Audience Response Systems (ARSs) in secondary school classrooms. Specifically, the impact of gender, grade, subject area, computer comfort level, participation level, and type of use were examined in 659 students. Males had significantly more positive attitudes…
Descriptors: Audience Response, Gender Differences, Secondary School Students, Feedback (Response)
Peer reviewed
Hansen, Jo-Ida C.; Neuman, Jody L.; Haverkamp, Beth E.; Lubinski, Barbara R. – Measurement and Evaluation in Counseling and Development, 1997
Examined user reaction to computer-administered and paper-and-pencil-administered forms of the Strong Interest Inventory. Results indicate that user reactions to the two administration modes were reasonably similar in most areas. However, the computer group indicated more often that their version was easier to use and follow. (RJM)
Descriptors: College Students, Computer Assisted Testing, Higher Education, Interest Inventories
Peer reviewed
Shermis, Mark D.; Lombard, Danielle – Computers in Human Behavior, 1998
Examines the degree to which computer and test anxiety have a predictive role in performance across three computer-administered placement tests. Subjects (72 undergraduate students) were measured with the Computer Anxiety Rating Scale, the Test Anxiety Inventory, and the Myers-Briggs Type Indicator. Results suggest that much of what is considered…
Descriptors: Computer Anxiety, Computer Assisted Testing, Computer Attitudes, Computer Literacy
Peer reviewed
Heppner, Frank H.; And Others – Journal of Reading, 1985
Reports that reading performance on a standardized test is better when the text is displayed in print rather than on a computer display screen. (HOD)
Descriptors: Comparative Analysis, Computer Assisted Testing, Higher Education, Reading Comprehension
Peer reviewed
Shin, Jongho; Deno, Stanley L.; Robinson, Steven L.; Marston, Douglas – Remedial and Special Education, 2000
The predictive validity of active responding on a computer-based groupware system was examined with 48 second graders. Results showed that active responding correlated highly with initial and final performance measures and that active responding contributed significantly to predicting final performance when initial performance was controlled.…
Descriptors: Computer Assisted Testing, Computer Uses in Education, Grade 2, Performance Factors
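The incremental-validity logic in that abstract (does active responding still predict final performance once initial performance is controlled?) can be sketched as a two-step regression. The data here are simulated stand-ins, not the study's:

    # Simulated stand-in data: does active responding add predictive
    # power for final performance after controlling initial performance?
    import numpy as np

    rng = np.random.default_rng(1)
    n = 48  # matching the sample size reported above
    initial = rng.normal(50, 10, n)
    active = rng.normal(20, 5, n) + 0.3 * initial   # correlated predictors
    final = 0.6 * initial + 0.8 * active + rng.normal(0, 5, n)

    def r_squared(X, y):
        # Ordinary least squares with an intercept; returns R^2.
        X = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

    r2_step1 = r_squared(initial.reshape(-1, 1), final)   # initial only
    r2_step2 = r_squared(np.column_stack([initial, active]), final)
    print(f"delta R^2 from active responding: {r2_step2 - r2_step1:.3f}")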
Peer reviewed
Gretes, John A.; Green, Michael – Journal of Research on Computing in Education, 2000
Reports on the use of computerized practice tests in an undergraduate education course. In study one, students who took computerized practice exams averaged one-half letter grade higher than students who did not take computerized practice exams, and they exhibited positive attitudes toward their practice experiences. Study two replicated study one…
Descriptors: Academic Achievement, Computer Assisted Testing, Computer Uses in Education, Evaluation Methods
Peer reviewed
Fuchs, Lynn S.; And Others – Education and Treatment of Children, 1989
Nineteen learning-disabled and 10 emotionally disturbed pupils, aged 8-15, were assigned randomly to computer-assisted curriculum-based measurement (CBM), pen and paper CBM, and contrast groups. CBM students exhibited more specific knowledge of their spelling goals and felt they received more frequent and more graphic displays of performance…
Descriptors: Computer Assisted Testing, Elementary Education, Emotional Disturbances, Evaluation Methods
Peer reviewed
Bull, Joanna; Stephens, Derek – Innovations in Education and Training International, 1999
Describes two methods of introducing computer-assisted assessment (CAA) using Question Mark software. The University of Luton uses CAA for summative assessment; Loughborough University uses CAA mainly for formative assessment. Both have central units to provide support and development for staff wishing to use CAA in instruction. Students at each…
Descriptors: Computer Assisted Testing, Computer Software, Computer Uses in Education, Educational Assessment