Showing 1 to 15 of 29 results
Peer reviewed
Ute Mertens; Marlit A. Lindner – Journal of Computer Assisted Learning, 2025
Background: Educational assessments increasingly shift towards computer-based formats. Many studies have explored how different types of automated feedback affect learning. However, few studies have investigated how digital performance feedback affects test takers' ratings of affective-motivational reactions during a testing session. Method: In…
Descriptors: Educational Assessment, Computer Assisted Testing, Automation, Feedback (Response)
Peer reviewed
Cui, Ying; Guo, Qi; Leighton, Jacqueline P.; Chu, Man-Wai – International Journal of Testing, 2020
This study explores the use of the Adaptive Neuro-Fuzzy Inference System (ANFIS), a neuro-fuzzy approach, to analyze the log data of technology-based assessments to extract relevant features of student problem-solving processes, and develop and refine a set of fuzzy logic rules that could be used to interpret student performance. The log data that…
Descriptors: Inferences, Artificial Intelligence, Data Analysis, Computer Assisted Testing
Peer reviewed
Kolski, Tammi; Weible, Jennifer – Online Journal of Distance Learning Administration, 2018
With increased pressure to maintain stellar academic performance for future academic or occupational opportunities, students may suffer test anxiety at some point in their higher education journey. For decades, empirical, observational research has been conducted to determine the psychological and physiological effects of test anxiety. This…
Descriptors: College Students, Test Anxiety, Video Technology, Supervision
Peer reviewed
Ha, Minsu; Nehm, Ross H. – Journal of Science Education and Technology, 2016
Automated computerized scoring systems (ACSSs) are being increasingly used to analyze text in many educational settings. Nevertheless, the impact of misspelled words (MSW) on scoring accuracy remains to be investigated in many domains, particularly jargon-rich disciplines such as the life sciences. Empirical studies confirm that MSW are a…
Descriptors: Spelling, Case Studies, Computer Uses in Education, Test Scoring Machines
Peer reviewed
Swiggett, Wanda D.; Kotloff, Laurie; Ezzo, Chelsea; Adler, Rachel; Oliveri, Maria Elena – ETS Research Report Series, 2014
The computer-based "Graduate Record Examinations"® ("GRE"®) revised General Test includes interactive item types and testing environment tools (e.g., test navigation, on-screen calculator, and help). How well do test takers understand these innovations? If test takers do not understand the new item types, these innovations may…
Descriptors: College Entrance Examinations, Graduate Study, Usability, Test Items
Peer reviewed
Thompson, James J.; Yang, Tong; Chauvin, Sheila W. – Applied Measurement in Education, 2009
In some professions, speed and accuracy are as important as acquired requisite knowledge and skills. The availability of computer-based testing now facilitates examination of these two important aspects of student performance. We found that student response times in a conventional non-speeded multiple-choice test, at both the global and individual…
Descriptors: Reaction Time, Test Items, Student Reaction, Multiple Choice Tests
Peer reviewed
Lau, Paul Ngee Kiong; Lau, Sie Hoe; Hong, Kian Sam; Usop, Hasbee – Educational Technology & Society, 2011
The number right (NR) method, in which students pick one option as the answer, is the conventional method for scoring multiple-choice tests; it is heavily criticized for encouraging students to guess and for failing to credit partial knowledge. In addition, computer technology is increasingly used in classroom assessment. This paper investigates the…
Descriptors: Guessing (Tests), Multiple Choice Tests, Computers, Scoring
Peer reviewed
Jordan, Sally – Practitioner Research in Higher Education, 2009
Feedback on assessment tasks has an important part to play in underpinning student learning. Online assessment enables instantaneous feedback to be given so that the student can act on it immediately. However, concern has been expressed that e-assessment tasks (especially multiple-choice questions) can encourage surface-learning. Several projects…
Descriptors: Computer Assisted Testing, Feedback (Response), Accuracy, Computer Uses in Education
Scharber, Cassandra; Dexter, Sara; Riedel, Eric – Journal of Technology, Learning, and Assessment, 2008
The purpose of this research is to analyze preservice teachers' use of and reactions to an automated essay scorer used within an online, case-based learning environment called ETIPS. Data analyzed include post-assignment surveys, a user log of students' actions within the cases, instructor-assigned scores on final essays, and interviews with four…
Descriptors: Test Scoring Machines, Essays, Student Experience, Automation
Peer reviewed
Kay, Robin H.; Knaack, Liesel – Canadian Journal of Learning and Technology, 2009
The purpose of this study was to examine individual differences in attitudes toward Audience Response Systems (ARSs) in secondary school classrooms. Specifically, the impact of gender, grade, subject area, computer comfort level, participation level, and type of use were examined in 659 students. Males had significantly more positive attitudes…
Descriptors: Audience Response, Gender Differences, Secondary School Students, Feedback (Response)
Peer reviewed
Hansen, Jo-Ida C.; Neuman, Jody L.; Haverkamp, Beth E.; Lubinski, Barbara R. – Measurement and Evaluation in Counseling and Development, 1997
Examined user reaction to computer-administered and paper-and-pencil-administered forms of the Strong Interest Inventory. Results indicate that user reactions to the two administration modes were reasonably similar in most areas. However, the computer group indicated more often that their version was easier to use and follow. (RJM)
Descriptors: College Students, Computer Assisted Testing, Higher Education, Interest Inventories
Peer reviewed
Shermis, Mark D.; Lombard, Danielle – Computers in Human Behavior, 1998
Examines the degree to which computer and test anxiety have a predictive role in performance across three computer-administered placement tests. Subjects (72 undergraduate students) were measured with the Computer Anxiety Rating Scale, the Test Anxiety Inventory, and the Myers-Briggs Type Indicator. Results suggest that much of what is considered…
Descriptors: Computer Anxiety, Computer Assisted Testing, Computer Attitudes, Computer Literacy
Peer reviewed
Rocklin, Thomas R. – Applied Measurement in Education, 1994
Reviews the effects of self-adapted testing (SAT), in which examinees choose the difficulty of items themselves, on ability estimates, precision, and efficiency, along with the mechanisms of SAT effects and examinee reactions to SAT. SAT is less efficient than computer-adaptive testing but more efficient than fixed-item testing. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Difficulty Level
Peer reviewed
Heppner, Frank H.; And Others – Journal of Reading, 1985
Reports that reading performance on a standardized test is better when the text is displayed in print, rather than on a computer display screen. (HOD)
Descriptors: Comparative Analysis, Computer Assisted Testing, Higher Education, Reading Comprehension
Gibbs, William J. – 1995
TestMaker is a project in which computer-based programs are being developed to help educators create tests. The program was designed as an instructional and developmental tool for teacher-education students. TestMaker consists of four modules: Advisement; Test Creation; Student Test; and Presentation Analysis. The Advisement module runs…
Descriptors: Computer Assisted Testing, Computer Software Development, Distance Education, Educational Technology