Showing 181 to 195 of 1,057 results
Peer reviewed
PDF on ERIC: Download full text
Gorbett, Luke J.; Chapman, Kayla E.; Liberatore, Matthew W. – Advances in Engineering Education, 2022
Spreadsheets are a core computational tool for practicing engineers and engineering students. While Microsoft Excel, Google Sheets, and other spreadsheet tools have some differences, numerous formulas, functions, and other tasks are common across versions and platforms. Building upon learning science frameworks showing that interactive activities…
Descriptors: Spreadsheets, Computer Software, Engineering Education, Textbooks
Peer reviewed
Direct link
Goolsby-Cole, Cody; Bass, Sarah M.; Stanwyck, Liz; Leupen, Sarah; Carpenter, Tara S.; Hodges, Linda C. – Journal of College Science Teaching, 2023
During the pandemic, the use of question pools for online testing was recommended to mitigate cheating, exposing multitudes of science, technology, engineering, and mathematics (STEM) students across the globe to this practice. Yet instructors may be unfamiliar with the ways that seemingly small changes between questions in a pool can expose…
Descriptors: Science Instruction, Computer Assisted Testing, Cheating, STEM Education
Peer reviewed
Direct link
Patra, Rakesh; Saha, Sujan Kumar – Education and Information Technologies, 2019
Assessment plays an important role in learning, and Multiple Choice Questions (MCQs) are quite popular in large-scale evaluations. Technology-enabled learning necessitates smart assessment; accordingly, automatic MCQ generation has become increasingly popular over the last two decades. Despite a large amount of research effort, system generated MCQs are…
Descriptors: Multiple Choice Tests, High Stakes Tests, Semantics, Evaluation Methods
Peer reviewed
PDF on ERIC: Download full text
Nazaretsky, Tanya; Hershkovitz, Sara; Alexandron, Giora – International Educational Data Mining Society, 2019
Sequencing items in adaptive learning systems typically relies on a large pool of interactive question items that are analyzed into a hierarchy of skills, also known as Knowledge Components (KCs). Educational data mining techniques can be used to analyze students' response data in order to optimize the mapping of items to KCs, with similarity-based…
Descriptors: Intelligent Tutoring Systems, Item Response Theory, Measurement, Testing
Peer reviewed
Direct link
Mehri Izadi; Maliheh Izadi; Farrokhlagha Heidari – Education and Information Technologies, 2024
In today's environment of growing class sizes due to the prevalence of online and e-learning systems, providing one-to-one instruction and feedback has become a challenging task for teachers. However, the dialectical integration of instruction and assessment into a seamless and dynamic activity can provide a continuous flow of assessment…
Descriptors: Adaptive Testing, Computer Assisted Testing, English (Second Language), Second Language Learning
Peer reviewed
Direct link
Choe, Edison M.; Kern, Justin L.; Chang, Hua-Hua – Journal of Educational and Behavioral Statistics, 2018
Despite common operationalization, measurement efficiency of computerized adaptive testing should not only be assessed in terms of the number of items administered but also the time it takes to complete the test. To this end, a recent study introduced a novel item selection criterion that maximizes Fisher information per unit of expected response…
Descriptors: Computer Assisted Testing, Reaction Time, Item Response Theory, Test Items
Peer reviewed
PDF on ERIC: Download full text
Viskotová, Lenka; Hampel, David – Mathematics Teaching Research Journal, 2022
Computer-aided assessment is an important tool that reduces the workload of teachers and increases the efficiency of their work. The multiple-choice test is considered to be one of the most common forms of computer-aided testing, and its application for mid-term testing has indisputable advantages. For the purposes of a high-quality and responsible…
Descriptors: Undergraduate Students, Mathematics Tests, Computer Assisted Testing, Faculty Workload
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Annenberg Institute for School Reform at Brown University, 2022
Analyses that reveal how treatment effects vary allow researchers, practitioners, and policymakers to better understand the efficacy of educational interventions. In practice, however, standard statistical methods for addressing Heterogeneous Treatment Effects (HTE) fail to address the HTE that may exist within outcome measures. In this study, we…
Descriptors: Item Response Theory, Models, Formative Evaluation, Statistical Inference
Jing Lu; Chun Wang; Jiwei Zhang; Xue Wang – Grantee Submission, 2023
Changepoints are abrupt variations in a sequence of data in statistical inference. In educational and psychological assessments, it is pivotal to properly differentiate examinees' aberrant behaviors from solution behavior to ensure test reliability and validity. In this paper, we propose a sequential Bayesian changepoint detection algorithm to…
Descriptors: Bayesian Statistics, Behavior Patterns, Computer Assisted Testing, Accuracy
Albano, Anthony D.; McConnell, Scott R.; Lease, Erin M.; Cai, Liuhan – Grantee Submission, 2020
Research has shown that the context of practice tasks can have a significant impact on learning, with long-term retention and transfer improving when tasks of different types are mixed by interleaving (abcabcabc) compared with grouping together in blocks (aaabbbccc). This study examines the influence of context via interleaving from a psychometric…
Descriptors: Context Effect, Test Items, Preschool Children, Computer Assisted Testing
Peer reviewed
Direct link
Russell, Michael; Moncaleano, Sebastian – Educational Assessment, 2019
Over the past decade, large-scale testing programs have employed technology-enhanced items (TEI) to improve the fidelity with which an item measures a targeted construct. This paper presents findings from a review of released TEIs employed by large-scale testing programs worldwide. Analyses examine the prevalence with which different types of TEIs…
Descriptors: Computer Assisted Testing, Fidelity, Elementary Secondary Education, Test Items
Peer reviewed
Direct link
Rafatbakhsh, Elaheh; Ahmadi, Alireza; Moloodi, Amirsaeid; Mehrpour, Saeed – Educational Measurement: Issues and Practice, 2021
Test development is a crucial, yet difficult and time-consuming part of any educational system, and the task often falls entirely on teachers. Automatic item generation systems have recently drawn attention as they can reduce this burden and make test development more convenient. Such systems have been developed to generate items for vocabulary,…
Descriptors: Test Construction, Test Items, Computer Assisted Testing, Multiple Choice Tests
Peer reviewed
Direct link
Zhang, Lishan; VanLehn, Kurt – Interactive Learning Environments, 2021
Despite their drawbacks, multiple-choice questions are an enduring feature in instruction because they can be answered more rapidly than open response questions and they are easily scored. However, it can be difficult to generate good incorrect choices (called "distractors"). We designed an algorithm to generate distractors from a…
Descriptors: Semantics, Networks, Multiple Choice Tests, Teaching Methods
Peer reviewed
Direct link
Cappaert, Kevin J.; Wen, Yao; Chang, Yu-Feng – Measurement: Interdisciplinary Research and Perspectives, 2018
Events such as curriculum changes or practice effects can lead to item parameter drift (IPD) in computer adaptive testing (CAT). The current investigation introduced a point- and weight-adjusted D² method for IPD detection for use in a CAT environment when items are suspected of drifting across test administrations. Type I error and…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Identification
Peer reviewed
PDF on ERIC: Download full text
Lynch, Sarah – Practical Assessment, Research & Evaluation, 2022
In today's digital age, tests are increasingly being delivered on computers. Many of these computer-based tests (CBTs) have been adapted from paper-based tests (PBTs). However, this change in mode of test administration has the potential to introduce construct-irrelevant variance, affecting the validity of score interpretations. Because of this,…
Descriptors: Computer Assisted Testing, Tests, Scores, Scoring