Showing 1 to 15 of 300 results
Peer reviewed
Direct link
Ebru Balta; Celal Deha Dogan – SAGE Open, 2024
As computer-based testing becomes more prevalent, the attention paid to response time (RT) in assessment practice and psychometric research correspondingly increases. This study explores the Type I error rate and power of the Kullback-Leibler (KL) divergence measure and the L person-fit statistic in detecting preknowledge cheating behaviors…
Descriptors: Cheating, Accuracy, Reaction Time, Computer Assisted Testing
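The KL divergence measure named above works by comparing an examinee's observed response-time behavior against a model of normal (non-preknowledge) responding. As a minimal sketch only, assuming log-normal RT models fitted for the examinee and a reference population (a common psychometric convention, not a detail given in this abstract), the divergence has a closed form:

```python
import numpy as np

def kl_lognormal(mu_p, sigma_p, mu_q, sigma_q):
    """KL(P || Q) between two log-normal RT models, which equals the KL
    divergence of the underlying normal distributions on log-RTs."""
    return (np.log(sigma_q / sigma_p)
            + (sigma_p**2 + (mu_p - mu_q)**2) / (2 * sigma_q**2)
            - 0.5)

# Invented data: an examinee responding markedly faster than the reference
# population produces a large divergence, which would be compared against a
# null-distribution threshold (that thresholding step is where the Type I
# error rates studied in the article come into play).
rts_examinee = np.array([1.2, 0.9, 1.1, 0.8, 1.0])   # seconds, illustrative
rts_reference = np.array([3.5, 4.1, 2.9, 3.8, 3.3])

mu_p, sigma_p = np.log(rts_examinee).mean(), np.log(rts_examinee).std(ddof=1)
mu_q, sigma_q = np.log(rts_reference).mean(), np.log(rts_reference).std(ddof=1)
print(kl_lognormal(mu_p, sigma_p, mu_q, sigma_q))
```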
Peer reviewed
Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty rely frequently on human judgments or shallow metrics, often failing to accurately capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension
Peer reviewed
Direct link
Jyoti Prakash Meher; Rajib Mall – IEEE Transactions on Education, 2025
Contribution: This article suggests a novel method for diagnosing a learner's cognitive proficiency using deep neural networks (DNNs) based on her answers to a series of questions. The outcome of the forecast can be used for adaptive assistance. Background: Often a learner spends a considerable amount of time attempting questions on the concepts…
Descriptors: Cognitive Ability, Assistive Technology, Adaptive Testing, Computer Assisted Testing
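The snippet does not specify the network architecture; as a rough, hypothetical sketch of the general idea, a small feed-forward network could map a learner's scored answer vector to per-concept proficiency estimates (the layer sizes, concept count, and PyTorch framing are all assumptions, not details from the article):

```python
import torch
import torch.nn as nn

# Hypothetical setup: 50 scored answers (1 = correct, 0 = wrong) predicting
# proficiency on 5 underlying concepts. The model is shown untrained;
# fitting it to logged learner responses is omitted.
model = nn.Sequential(
    nn.Linear(50, 64),   # answer vector -> hidden representation
    nn.ReLU(),
    nn.Linear(64, 5),    # hidden representation -> per-concept logits
    nn.Sigmoid(),        # squash to a 0-1 proficiency score per concept
)

answers = torch.randint(0, 2, (1, 50)).float()  # one learner's responses
proficiency = model(answers)  # low scores could trigger adaptive assistance
print(proficiency)
```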
Peer reviewed
Direct link
Aditya Shah; Ajay Devmane; Mehul Ranka; Prathamesh Churi – Education and Information Technologies, 2024
Online learning has grown due to advances in technology and its flexibility. Online examinations measure students' knowledge and skills. Traditional question papers suffer from inconsistent difficulty levels, arbitrary question allocation, and poor grading. The suggested model calibrates question paper difficulty based on student performance to…
Descriptors: Computer Assisted Testing, Difficulty Level, Grading, Test Construction
Peer reviewed
Direct link
Lahza, Hatim; Smith, Tammy G.; Khosravi, Hassan – British Journal of Educational Technology, 2023
Traditional item analyses such as classical test theory (CTT) use exam-taker responses to assessment items to approximate the items' difficulty and discrimination. The increased adoption by educational institutions of electronic assessment platforms (EAPs) provides new avenues for assessment analytics by capturing detailed logs of an exam-taker's…
Descriptors: Medical Students, Evaluation, Computer Assisted Testing, Time Factors (Learning)
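For context, the CTT statistics referred to above are simple to compute from scored responses: item difficulty is the proportion of exam-takers answering correctly, and discrimination is commonly taken as the item's point-biserial correlation with the rest-score. A minimal sketch with invented data:

```python
import numpy as np

# Rows = exam-takers, columns = items; 1 = correct, 0 = incorrect (invented).
scores = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
])

difficulty = scores.mean(axis=0)  # CTT "p-value": proportion correct per item

# Point-biserial discrimination: correlate each item with the rest-score
# (total minus the item itself, so the item does not correlate with itself).
totals = scores.sum(axis=1)
discrimination = np.array([
    np.corrcoef(scores[:, j], totals - scores[:, j])[0, 1]
    for j in range(scores.shape[1])
])
print(difficulty, discrimination)
```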
Peer reviewed
Direct link
Lae Lae Shwe; Sureena Matayong; Suntorn Witosurapot – Education and Information Technologies, 2024
Multiple Choice Questions (MCQs) are an important evaluation technique for both examinations and learning activities. However, the manual creation of questions is time-consuming and challenging for teachers. Hence, there is a notable demand for an Automatic Question Generation (AQG) system. Several systems have been created for this aim, but the…
Descriptors: Difficulty Level, Computer Assisted Testing, Adaptive Testing, Multiple Choice Tests
Peer reviewed
Direct link
Moon, Jung Aa; Lindner, Marlit Annalena; Arslan, Burcu; Keehner, Madeleine – Educational Measurement: Issues and Practice, 2022
Many test items use both an image and text, but present them in a spatially separate manner. This format could potentially cause a split-attention effect in which the test taker's cognitive load is increased by having to split attention between the image and text, while mentally integrating the two sources of information. We investigated the…
Descriptors: Computer Assisted Testing, Cognitive Processes, Difficulty Level, Attention
Peer reviewed
Direct link
Shen, Jing; Wu, Jingwei – Journal of Speech, Language, and Hearing Research, 2022
Purpose: This study examined the performance difference between remote and in-laboratory test modalities with a speech recognition in noise task in older and younger adults. Method: Four groups of participants (younger remote, younger in-laboratory, older remote, and older in-laboratory) were tested on a speech recognition in noise protocol with…
Descriptors: Age Differences, Test Format, Computer Assisted Testing, Auditory Perception
Peer reviewed
Direct link
Gruss, Richard; Clemons, Josh – Journal of Computer Assisted Learning, 2023
Background: The sudden growth in online instruction due to COVID-19 restrictions has given renewed urgency to questions about remote learning that have remained unresolved. Web-based assessment software provides instructors an array of options for varying testing parameters, but the pedagogical impacts of some of these variations have yet to be…
Descriptors: Test Items, Test Format, Computer Assisted Testing, Mathematics Tests
Peer reviewed
Direct link
Andrea Révész; Hyeonjeong Jeong; Shungo Suzuki; Haining Cui; Shunsui Matsuura; Kazuya Saito; Motoaki Sugiura – Studies in Second Language Acquisition, 2024
The last three decades have seen significant development in understanding and describing the effects of task complexity on learner internal processes. However, researchers have primarily employed behavioral methods to investigate task-generated cognitive load. Being the first to adopt neuroimaging to study second language (L2) task effects, we…
Descriptors: Foreign Countries, English (Second Language), Second Language Learning, Decision Making Skills
Peer reviewed
Direct link
Spino, LeAnne L.; Echevarría, Megan M.; Wu, Yu – Foreign Language Annals, 2022
The ACTFL Oral Proficiency Interview--computer (OPIc) employs a self-assessment instrument to determine the nature of the speaking prompts to which the test taker will respond and, thus, the difficulty of the test. Grounded in research demonstrating varying levels of accuracy in self-assessment among language learners, this study examines the…
Descriptors: Computer Assisted Testing, Oral Language, Language Proficiency, Self Evaluation (Individuals)
Peer reviewed
Direct link
Pengelley, James; Whipp, Peter R.; Rovis-Hermann, Nina – Educational Psychology Review, 2023
The aim of the present study is to reconcile previous findings (a) that testing mode has no effect on test outcomes or cognitive load (Comput Hum Behav 77:1-10, 2017) and (b) that younger learners' working memory processes are more sensitive to computer-based test formats (J Psychoeduc Assess 37(3):382-394, 2019). We addressed key methodological…
Descriptors: Scores, Cognitive Processes, Difficulty Level, Secondary School Students
Peer reviewed
Direct link
Haoxin Xu; Tianrun Deng; Xianlong Xu; Xiaoqing Gu; Lingyun Huang; Haoran Xie; Minhong Wang – Education and Information Technologies, 2025
In the 21st century, the urgent educational demand for cultivating complex skills in vocational training and learning has been met by the four-component instructional design model. Despite its success, research has identified a notable gap in its treatment of formative assessment, particularly within computer-supported frameworks…
Descriptors: Models, Instructional Design, Computer Assisted Testing, Formative Evaluation
Peer reviewed
Direct link
Cassondra M. Eng; Aria Tsegai-Moore; Anna V. Fisher – Grantee Submission, 2024
Computerized assessments and digital games have become more prevalent in childhood, necessitating a systematic investigation of the effects of gamified executive function assessments on performance and engagement. This study examined the feasibility of incorporating gamification and a machine learning algorithm that adapts task difficulty to…
Descriptors: Preschool Children, Preschool Curriculum, Preschool Education, Preschool Tests
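The snippet does not describe the study's adaptation algorithm. As a purely illustrative stand-in (a fixed staircase rule, not the machine learning method the article reports), performance-contingent difficulty adjustment boils down to raising difficulty after success and lowering it after failure:

```python
def adapt_difficulty(level, correct, step=1, lo=1, hi=10):
    """One-up/one-down staircase: raise task difficulty after a correct
    response, lower it after an error, clamped to the available range."""
    level += step if correct else -step
    return max(lo, min(hi, level))

# Illustrative run over a sequence of a child's responses.
level = 5
for correct in [True, True, False, True, False, False]:
    level = adapt_difficulty(level, correct)
    print(level)
```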
Peer reviewed
Direct link
Wyse, Adam E.; McBride, James R. – Journal of Educational Measurement, 2021
A key consideration when giving any computerized adaptive test (CAT) is how much adaptation is present when the test is used in practice. This study introduces a new framework to measure the amount of adaptation of Rasch-based CATs based on looking at the differences between the selected item locations (Rasch item difficulty parameters) of the…
Descriptors: Item Response Theory, Computer Assisted Testing, Adaptive Testing, Test Items
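In a Rasch-based CAT, tight adaptation means administering items whose difficulty parameters sit close to the examinee's provisional ability estimates. As a generic illustration (not necessarily the exact statistic the article introduces), the mean absolute gap between the two captures this idea:

```python
import numpy as np

def mean_adaptation_gap(thetas, b_selected):
    """Mean |theta - b| across administered items; smaller values mean the
    CAT selected items closer to provisional ability, i.e. more adaptation."""
    return float(np.mean(np.abs(np.asarray(thetas) - np.asarray(b_selected))))

# Hypothetical interim ability estimates and the Rasch difficulties of the
# items actually administered at each step.
thetas = [0.0, 0.4, 0.7, 0.9]
b_selected = [0.2, 0.5, 0.5, 1.0]
print(mean_adaptation_gap(thetas, b_selected))
```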