Showing 1 to 15 of 67 results
Peer reviewed
Lae Lae Shwe; Sureena Matayong; Suntorn Witosurapot – Education and Information Technologies, 2024
Multiple Choice Questions (MCQs) are an important evaluation technique for both examinations and learning activities. However, the manual creation of questions is time-consuming and challenging for teachers. Hence, there is a notable demand for an Automatic Question Generation (AQG) system. Several systems have been created for this aim, but the…
Descriptors: Difficulty Level, Computer Assisted Testing, Adaptive Testing, Multiple Choice Tests
Peer reviewed
Jyoti Prakash Meher; Rajib Mall – IEEE Transactions on Education, 2025
Contribution: This article suggests a novel method for diagnosing a learner's cognitive proficiency using deep neural networks (DNNs) based on her answers to a series of questions. The outcome of the forecast can be used for adaptive assistance. Background: Often a learner spends considerable amounts of time in attempting questions on the concepts…
Descriptors: Cognitive Ability, Assistive Technology, Adaptive Testing, Computer Assisted Testing
Peer reviewed
Nathaniel Owen; Ananda Senel – Review of Education, 2025
Transparency in high-stakes English language assessment has become crucial for ensuring fairness and maintaining assessment validity in language testing. However, our understanding of how transparency is conceptualised and implemented remains fragmented, particularly in relation to stakeholder experiences and technological innovations. This study…
Descriptors: Accountability, High Stakes Tests, Language Tests, Computer Assisted Testing
Peer reviewed
Guozhu Ding; Mailin Li; Shan Li; Hao Wu – Asia Pacific Education Review, 2025
This study investigated the optimal feedback intervals for tasks of varying difficulty levels in online testing and whether task difficulty moderates the effect of feedback intervals on student performance. A pre-experimental study with 36 students was conducted to determine the delayed time for providing feedback based on student behavioral data.…
Descriptors: Feedback (Response), Academic Achievement, Computer Assisted Testing, Intervals
Peer reviewed
Ebru Balta; Celal Deha Dogan – SAGE Open, 2024
As computer-based testing becomes more prevalent, the attention paid to response time (RT) in assessment practice and psychometric research correspondingly increases. This study explores the rate of Type I error in detecting preknowledge cheating behaviors, the power of the Kullback-Leibler (KL) divergence measure, and the L person fit statistic…
Descriptors: Cheating, Accuracy, Reaction Time, Computer Assisted Testing
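The abstract above names the Kullback-Leibler (KL) divergence as a measure for flagging preknowledge cheating from response times. A minimal sketch of the discrete KL divergence over binned response-time distributions follows; the bins, reference distribution, and flagging rule are illustrative assumptions, not the study's actual statistic.

```python
import math

def kl_divergence(p, q):
    """Discrete Kullback-Leibler divergence D(P || Q).

    p, q: probability distributions over the same bins
    (here, binned response times). Assumes q[i] > 0 wherever p[i] > 0.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Illustrative comparison: an examinee whose response-time distribution
# is shifted toward fast answers diverges from the honest reference.
honest = [0.1, 0.2, 0.4, 0.3]    # reference RT distribution (assumed)
examinee = [0.5, 0.3, 0.1, 0.1]  # unusually fast responses
divergence = kl_divergence(examinee, honest)
```

The divergence is zero only when the two distributions match, so larger values mark examinees whose timing behavior departs from the reference group.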
Ashish Gurung; Kirk Vanacore; Andrew A. McReynolds; Korinn S. Ostrow; Eamon S. Worden; Adam C. Sales; Neil T. Heffernan – Grantee Submission, 2024
Learning experience designers consistently balance the trade-off between open and close-ended activities. The growth and scalability of Computer Based Learning Platforms (CBLPs) have only magnified the importance of these design trade-offs. CBLPs often utilize close-ended activities (i.e. Multiple-Choice Questions [MCQs]) due to feasibility…
Descriptors: Multiple Choice Tests, Testing, Test Format, Computer Assisted Testing
Peer reviewed
Kárász, Judit T.; Széll, Krisztián; Takács, Szabolcs – Quality Assurance in Education: An International Perspective, 2023
Purpose: Based on the general formula, which depends on the length and difficulty of the test, the number of respondents and the number of ability levels, this study aims to provide a closed formula for the adaptive tests with medium difficulty (probability of solution is p = 1/2) to determine the accuracy of the parameters for each item and in…
Descriptors: Test Length, Probability, Comparative Analysis, Difficulty Level
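The abstract describes a closed formula, not shown in the snippet, for parameter accuracy at medium difficulty (p = 1/2). As background only, the standard error of an observed proportion is maximal at p = 1/2, where it reduces to 1 / (2 * sqrt(n)); this is a textbook identity, not the authors' formula.

```python
import math

def se_proportion(p, n):
    """Standard error of an observed proportion with n respondents."""
    return math.sqrt(p * (1 - p) / n)

# At medium difficulty (p = 1/2) the error is maximal and simplifies
# to 1 / (2 * sqrt(n)): with 100 respondents, 0.05.
se_medium = se_proportion(0.5, 100)
```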
Peer reviewed
Betts, Joe; Muntean, William; Kim, Doyoung; Kao, Shu-chuan – Educational and Psychological Measurement, 2022
The multiple response structure can underlie several different technology-enhanced item types. With the increased use of computer-based testing, multiple response items are becoming more common. This response type holds the potential for being scored polytomously for partial credit. However, there are several possible methods for computing raw…
Descriptors: Scoring, Test Items, Test Format, Raw Scores
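The abstract notes several possible methods for computing raw scores on multiple-response items. A sketch of two common rules, dichotomous (all-or-nothing) versus per-option partial credit, is below; the option set and answer key are hypothetical, and the paper's own scoring methods are not given in the snippet.

```python
def dichotomous_score(selected, key):
    """All-or-nothing: 1 only if the selected set exactly matches the key."""
    return 1.0 if set(selected) == set(key) else 0.0

def partial_credit_score(selected, key, options):
    """Per-option partial credit: each option counts when the examinee's
    choice (selected or not) agrees with the key; normalized to [0, 1]."""
    selected, key = set(selected), set(key)
    agree = sum(1 for o in options if (o in selected) == (o in key))
    return agree / len(options)

options = ["A", "B", "C", "D", "E"]
key = ["A", "C"]
# Examinee picked A and D: misses C, wrongly adds D.
# Per-option rule agrees on A, B, and E, giving 3/5.
partial = partial_credit_score(["A", "D"], key, options)
```

The choice between rules matters psychometrically: dichotomous scoring discards the information that a near-miss response carries, which is exactly the trade-off the article examines.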
Sherwin E. Balbuena – Online Submission, 2024
This study introduces a new chi-square test statistic for testing the equality of response frequencies among distracters in multiple-choice tests. The formula uses the information from the number of correct answers and wrong answers, which becomes the basis of calculating the expected values of response frequencies per distracter. The method was…
Descriptors: Multiple Choice Tests, Statistics, Test Validity, Testing
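The abstract describes a chi-square statistic whose expected frequencies per distracter are derived from the wrong-answer count. A minimal sketch under the assumption that wrong answers are expected to spread evenly across distracters follows; the study's exact formula is not shown in the snippet.

```python
def distracter_chi_square(distracter_counts):
    """Chi-square goodness-of-fit for equality of response frequencies
    among distracters. The expected count per distracter is the total
    number of wrong answers divided by the number of distracters."""
    wrong_total = sum(distracter_counts)
    k = len(distracter_counts)
    expected = wrong_total / k
    return sum((obs - expected) ** 2 / expected for obs in distracter_counts)

# 60 wrong answers over the three distracters of a 4-option MCQ
# (counts are hypothetical): expected 20 each.
stat = distracter_chi_square([30, 20, 10])
```

A large statistic suggests the distracters are not equally attractive, which is one signal of a non-functioning distracter.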
Peer reviewed
PDF on ERIC
Ayfer Sayin; Sabiha Bozdag; Mark J. Gierl – International Journal of Assessment Tools in Education, 2023
The purpose of this study is to generate non-verbal items for a visual reasoning test using templated-based automatic item generation (AIG). The fundamental research method involved following the three stages of template-based AIG. An item from the 2016 4th-grade entrance exam of the Science and Art Center (known as BILSEM) was chosen as the…
Descriptors: Test Items, Test Format, Nonverbal Tests, Visual Measures
Peer reviewed
Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty rely frequently on human judgments or shallow metrics, often failing to accurately capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension
Peer reviewed
Kam, Chester Chun Seng – Educational and Psychological Measurement, 2023
When constructing measurement scales, regular and reversed items are often used (e.g., "I am satisfied with my job"/"I am not satisfied with my job"). Some methodologists recommend excluding reversed items because they are more difficult to understand and therefore engender a second, artificial factor distinct from the…
Descriptors: Test Items, Difficulty Level, Test Construction, Construct Validity
Peer reviewed
Mehri Izadi; Maliheh Izadi; Farrokhlagha Heidari – Education and Information Technologies, 2024
In today's environment of growing class sizes due to the prevalence of online and e-learning systems, providing one-to-one instruction and feedback has become a challenging task for teachers. Anyhow, the dialectical integration of instruction and assessment into a seamless and dynamic activity can provide a continuous flow of assessment…
Descriptors: Adaptive Testing, Computer Assisted Testing, English (Second Language), Second Language Learning
Xue, Kang; Huggins-Manley, Anne Corinne; Leite, Walter – Educational and Psychological Measurement, 2022
In data collected from virtual learning environments (VLEs), item response theory (IRT) models can be used to guide the ongoing measurement of student ability. However, such applications of IRT rely on unbiased item parameter estimates associated with test items in the VLE. Without formal piloting of the items, one can expect a large amount of…
Descriptors: Virtual Classrooms, Artificial Intelligence, Item Response Theory, Item Analysis
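The abstract concerns biased IRT item parameter estimates in virtual learning environments. As a reference point, the standard two-parameter logistic (2PL) model below shows how the discrimination and difficulty parameters enter the probability of a correct response; the specific model used in the article is not stated in the snippet.

```python
import math

def p_correct_2pl(theta, a, b):
    """Two-parameter logistic IRT model: probability that a student with
    ability theta answers an item with discrimination a and difficulty b
    correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# When ability equals difficulty the probability is exactly 0.5,
# so a biased estimate of b shifts every ability inference with it.
p = p_correct_2pl(theta=0.0, a=1.0, b=0.0)
```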
Peer reviewed
Julian Marvin Jörs; Ernesto William De Luca – Technology, Knowledge and Learning, 2025
The real-time availability of information and the intelligence of information systems have changed the way we deal with information. Current research is primarily concerned with the interplay between internal and external memory, i.e., how much and which forms of cognitively demanding processes we handle internally and when we use external storage…
Descriptors: Ethics, Learning Processes, Technology Uses in Education, Influence of Technology