Showing 61 to 75 of 7,213 results
Victoria Crisp; Sylvia Vitello; Abdullah Ali Khan; Heather Mahy; Sarah Hughes – Research Matters, 2025
This research set out to enhance our understanding of the exam techniques and the types of written annotations or markings that learners may wish to use to support their thinking when taking digital multiple-choice exams. Additionally, we aimed to further explore the factors that contribute to learners writing less rough work and…
Descriptors: Computer Assisted Testing, Test Format, Multiple Choice Tests, Notetaking
Peer reviewed
Angela Chamberlain; Emily D'Arcy; Andrew J. O. Whitehouse; Kerry Wallace; Maya Hayden-Evans; Sonya Girdler; Benjamin Milbourn; Sven Bölte; Kiah Evans – Journal of Autism and Developmental Disorders, 2025
Purpose: The PEDI-CAT (ASD) is used to assess functioning of children and youth on the autism spectrum; however, current psychometric evidence is limited. This study aimed to explore the reliability, validity and acceptability of the PEDI-CAT (ASD) using a large Australian sample. Methods: Caregivers of 134 children and youth on the spectrum…
Descriptors: Autism Spectrum Disorders, Children, Youth, Test Reliability
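For context on the reliability analyses such a psychometric evaluation typically involves, here is a minimal sketch of one common internal-consistency estimate (Cronbach's alpha); the abstract does not specify the study's exact procedures, so this is illustrative only, with toy data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Internal-consistency reliability. `scores` is an
    examinees x items matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                      # number of items
    item_vars = scores.var(axis=0, ddof=1)   # per-item variance
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy example: 4 respondents x 3 items
print(cronbach_alpha([[3, 4, 3], [2, 2, 1], [4, 5, 4], [1, 2, 2]]))
```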
Peer reviewed
Kylie Gorney; Mark D. Reckase – Journal of Educational Measurement, 2025
In computerized adaptive testing, item exposure control methods are often used to provide a more balanced usage of the item pool. Many of the most popular methods, including the restricted method (Revuelta and Ponsoda), use a single maximum exposure rate to limit the proportion of times that each item is administered. However, Barrada et al.…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Banks
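As a sketch of how a single maximum exposure rate constrains item selection, in the spirit of the restricted method (the function, data, and fallback rule here are assumptions for illustration, not the authors' algorithm):

```python
def select_item(info, times_used, n_examinees, r_max=0.25, taken=frozenset()):
    """Pick the most informative item whose empirical exposure rate
    (times administered / examinees tested) is still below r_max."""
    eligible = [j for j in range(len(info))
                if j not in taken and times_used[j] / max(n_examinees, 1) < r_max]
    if not eligible:  # every remaining item is at the cap; fall back to all
        eligible = [j for j in range(len(info)) if j not in taken]
    return max(eligible, key=lambda j: info[j])

# Toy pool: item 1 is most informative but already overexposed (0.30 >= 0.25)
print(select_item(info=[0.4, 0.9, 0.6], times_used=[10, 30, 5], n_examinees=100))
```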
Peer reviewed
PDF on ERIC
Melodie Philhours; Kelly E. Fish – Research & Practice in Assessment, 2025
This study leverages data from direct assessments of learning (AoL) to build a dynamic model of student performance in competency exams related to computer technology. The analysis reveals three key predictors that strongly influence student success: performance on a practice exam, whether or not a student engaged in practice testing beforehand,…
Descriptors: Technological Literacy, Success, Tests, Drills (Practice)
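As an illustration of the kind of predictor-based modeling the abstract describes (the features and data below are invented placeholders, not the study's):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [practice-exam score, took a practice test (0/1)]
X = np.array([[72, 1], [55, 0], [90, 1], [48, 0], [81, 1], [60, 0]])
y = np.array([1, 0, 1, 0, 1, 0])             # 1 = passed the competency exam

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[78, 1]])[0, 1])  # estimated probability of passing
```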
Ohio Department of Education and Workforce, 2025
The Ohio Department of Education and Workforce, in response to Senate Bill 168 (135th General Assembly), initiated a pilot program in the 2024-2025 school year to test the feasibility of remotely administered and proctored state assessments. This pilot aimed to explore the potential of remote testing to enhance flexibility and accessibility for…
Descriptors: Examiners, Supervision, Electronic Learning, Computer Assisted Testing
Peer reviewed
Mingfeng Xue; Yunting Liu; Xingyao Xiao; Mark Wilson – Journal of Educational Measurement, 2025
Prompts play a crucial role in eliciting accurate outputs from large language models (LLMs). This study examines the effectiveness of an automatic prompt engineering (APE) framework for automatic scoring in educational measurement. We collected constructed-response data from 930 students across 11 items and used human scores as the true labels. A…
Descriptors: Computer Assisted Testing, Prompting, Educational Assessment, Automation
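A minimal sketch of the select-the-best-prompt loop such an APE framework implies, assuming a caller-supplied `llm_score(prompt, response)` function (hypothetical) and quadratically weighted kappa as the agreement metric:

```python
from sklearn.metrics import cohen_kappa_score

def prompt_quality(prompt, responses, human_scores, llm_score):
    """Agreement between LLM scores under this prompt and human labels."""
    preds = [llm_score(prompt, r) for r in responses]
    return cohen_kappa_score(human_scores, preds, weights="quadratic")

def best_prompt(candidates, responses, human_scores, llm_score):
    # Keep whichever candidate prompt best reproduces the human scores.
    return max(candidates,
               key=lambda p: prompt_quality(p, responses, human_scores, llm_score))
```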
Peer reviewed
Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty frequently rely on human judgments or shallow metrics, often failing to accurately capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension
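One simple baseline consistent with the motivation above (not the authors' method) is to regress difficulty labels on question embeddings; a sketch using sentence-transformers and ridge regression, with made-up data:

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import Ridge

questions = ["What color is the sky in the story?",
             "Why does the narrator distrust her brother's account?"]
difficulty = [0.2, 0.8]   # hypothetical difficulty labels in [0, 1]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
X = encoder.encode(questions)
model = Ridge().fit(X, difficulty)
print(model.predict(encoder.encode(["Who is the main character?"])))
```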
Peer reviewed
Guher Gorgun; Okan Bulut – Education and Information Technologies, 2024
In light of the widespread adoption of technology-enhanced learning and assessment platforms, there is a growing demand for innovative, high-quality, and diverse assessment questions. Automatic Question Generation (AQG) has emerged as a valuable solution, enabling educators and assessment developers to efficiently produce a large volume of test…
Descriptors: Computer Assisted Testing, Test Construction, Test Items, Automation
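As a toy illustration of AQG's core idea, mass-producing items from source content (real systems use LLMs or NLP pipelines; this cloze template is only a stand-in):

```python
def cloze_items(sentence, keywords):
    """Blank each keyword out of the sentence to make a cloze item."""
    return [{"stem": sentence.replace(kw, "_____"), "key": kw}
            for kw in keywords if kw in sentence]

print(cloze_items("Water boils at 100 degrees Celsius at sea level.",
                  ["100", "Celsius"]))
```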
Peer reviewed
Yang Zhen; Xiaoyan Zhu – Educational and Psychological Measurement, 2024
The pervasive issue of cheating in educational tests has emerged as a paramount concern within the realm of education, prompting scholars to explore diverse methodologies for identifying potential transgressors. While machine learning models have been extensively investigated for this purpose, the untapped potential of TabNet, an intricate deep…
Descriptors: Artificial Intelligence, Models, Cheating, Identification
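A minimal sketch of fitting TabNet to tabular test-taking features with the pytorch-tabnet package (the feature set and synthetic data are placeholders; the abstract does not specify the article's actual inputs):

```python
import numpy as np
from pytorch_tabnet.tab_model import TabNetClassifier

rng = np.random.default_rng(0)
X_train = rng.random((256, 8)).astype(np.float32)  # e.g., response times, score gains
y_train = rng.integers(0, 2, 256)                  # 1 = flagged as potential cheating

clf = TabNetClassifier(verbose=0)
clf.fit(X_train, y_train, max_epochs=10, patience=5)
print(clf.predict(rng.random((4, 8)).astype(np.float32)))
```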
Peer reviewed
Jian Zhao; Elaine Chapman; Peyman G. P. Sabet – Education Research and Perspectives, 2024
The launch of ChatGPT and the rapid proliferation of generative AI (GenAI) have brought transformative changes to education, particularly in the field of assessment. This has prompted a fundamental rethinking of traditional assessment practices, presenting both opportunities and challenges in evaluating student learning. While numerous studies…
Descriptors: Literature Reviews, Artificial Intelligence, Evaluation Methods, Student Evaluation
Peer reviewed
Finch, W. Holmes – Educational and Psychological Measurement, 2023
Psychometricians have devoted much research and attention to categorical item responses, leading to the development and widespread use of item response theory for the estimation of model parameters and identification of items that do not perform in the same way for examinees from different population subgroups (e.g., differential item functioning…
Descriptors: Test Bias, Item Response Theory, Computation, Methods
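For readers unfamiliar with DIF detection, here is a sketch of one classical approach for dichotomous items, the Mantel-Haenszel statistic on the ETS delta scale (the article covers broader methods; this is only the standard textbook case, with toy counts):

```python
import math

def mh_delta(strata):
    """Mantel-Haenszel DIF. Each stratum (one total-score level) is a
    2x2 table ((A, B), (C, D)): rows = reference/focal group,
    columns = correct/incorrect counts."""
    num = den = 0.0
    for (A, B), (C, D) in strata:
        T = A + B + C + D
        num += A * D / T
        den += B * C / T
    return -2.35 * math.log(num / den)   # |delta| >= 1.5 is commonly flagged

# Toy data: two score strata
print(mh_delta([((40, 10), (30, 20)), ((25, 5), (20, 10))]))
```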
Peer reviewed
Pan, Yiqin; Livne, Oren; Wollack, James A.; Sinharay, Sandip – Educational Measurement: Issues and Practice, 2023
In computerized adaptive testing, overexposure of items in the bank is a serious problem and might result in item compromise. We develop an item selection algorithm that utilizes the entire bank well and reduces the overexposure of items. The algorithm is based on collaborative filtering and selects an item in two stages. In the first stage, a set…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
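A rough two-stage sketch in the spirit the abstract describes (the neighbor count, similarity measure, and tie-breaking rule are all assumptions, not the published algorithm):

```python
import numpy as np

def select_item_cf(target, peers, peer_items, pool, exposure, k=5):
    """Stage 1: find the k peers whose responses on the target's
    administered items agree most; Stage 2: among items those peers saw,
    choose the least-exposed one still in the pool.
    `target`/`peers` map item id -> response; `peer_items` lists each
    peer's administered items; `exposure` maps item id -> exposure rate."""
    def sim(peer):
        shared = [i for i in target if i in peer]
        return np.mean([target[i] == peer[i] for i in shared]) if shared else 0.0
    nearest = sorted(range(len(peers)), key=lambda p: sim(peers[p]))[-k:]
    candidates = ({j for p in nearest for j in peer_items[p]} & set(pool)) or set(pool)
    return min(candidates, key=lambda j: exposure[j])
```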
Peer reviewed
Zahra Banitalebi; Masoomeh Estaji; Gavin T. L. Brown – Educational Technology & Society, 2025
The significance of teachers' assessment literacy (AL) was originally captured by the 1990 standards for teacher competence in educational assessment. What constitutes competence in assessment has changed with the widespread adoption of recent technological advances in educational assessment. Consequently, new measures are needed to measure Teacher Assessment…
Descriptors: Assessment Literacy, Computer Assisted Testing, Measurement Techniques, Questionnaires
Peer reviewed
Mo Zhang; Paul Deane; Andrew Hoang; Hongwen Guo; Chen Li – Educational Measurement: Issues and Practice, 2025
In this paper, we describe two empirical studies that demonstrate the application and modeling of keystroke logs in writing assessments. We illustrate two different approaches of modeling differences in writing processes: analysis of mean differences in handcrafted theory-driven features and use of large language models to identify stable personal…
Descriptors: Writing Tests, Computer Assisted Testing, Keyboarding (Data Entry), Writing Processes
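As a sketch of the kind of handcrafted, theory-driven process features the first approach implies (the threshold and feature names are assumptions):

```python
def keystroke_features(events, pause_ms=2000):
    """`events` is a time-ordered list of (timestamp_ms, key) pairs.
    Returns simple writing-process features: volume, tempo, pausing."""
    gaps = [t2 - t1 for (t1, _), (t2, _) in zip(events, events[1:])]
    return {
        "n_keystrokes": len(events),
        "mean_interkey_ms": sum(gaps) / len(gaps) if gaps else 0.0,
        "n_long_pauses": sum(g >= pause_ms for g in gaps),
    }

print(keystroke_features([(0, "T"), (180, "h"), (350, "e"), (4000, " ")]))
```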
Peer reviewed
Sukru Murat Cebeci; Selcuk Acar – Journal of Creative Behavior, 2025
This study presents the Cebeci Test of Creativity (CTC), a novel computerized assessment tool designed to address the limitations of traditional open-ended paper-and-pencil creativity tests, particularly the challenges of administering and manually scoring them. In this…
Descriptors: Creativity, Creativity Tests, Test Construction, Test Validity
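As an illustration of how creativity scoring can be automated (frequency-based originality is one common convention in divergent-thinking research; it is not necessarily the CTC's scoring rule):

```python
from collections import Counter

def originality_scores(responses):
    """Rarer answers in the sample earn higher originality (0 to ~1)."""
    counts = Counter(responses)
    n = len(responses)
    return {r: 1 - c / n for r, c in counts.items()}

uses = ["paperweight", "doorstop", "paperweight", "art sculpture"]
print(originality_scores(uses))
```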