Showing 151 to 165 of 7,203 results
Peer reviewed
Direct link
Matthew D. Coss – Language Learning & Technology, 2025
The extent to which writing modality (i.e., handwriting vs. keyboarding) impacts second-language (L2) writing assessment scores remains unclear. For alphabetic languages like English, research shows mixed results, documenting both equivalent and divergent scores between typed and handwritten tests (e.g., Barkaoui & Knouzi, 2018). However, for…
Descriptors: Computer Assisted Testing, Paper and Pencil Tests, Second Language Learning, Chinese
Peer reviewed
Direct link
Vy Le; Jayson M. Nissen; Xiuxiu Tang; Yuxiao Zhang; Amirreza Mehrabi; Jason W. Morphew; Hua Hua Chang; Ben Van Dusen – Physical Review Physics Education Research, 2025
In physics education research, instructors and researchers often use research-based assessments (RBAs) to assess students' skills and knowledge. In this paper, we support the development of a mechanics cognitive diagnostic to test and implement effective and equitable pedagogies for physics instruction. Adaptive assessments using cognitive…
Descriptors: Physics, Science Education, Scientific Concepts, Diagnostic Tests
Peer reviewed
PDF on ERIC | Download full text
Robert N. Prince – Numeracy, 2025
One of the effects of the COVID-19 pandemic was the rapid shift to replacing traditional, paper-based tests with their computer-based counterparts. In many cases, these new modes of delivering tests will remain in place for the foreseeable future. In South Africa, the National Benchmark Quantitative Literacy (QL) test was impelled to make this…
Descriptors: Benchmarking, Numeracy, Multiple Literacies, Paper and Pencil Tests
Joanna Williamson – Research Matters, 2025
Teachers, examiners and assessment experts know from experience that some candidates annotate exam questions. "Annotation" includes anything the candidate writes or draws outside of the designated response space, such as underlining, jotting, circling, sketching and calculating. Annotations are of interest because they may evidence…
Descriptors: Mathematics, Tests, Documentation, Secondary Education
Peer reviewed
Direct link
Ethan Roy; Mathieu Guillaume; Amandine Van Rinsveld; Project iLead Consortium; Bruce D. McCandliss – npj Science of Learning, 2025
Arithmetic fluency is regarded as a foundational math skill, typically measured as a single construct with pencil-and-paper-based timed assessments. We introduce a tablet-based assessment of single-digit fluency that captures individual trial response times across several embedded experimental contrasts of interest. A large (n = 824) cohort of…
Descriptors: Arithmetic, Mathematics Skills, Tablet Computers, Grade 3
Peer reviewed
Direct link
Michael Bass; Scott Morris; Sheng Zhang – Measurement: Interdisciplinary Research and Perspectives, 2025
Administration of patient-reported outcome measures (PROs) using multidimensional computer adaptive tests (MCATs) has the potential to reduce patient burden, but the efficiency of MCAT depends on the degree to which an individual's responses fit the psychometric properties of the assessment. Assessing patients' symptom burden through the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Patients, Outcome Measures
Peer reviewed
Direct link
Yi-Jui I. Chen; Yi-Jhen Wu; Yi-Hsin Chen; Robin Irey – Journal of Psychoeducational Assessment, 2025
A short form of the 60-item computer-based orthographic processing assessment (long-form COPA or COPA-LF) was developed. The COPA-LF consists of five skills, including rapid perception, access, differentiation, correction, and arrangement. Thirty items from the COPA-LF were selected for the short-form COPA (COPA-SF) based on cognitive diagnostic…
Descriptors: Computer Assisted Testing, Test Length, Test Validity, Orthographic Symbols
Peer reviewed
Direct link
Xuefan Li; Marco Zappatore; Tingsong Li; Weiwei Zhang; Sining Tao; Xiaoqing Wei; Xiaoxu Zhou; Naiqing Guan; Anny Chan – IEEE Transactions on Learning Technologies, 2025
The integration of generative artificial intelligence (GAI) into educational settings offers unprecedented opportunities to enhance the efficiency of teaching and the effectiveness of learning, particularly within online platforms. This study evaluates the development and application of a customized GAI-powered teaching assistant, trained…
Descriptors: Artificial Intelligence, Technology Uses in Education, Student Evaluation, Academic Achievement
Peer reviewed
Direct link
Jonathan Liu; Seth Poulsen; Erica Goodwin; Hongxuan Chen; Grace Williams; Yael Gertner; Diana Franklin – ACM Transactions on Computing Education, 2025
Algorithm design is a vital skill developed in most undergraduate Computer Science (CS) programs, but few research studies focus on pedagogy related to algorithms coursework. To understand the work that has been done in the area, we present a systematic survey and literature review of CS Education studies. We search for research that is both…
Descriptors: Teaching Methods, Algorithms, Design, Computer Science Education
Peer reviewed
Direct link
Olaf Lund; Rune Raudeberg; Hans Johansen; Mette-Line Myhre; Espen Walderhaug; Amir Poreh; Jens Egeland – Journal of Attention Disorders, 2025
Objective: The Conners Continuous Performance Test-3 (CCPT-3) is a computerized test of attention frequently used in clinical neuropsychology. In the present factor analysis, we seek to assess the factor structure of the CCPT-3 and evaluate the suggested dimensions in the CCPT-3 Manual. Method: Data from a mixed clinical sample of 931 adults…
Descriptors: Factor Structure, Factor Analysis, Attention Span, Measures (Individuals)
Peer reviewed
PDF on ERIC | Download full text
Ilhama Mammadova; Fatime Ismayilli; Elnaz Aliyeva; Narmin Mammadova – Educational Process: International Journal, 2025
Background/purpose: Artificial Intelligence (AI) is increasingly shaping assessment practices in higher education, promising faster feedback and reduced instructor workload while also raising concerns about fairness and transparency. This study examines how AI technologies are transforming assessment processes and the experiences of stakeholders.…
Descriptors: Artificial Intelligence, Student Evaluation, Technology Uses in Education, Undergraduate Students
Peer reviewed
Direct link
Wallace N. Pinto Jr.; Jinnie Shin – Journal of Educational Measurement, 2025
In recent years, the application of explainability techniques to automated essay scoring and automated short-answer grading (ASAG) models, particularly those based on transformer architectures, has gained significant attention. However, the reliability and consistency of these techniques remain underexplored. This study systematically investigates…
Descriptors: Automation, Grading, Computer Assisted Testing, Scoring
Peer reviewed
PDF on ERIC | Download full text
Daniel Lupiya Mpolomoka – Pedagogical Research, 2025
Overview: This systematic review explores the utilization of artificial intelligence (AI) for assessment, grading, and feedback in higher education. The review aims to establish how AI technologies enhance efficiency, scalability, and personalized learning experiences in educational settings, while addressing associated challenges that arise due…
Descriptors: Artificial Intelligence, Higher Education, Evaluation Methods, Literature Reviews
Peer reviewed
Direct link
Nathaniel Owen; Ananda Senel – Review of Education, 2025
Transparency in high-stakes English language assessment has become crucial for ensuring fairness and maintaining assessment validity in language testing. However, our understanding of how transparency is conceptualised and implemented remains fragmented, particularly in relation to stakeholder experiences and technological innovations. This study…
Descriptors: Accountability, High Stakes Tests, Language Tests, Computer Assisted Testing
Peer reviewed
Direct link
Sun-Joo Cho; Amanda Goodwin; Jorge Salas; Sophia Mueller – Grantee Submission, 2025
This study incorporates a random forest (RF) approach to probe complex interactions and nonlinearity among predictors into an item response model with the goal of using a hybrid approach to outperform either an RF or explanatory item response model (EIRM) only in explaining item responses. In the specified model, called EIRM-RF, predicted values…
Descriptors: Item Response Theory, Artificial Intelligence, Statistical Analysis, Predictor Variables