Showing 1 to 15 of 669 results
Peer reviewed
Swapna Haresh Teckwani; Amanda Huee-Ping Wong; Nathasha Vihangi Luke; Ivan Cherh Chiet Low – Advances in Physiology Education, 2024
The advent of artificial intelligence (AI), particularly large language models (LLMs) like ChatGPT and Gemini, has significantly impacted the educational landscape, offering unique opportunities for learning and assessment. In the realm of written assessment grading, traditionally viewed as a laborious and subjective process, this study sought to…
Descriptors: Accuracy, Reliability, Computational Linguistics, Standards
Peer reviewed
Joakim Wallmark; James O. Ramsay; Juan Li; Marie Wiberg – Journal of Educational and Behavioral Statistics, 2024
Item response theory (IRT) models the relationship between the possible scores on a test item against a test taker's attainment of the latent trait that the item is intended to measure. In this study, we compare two models for tests with polytomously scored items: the optimal scoring (OS) model, a nonparametric IRT model based on the principles of…
Descriptors: Item Response Theory, Test Items, Models, Scoring
Peer reviewed
Yuang Wei; Bo Jiang – IEEE Transactions on Learning Technologies, 2024
Understanding student cognitive states is essential for assessing human learning. Deep neural network (DNN)-inspired cognitive state prediction methods have improved prediction performance significantly; however, DNNs' lack of explainability and the unitary scoring approach fail to reveal the factors influencing human learning. Identifying…
Descriptors: Cognitive Mapping, Models, Prediction, Short Term Memory
Peer reviewed
Dadi Ramesh; Suresh Kumar Sanampudi – European Journal of Education, 2024
Automatic essay scoring (AES) is an essential educational application of natural language processing. Automating this process alleviates the grading burden while increasing the reliability and consistency of assessment. With advances in text embedding libraries and neural network models, AES systems have achieved good results in terms of accuracy.…
Descriptors: Scoring, Essays, Writing Evaluation, Memory
Peer reviewed
Elizabeth L. Wetzler; Kenneth S. Cassidy; Margaret J. Jones; Chelsea R. Frazier; Nickalous A. Korbut; Chelsea M. Sims; Shari S. Bowen; Michael Wood – Teaching of Psychology, 2025
Background: Generative artificial intelligence (AI) represents a potentially powerful, time-saving tool for grading student essays. However, little is known about how AI-generated essay scores compare to human instructor scores. Objective: The purpose of this study was to compare the essay grading scores produced by AI with those of human…
Descriptors: Essays, Writing Evaluation, Scores, Evaluators
Peer reviewed
Pinot de Moira, Anne; Wheadon, Christopher; Christodoulou, Daisy – Research in Education, 2022
Writing is generally assessed internationally using rubric-based approaches, but a growing body of evidence suggests that the reliability of such approaches is poor. In contrast, comparative judgement studies suggest that open-ended tasks such as writing can be assessed with greater reliability. Many previous studies, however,…
Descriptors: Writing Evaluation, Classification, Accuracy, Scoring Rubrics
Peer reviewed
Peter Daly; Emmanuelle Deglaire – Innovations in Education and Teaching International, 2025
AI-enabled assessment of student papers has the potential to provide both summative and formative feedback and reduce the time spent on grading. Using auto-ethnography, this study compares AI-enabled and human assessment of business student examination papers in a law module based on previously established rubrics. Examination papers were…
Descriptors: Artificial Intelligence, Computer Software, Technology Integration, College Faculty
Sinclair, Andrea L., Ed.; Thacker, Arthur, Ed. – Human Resources Research Organization (HumRRO), 2019
These are the appendices for the technical report, "An Investigation of the Comparability of Commission-Approved Teaching Performance Assessment Models." California's Commission on Teacher Credentialing (Commission) requires all programs of preliminary multiple and single subject teacher preparation to use a Commission-approved Teaching…
Descriptors: Performance Based Assessment, Preservice Teachers, Models, Scoring Rubrics
Peer reviewed
Harrison, Scott; Kroehne, Ulf; Goldhammer, Frank; Lüdtke, Oliver; Robitzsch, Alexander – Large-scale Assessments in Education, 2023
Background: Mode effects, the variations in item and scale properties attributed to the mode of test administration (paper vs. computer), have stimulated research around test equivalence and trend estimation in PISA. The PISA assessment framework provides the backbone to the interpretation of the results of the PISA test scores. However, an…
Descriptors: Scoring, Test Items, Difficulty Level, Foreign Countries
Peer reviewed
PDF on ERIC
Lina Listiana; Nadiya Mutiara Loka; Yuni Gayatri – Journal of Biological Education Indonesia (Jurnal Pendidikan Biologi Indonesia), 2023
Students' low critical thinking and collaboration skills in biology learning are attributed to the dominance of conventional instruction, prompting the need for alternative learning strategies. This study aims to determine the effect of the GITTW learning strategy on students' critical thinking and collaboration skills. The research design used was…
Descriptors: Critical Thinking, Cooperative Learning, Science Instruction, Scoring Rubrics
Peer reviewed
Gyamfi, George; Hanna, Barbara; Khosravi, Hassan – Assessment & Evaluation in Higher Education, 2022
Engaging students in the creation of learning resources is an effective way of developing a repository of revision items. However, a selection process is needed to separate high- from low-quality resources as some of the materials created by students can be ineffective, inappropriate or incorrect. In this study, we share our experiences and…
Descriptors: Peer Evaluation, Student Developed Materials, Educational Technology, Scoring
Peer reviewed
Jordan M. Wheeler; Allan S. Cohen; Shiyu Wang – Journal of Educational and Behavioral Statistics, 2024
Topic models are mathematical and statistical models used to analyze textual data. The objective of topic models is to gain information about the latent semantic space of a set of related textual data. The semantic space of a set of textual data contains the relationship between documents and words and how they are used. Topic models are becoming…
Descriptors: Semantics, Educational Assessment, Evaluators, Reliability
Dongmei Li; Shalini Kapoor; Ann Arthur; Chi-Yu Huang; YoungWoo Cho; Chen Qiu; Hongling Wang – ACT Education Corp., 2025
Starting in April 2025, ACT will introduce enhanced forms of the ACT® test for national online testing, with a full rollout to all paper and online test takers in national, state and district, and international test administrations by Spring 2026. ACT introduced major updates by changing the test lengths and testing times, providing more time per…
Descriptors: College Entrance Examinations, Testing, Change, Scoring
Jeff Allen; Ty Cruce – ACT Education Corp., 2025
This report summarizes some of the evidence supporting interpretations of scores from the enhanced ACT, focusing on reliability, concurrent validity, predictive validity, and score comparability. The authors argue that the evidence presented in this report supports the interpretation of scores from the enhanced ACT as measures of high school…
Descriptors: College Entrance Examinations, Testing, Change, Scores
Peer reviewed
Kim, Stella Yun; Lee, Won-Chan – Applied Measurement in Education, 2023
This study evaluates various scoring methods including number-correct scoring, IRT theta scoring, and hybrid scoring in terms of scale-score stability over time. A simulation study was conducted to examine the relative performance of five scoring methods in terms of preserving the first two moments of scale scores for a population in a chain of…
Descriptors: Scoring, Comparative Analysis, Item Response Theory, Simulation