Publication Date
In 2025: 5
Since 2024: 16
Since 2021 (last 5 years): 70
Since 2016 (last 10 years): 165
Since 2006 (last 20 years): 368
Author
Attali, Yigal: 5
Donovan, Jenny: 3
Kim, Sooyeon: 3
Lennon, Melissa: 3
Linn, Marcia C.: 3
Sinharay, Sandip: 3
Zechner, Klaus: 3
Allen, Melissa M.: 2
Baldwin, Peter: 2
Barkaoui, Khaled: 2
Bernstein, Jared: 2
Audience
Practitioners: 1
Researchers: 1
Location
China: 17
Australia: 11
Netherlands: 8
Taiwan: 8
United Kingdom: 7
Germany: 6
Turkey: 6
United Kingdom (England): 6
United States: 6
Iran: 5
Japan: 5
Laws, Policies, & Programs
Every Student Succeeds Act…: 2
No Child Left Behind Act 2001: 2
What Works Clearinghouse Rating
Does not meet standards: 1
James Riddlesperger – ACT Education Corp., 2025
ACT announced a series of enhancements designed to modernize the ACT test and offer students more choice and flexibility in demonstrating their readiness for life after high school. The enhancements provide students more flexibility by allowing them to choose whether to take the science assessment, thereby reducing the test length by up to…
Descriptors: College Entrance Examinations, Testing, Change, Test Length
Joakim Wallmark; James O. Ramsay; Juan Li; Marie Wiberg – Journal of Educational and Behavioral Statistics, 2024
Item response theory (IRT) models the relationship between the possible scores on a test item and a test taker's attainment of the latent trait that the item is intended to measure. In this study, we compare two models for tests with polytomously scored items: the optimal scoring (OS) model, a nonparametric IRT model based on the principles of…
Descriptors: Item Response Theory, Test Items, Models, Scoring
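For readers unfamiliar with parametric IRT models for polytomously scored items such as those compared above, the sketch below computes category probabilities under Samejima's graded response model, one standard parametric choice; the discrimination, thresholds, and trait value are invented for illustration and are not taken from the article.

```python
# A minimal sketch of Samejima's graded response model (GRM), a common
# parametric IRT model for polytomously scored items. Parameter values
# below are made up for illustration.
import numpy as np

def grm_category_probs(theta, a, b):
    """Probability of each score category 0..m for one item.

    theta : latent trait value of the test taker
    a     : item discrimination
    b     : increasing threshold parameters b_1 < ... < b_m
    """
    # Cumulative probabilities P*(X >= k) for k = 1..m
    cum = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b))))
    # Pad with P*(X >= 0) = 1 and P*(X >= m+1) = 0, then difference
    upper = np.concatenate(([1.0], cum))
    lower = np.concatenate((cum, [0.0]))
    return upper - lower

# Example: a 4-category item (scores 0-3) and a test taker with theta = 0.5
probs = grm_category_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.5])
print(probs, probs.sum())  # category probabilities sum to 1
```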
Jessica Stinson – ProQuest LLC, 2024
Intelligence tests have been used in the United States since the early 1900s for assessing soldiers during World War I (Kaufman & Harrison, 2008; White & Hall, 1980). Presently, cognitive assessments are used in school, civil service, military, clinical, and industry settings (White & Hall, 1980). Although the results of these…
Descriptors: Graduate Students, Masters Programs, Doctoral Programs, Comparative Analysis
Yuang Wei; Bo Jiang – IEEE Transactions on Learning Technologies, 2024
Understanding student cognitive states is essential for assessing human learning. The deep neural network (DNN)-inspired cognitive state prediction method improved prediction performance significantly; however, the lack of explainability of DNNs and the unitary scoring approach fail to reveal the factors influencing human learning. Identifying…
Descriptors: Cognitive Mapping, Models, Prediction, Short Term Memory
Dadi Ramesh; Suresh Kumar Sanampudi – European Journal of Education, 2024
Automatic essay scoring (AES) is an essential educational application in natural language processing. This automated process alleviates the grading burden while increasing the reliability and consistency of the assessment. With the advances in text embedding libraries and neural network models, AES systems have achieved good results in terms of accuracy.…
Descriptors: Scoring, Essays, Writing Evaluation, Memory
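As a rough illustration of the general AES recipe the abstract describes (turn essay text into features, then learn a mapping to human scores), here is a toy pipeline using TF-IDF features and ridge regression; the systems discussed in the article rely on neural text embeddings, and the essays and scores below are invented placeholders.

```python
# A toy automated essay scoring (AES) pipeline: featurize essays, then fit a
# regressor against human scores. Illustrative baseline only, not the
# article's method; texts and scores are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

train_essays = [
    "The experiment shows that temperature affects the reaction rate.",
    "Plants need light because light drives photosynthesis.",
    "The answer is because it just happens that way.",
]
train_scores = [3.0, 4.0, 1.0]  # human-assigned scores

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(train_essays, train_scores)

print(model.predict(["Light drives photosynthesis, so plants need it."]))
```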
Raykov, Tenko – Measurement: Interdisciplinary Research and Perspectives, 2023
This software review discusses the capabilities of Stata to conduct item response theory modeling. The commands needed for fitting the popular one-, two-, and three-parameter logistic models are initially discussed. The procedure for testing the discrimination parameter equality in the one-parameter model is then outlined. The commands for fitting…
Descriptors: Item Response Theory, Models, Comparative Analysis, Item Analysis
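For reference, the one-, two-, and three-parameter logistic models discussed in the review are conventionally written as follows (the 3PL form is shown; the 2PL sets c_i = 0, and the 1PL additionally fixes a common discrimination across items):

```latex
% Three-parameter logistic (3PL) item response function
P_i(\theta) = c_i + \frac{1 - c_i}{1 + \exp\!\left[-a_i(\theta - b_i)\right]}
```

Here theta is the latent trait, b_i the item difficulty, a_i the discrimination, and c_i the lower-asymptote (guessing) parameter.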
Baldwin, Peter; Clauser, Brian E. – Journal of Educational Measurement, 2022
While score comparability across test forms typically relies on common (or randomly equivalent) examinees or items, innovations in item formats, test delivery, and efforts to extend the range of score interpretation may require a special data collection before examinees or items can be used in this way--or may be incompatible with common examinee…
Descriptors: Scoring, Testing, Test Items, Test Format
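As background on the conventional designs the authors contrast with: when two forms are given to randomly equivalent groups, a linear equating function places Form X scores on the Form Y scale by matching the first two moments of the two score distributions.

```latex
% Linear equating of a Form X score x onto the Form Y scale
l_Y(x) = \mu_Y + \frac{\sigma_Y}{\sigma_X}\,(x - \mu_X)
```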
Harrison, Scott; Kroehne, Ulf; Goldhammer, Frank; Lüdtke, Oliver; Robitzsch, Alexander – Large-scale Assessments in Education, 2023
Background: Mode effects, the variations in item and scale properties attributed to the mode of test administration (paper vs. computer), have stimulated research around test equivalence and trend estimation in PISA. The PISA assessment framework provides the backbone for the interpretation of PISA test scores. However, an…
Descriptors: Scoring, Test Items, Difficulty Level, Foreign Countries
Jordan M. Wheeler; Allan S. Cohen; Shiyu Wang – Journal of Educational and Behavioral Statistics, 2024
Topic models are mathematical and statistical models used to analyze textual data. The objective of topic models is to gain information about the latent semantic space of a set of related textual data. The semantic space of a set of textual data contains the relationship between documents and words and how they are used. Topic models are becoming…
Descriptors: Semantics, Educational Assessment, Evaluators, Reliability
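As a concrete illustration of what a topic model recovers (the document-topic and topic-word relationships that make up the latent semantic space described above), here is a minimal latent Dirichlet allocation fit; the documents and the choice of two topics are invented, and the article's own modeling choices may differ.

```python
# A minimal sketch of fitting a topic model (latent Dirichlet allocation) to a
# handful of short texts; documents and topic count are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the cell divides and the cell grows",
    "forces act on the object and change its motion",
    "mitosis produces two identical cells",
]

counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Document-topic proportions: each row sums to 1 and locates a document
# in the latent semantic space described in the abstract.
print(lda.transform(counts))
```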
Dongmei Li; Shalini Kapoor; Ann Arthur; Chi-Yu Huang; YoungWoo Cho; Chen Qiu; Hongling Wang – ACT Education Corp., 2025
Starting in April 2025, ACT will introduce enhanced forms of the ACT® test for national online testing, with a full rollout to all paper and online test takers in national, state and district, and international test administrations by Spring 2026. ACT introduced major updates by changing the test lengths and testing times, providing more time per…
Descriptors: College Entrance Examinations, Testing, Change, Scoring
Jeff Allen; Ty Cruce – ACT Education Corp., 2025
This report summarizes some of the evidence supporting interpretations of scores from the enhanced ACT, focusing on reliability, concurrent validity, predictive validity, and score comparability. The authors argue that the evidence presented in this report supports the interpretation of scores from the enhanced ACT as measures of high school…
Descriptors: College Entrance Examinations, Testing, Change, Scores
Kim, Stella Yun; Lee, Won-Chan – Applied Measurement in Education, 2023
This study evaluates various scoring methods including number-correct scoring, IRT theta scoring, and hybrid scoring in terms of scale-score stability over time. A simulation study was conducted to examine the relative performance of five scoring methods in terms of preserving the first two moments of scale scores for a population in a chain of…
Descriptors: Scoring, Comparative Analysis, Item Response Theory, Simulation
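A toy contrast of two of the scoring methods compared in the study appears below: number-correct scoring simply sums the scored responses, while IRT theta scoring estimates the latent trait from the response pattern (here by maximum likelihood under a 2PL model). Item parameters and the response vector are invented, and hybrid scoring is not shown.

```python
# Number-correct scoring versus IRT theta scoring (maximum likelihood under a
# 2PL model) for one invented response pattern.
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.0, 1.3, 0.8, 1.5])   # discriminations
b = np.array([-1.0, 0.0, 0.5, 1.2])  # difficulties
x = np.array([1, 1, 0, 1])           # scored responses (1 = correct)

number_correct = x.sum()

def neg_log_likelihood(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

theta_hat = minimize_scalar(neg_log_likelihood, bounds=(-4, 4), method="bounded").x

print(number_correct, round(theta_hat, 3))
```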
Kevin C. Haudek; Xiaoming Zhai – International Journal of Artificial Intelligence in Education, 2024
Argumentation, a key scientific practice presented in the "Framework for K-12 Science Education," requires students to construct and critique arguments, but timely evaluation of arguments in large-scale classrooms is challenging. Recent work has shown the potential of automated scoring systems for open response assessments, leveraging…
Descriptors: Accuracy, Persuasive Discourse, Artificial Intelligence, Learning Management Systems
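Because work like this is typically evaluated by agreement between machine-assigned and human-assigned scores, the sketch below computes one common agreement statistic, Cohen's kappa, on invented labels; the article may report different metrics.

```python
# Human-machine score agreement for an automated argumentation scorer, using
# Cohen's kappa. The score labels below are invented; the article may report
# other metrics (e.g., quadratic-weighted kappa).
from sklearn.metrics import cohen_kappa_score

human_scores =   [0, 1, 2, 2, 1, 0, 2, 1]
machine_scores = [0, 1, 2, 1, 1, 0, 2, 2]

print(cohen_kappa_score(human_scores, machine_scores))
```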
Puhan, Gautam; Kim, Sooyeon – Journal of Educational Measurement, 2022
As a result of the COVID-19 pandemic, at-home testing has become a popular delivery mode in many testing programs. When programs offer at-home testing to expand their service, the score comparability between test takers testing remotely and those testing in a test center is critical. This article summarizes statistical procedures that could be…
Descriptors: Scores, Scoring, Comparative Analysis, Testing
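One simple check among the statistical procedures such a comparison might involve is shown below: a standardized mean difference between scores from at-home and test-center administrations. The data are simulated, and the article's recommended procedures may be more elaborate (for example, conditioning on examinee ability).

```python
# Standardized mean difference (Cohen's d) between at-home and test-center
# score distributions, on simulated data.
import numpy as np

rng = np.random.default_rng(0)
center_scores = rng.normal(500, 100, size=2000)
remote_scores = rng.normal(495, 100, size=2000)

pooled_sd = np.sqrt((center_scores.var(ddof=1) + remote_scores.var(ddof=1)) / 2)
cohens_d = (remote_scores.mean() - center_scores.mean()) / pooled_sd
print(round(cohens_d, 3))
```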
Wind, Stefanie A. – Measurement: Interdisciplinary Research and Perspectives, 2022
In many performance assessments, one or two raters from the complete rater pool score each performance, resulting in a sparse rating design, where there are limited observations of each rater relative to the complete sample of students. Although sparse rating designs can be constructed to facilitate estimation of student achievement, the…
Descriptors: Evaluators, Bias, Identification, Performance Based Assessment
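The toy simulation below illustrates the core difficulty the abstract raises: when each performance is scored by a single rater from the pool, naive rater-level summaries mix rater severity with the ability of the particular students that rater happened to score. All quantities are invented, and this is not the estimation approach used in the article.

```python
# A toy simulation of a sparse rating design: each performance is scored by a
# single rater, so every rater sees only a small slice of the student sample.
import numpy as np

rng = np.random.default_rng(1)
n_students, n_raters = 100, 10
ability = rng.normal(0.0, 1.0, n_students)   # true student achievement
severity = rng.normal(0.0, 0.5, n_raters)    # true rater severity

# Assign exactly one rater to each performance, ten performances per rater.
rater_of = rng.permutation(np.repeat(np.arange(n_raters), n_students // n_raters))
scores = ability - severity[rater_of] + rng.normal(0.0, 0.3, n_students)

# Naive severity estimate from each rater's mean score: with so few
# observations per rater, it confounds true severity with the average ability
# of the students that rater happened to score.
naive = np.array([-scores[rater_of == r].mean() for r in range(n_raters)])
print(np.corrcoef(naive, severity)[0, 1])
```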