Haeju Lee; Kyung Yong Kim – Journal of Educational Measurement, 2025
When no prior information about differential item functioning (DIF) exists for the items in a test, either the rank-based or the iterative purification procedure might be preferred. Rank-based purification selects anchor items based on a preliminary DIF test. For a preliminary DIF test, likelihood ratio test (LRT) based approaches (e.g.,…
Descriptors: Test Items, Equated Scores, Test Bias, Accuracy
Tom Benton – Practical Assessment, Research & Evaluation, 2025
This paper proposes an extension of linear equating that may be useful in one of two fairly common assessment scenarios. One is where different students have taken different combinations of test forms. This might occur, for example, where students have some free choice over the exam papers they take within a particular qualification. In this…
Descriptors: Equated Scores, Test Format, Test Items, Computation
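The Benton abstract builds on standard linear equating, whose core idea is to map a form-X score to the form-Y score with the same standardized deviate. The paper's extension is not shown in the snippet; the sketch below is only the baseline method, with an illustrative function name:

```python
import statistics

def linear_equate(x, scores_x, scores_y):
    """Linear equating: map score x from form X to the form Y scale so that
    (x - mean_X) / sd_X == (y - mean_Y) / sd_Y."""
    mu_x, sd_x = statistics.mean(scores_x), statistics.pstdev(scores_x)
    mu_y, sd_y = statistics.mean(scores_y), statistics.pstdev(scores_y)
    return mu_y + (sd_y / sd_x) * (x - mu_x)
```

For example, if form X scores have mean 5 and SD 5 while form Y scores have mean 20 and SD 10, an X score of 10 (one SD above the X mean) equates to 30 (one SD above the Y mean).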
Jianbin Fu; TsungHan Ho; Xuan Tan – Practical Assessment, Research & Evaluation, 2025
Item parameter estimation using an item response theory (IRT) model with fixed ability estimates is useful in equating with small samples on anchor items. The current study explores the impact of three ability estimation methods (weighted likelihood estimation [WLE], maximum a posteriori [MAP], and posterior ability distribution estimation [PST])…
Descriptors: Item Response Theory, Test Items, Computation, Equated Scores
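The Fu, Ho, and Tan abstract concerns IRT item calibration with fixed ability estimates (WLE, MAP, PST). The study's own estimation code is not shown; as context, the two-parameter logistic (2PL) response model underlying such calibration can be sketched as follows (function name is illustrative):

```python
import math

def p_correct_2pl(theta, a, b):
    """2PL IRT model: probability of a correct response given ability theta,
    item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

When theta equals the item difficulty b, the model gives a probability of exactly 0.5; item parameter estimation searches for the (a, b) values that best fit observed responses at the fixed abilities.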
Mark Wilson – Journal of Educational and Behavioral Statistics, 2024
This article introduces a new framework for articulating how educational assessments can be related to teacher uses in the classroom. It articulates three levels of assessment: macro (use of standardized tests), meso (externally developed items), and micro (on-the-fly in the classroom). The first level is the usual context for educational…
Descriptors: Educational Assessment, Measurement, Standardized Tests, Test Items
Marc Brysbaert – Cognitive Research: Principles and Implications, 2024
Experimental psychology is witnessing an increase in research on individual differences, which requires the development of new tasks that can reliably assess variations among participants. To do this, cognitive researchers need statistical methods that many researchers have not learned during their training. The lack of expertise can pose…
Descriptors: Experimental Psychology, Individual Differences, Statistical Analysis, Task Analysis
Inga Laukaityte; Marie Wiberg – Practical Assessment, Research & Evaluation, 2024
The overall aim was to examine the effects of differences in group ability and of features of the anchor test form on equating bias and the standard error of equating (SEE), using both real and simulated data. Chained kernel equating, poststratification kernel equating, and circle-arc equating were studied. A college admissions test with four different…
Descriptors: Ability Grouping, Test Items, College Entrance Examinations, High Stakes Tests
Karina Mostert; Clarisse van Rensburg; Reitumetse Machaba – Journal of Applied Research in Higher Education, 2024
Purpose: This study examined the psychometric properties of intention to drop out and study satisfaction measures for first-year South African students. The factorial validity, item bias, measurement invariance and reliability were tested. Design/methodology/approach: A cross-sectional design was used. For the study on intention to drop out, 1,820…
Descriptors: Intention, Potential Dropouts, Student Satisfaction, Test Items