Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 4
  Since 2016 (last 10 years): 5
  Since 2006 (last 20 years): 6
Descriptor
  Guidelines: 7
  Test Length: 7
  Test Items: 6
  Simulation: 5
  Sample Size: 4
  Comparative Analysis: 3
  Error of Measurement: 3
  Item Analysis: 3
  Item Response Theory: 3
  Correlation: 2
  Effect Size: 2
Author
  Andersson, Björn: 1
  Banks, Kathleen: 1
  Cappaert, Kevin: 1
  Choi, Youn-Jeng: 1
  Goodrich, J. Marc: 1
  Guo, Wenjing: 1
  Huang, Feifei: 1
  Koziol, Natalie A.: 1
  Kárász, Judit T.: 1
  Lee, Won-Chan: 1
  Lewis, Charles: 1
Publication Type
  Reports - Research: 7
  Journal Articles: 6
Guo, Wenjing; Choi, Youn-Jeng – Educational and Psychological Measurement, 2023
Determining the number of dimensions is extremely important in applying item response theory (IRT) models to data. Traditional and revised parallel analyses have been proposed within the factor analysis framework, and both have shown some promise in assessing dimensionality. However, their performance in the IRT framework has not been…
Descriptors: Item Response Theory, Evaluation Methods, Factor Analysis, Guidelines
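The traditional parallel analysis this entry evaluates can be sketched in a few lines: retain as many dimensions as there are observed eigenvalues exceeding the average eigenvalues of random data of the same shape. This is a generic sketch of Horn's procedure, not the revised variant or the IRT adaptation the article studies.

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Retain dimensions whose observed correlation-matrix eigenvalues
    exceed the mean eigenvalues of same-shaped random normal data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.empty((n_sims, p))
    for s in range(n_sims):
        sim = rng.standard_normal((n, p))
        rand[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    return int(np.sum(obs > rand.mean(axis=0)))

# one common factor plus noise: parallel analysis should retain 1 dimension
rng = np.random.default_rng(1)
f = rng.standard_normal((500, 1))
X = f @ np.ones((1, 6)) + 0.5 * rng.standard_normal((500, 6))
print(parallel_analysis(X))
```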
Wang, Shaojie; Zhang, Minqiang; Lee, Won-Chan; Huang, Feifei; Li, Zonglong; Li, Yixing; Yu, Sufang – Journal of Educational Measurement, 2022
Traditional IRT characteristic curve linking methods ignore parameter estimation errors, which may undermine the accuracy of estimated linking constants. Two new linking methods are proposed that take into account parameter estimation errors. The item- (IWCC) and test-information-weighted characteristic curve (TWCC) methods employ weighting…
Descriptors: Item Response Theory, Error of Measurement, Accuracy, Monte Carlo Methods
Kárász, Judit T.; Széll, Krisztián; Takács, Szabolcs – Quality Assurance in Education: An International Perspective, 2023
Purpose: Based on the general formula, which depends on the length and difficulty of the test, the number of respondents and the number of ability levels, this study aims to provide a closed formula for the adaptive tests with medium difficulty (probability of solution is p = 1/2) to determine the accuracy of the parameters for each item and in…
Descriptors: Test Length, Probability, Comparative Analysis, Difficulty Level
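The closed formula itself is behind the truncation, but the standard result it builds on is easy to state: when every administered item is answered at p = 1/2 (difficulty matched to ability, as in a well-targeted adaptive test under a Rasch model), each item contributes Fisher information p(1 − p) = 0.25, so the standard error of the ability estimate is 2/√n for n items. The sketch below shows that textbook relation, not the paper's formula.

```python
import math

def rasch_se_at_match(n_items):
    """SE of a Rasch ability estimate when each item is answered at
    p = 1/2: per-item Fisher information is p*(1-p) = 0.25, so the
    total information is 0.25*n and SE = 1/sqrt(0.25*n) = 2/sqrt(n)."""
    return 1.0 / math.sqrt(0.25 * n_items)

for n in (10, 25, 100):
    print(n, round(rasch_se_at_match(n), 3))
# quadrupling test length halves the standard error
```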
Koziol, Natalie A.; Goodrich, J. Marc; Yoon, HyeonJin – Educational and Psychological Measurement, 2022
Differential item functioning (DIF) is often used to examine validity evidence of alternate form test accommodations. Unfortunately, traditional approaches for evaluating DIF are prone to selection bias. This article proposes a novel DIF framework that capitalizes on regression discontinuity design analysis to control for selection bias. A…
Descriptors: Regression (Statistics), Item Analysis, Validity, Testing Accommodations
Andersson, Björn – Journal of Educational Measurement, 2016
In observed-score equipercentile equating, the goal is to make scores on two scales or tests measuring the same construct comparable by matching the percentiles of the respective score distributions. If the tests consist of different items with multiple categories for each item, a suitable model for the responses is a polytomous item response…
Descriptors: Equated Scores, Item Response Theory, Error of Measurement, Tests
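The equipercentile idea in this entry — match scores on two forms by matching percentile ranks — can be sketched directly from observed score samples. This is a bare observed-score version for illustration; the article works with polytomous IRT models and their standard errors, which this omits.

```python
import numpy as np

def equipercentile_equate(x_scores, y_scores, x_new):
    """Map form-X scores onto the form-Y scale by matching percentile
    ranks of the two observed score distributions."""
    x_sorted = np.sort(x_scores)
    y_sorted = np.sort(y_scores)
    # percentile rank of each new X score within the X distribution
    ranks = np.searchsorted(x_sorted, x_new, side="right") / len(x_sorted)
    # Y score at the same percentile rank
    return np.quantile(y_sorted, np.clip(ranks, 0.0, 1.0))

# form Y is form X shifted up by 5 points, so equating should add about 5
rng = np.random.default_rng(0)
x = rng.normal(50, 10, 5000)
y = x + 5
print(equipercentile_equate(x, y, np.array([50.0])))
```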
Walker, Cindy M.; Zhang, Bo; Banks, Kathleen; Cappaert, Kevin – Educational and Psychological Measurement, 2012
The purpose of this simulation study was to establish general effect size guidelines for interpreting the results of differential bundle functioning (DBF) analyses using simultaneous item bias test (SIBTEST). Three factors were manipulated: number of items in a bundle, test length, and magnitude of uniform differential item functioning (DIF)…
Descriptors: Test Bias, Test Length, Simulation, Guidelines
Novick, Melvin R.; Lewis, Charles – 1974
In a program of Individually Prescribed Instruction (IPI), where a student's progress through each level of a program of study is governed by his performance on a test dealing with individual behavioral objectives, there is considerable value in keeping the number of items on each test at a minimum. The specified test length for each objective…
Descriptors: Behavioral Objectives, Criterion Referenced Tests, Cutting Scores, Elementary Secondary Education