Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 2 |
Since 2016 (last 10 years) | 3 |
Since 2006 (last 20 years) | 5 |
Descriptor
Difficulty Level | 10 |
Simulation | 10 |
Test Length | 10 |
Test Items | 8 |
Item Response Theory | 6 |
Sample Size | 5 |
Item Analysis | 4 |
Adaptive Testing | 3 |
Comparative Analysis | 3 |
Computer Assisted Testing | 3 |
Equated Scores | 3 |
Source
ProQuest LLC | 3 |
Applied Psychological… | 1 |
ETS Research Report Series | 1 |
Journal of Applied Measurement | 1 |
Quality Assurance in… | 1 |
Author
Beyerlein, Michael M. | 1 |
Cliff, Norman | 1 |
Cook, Linda L. | 1 |
Derek Sauder | 1 |
Dorans, Neil J. | 1 |
Guo, Hongwen | 1 |
Hambleton, Ronald K. | 1 |
Harris, Dickie A. | 1 |
Kárász, Judit T. | 1 |
Lee, Eunjung | 1 |
Lu, Ru | 1 |
Publication Type
Reports - Research | 6 |
Journal Articles | 4 |
Dissertations/Theses -… | 3 |
Reports - Evaluative | 1 |
Assessments and Surveys
Stanford Binet Intelligence… | 1 |
Sauder, Derek – ProQuest LLC, 2020
The Rasch model is commonly used to calibrate multiple choice items. However, the sample sizes needed to estimate the Rasch model can be difficult to attain (e.g., consider a small testing company trying to pretest new items). With small sample sizes, auxiliary information besides the item responses may improve estimation of the item parameters.…
Descriptors: Item Response Theory, Sample Size, Computation, Test Length
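
As a point of reference for the entry above, here is a minimal sketch of the Rasch item response function and a small simulated calibration sample. The sample sizes and parameter distributions are illustrative assumptions; the dissertation's estimation approach using auxiliary information is not reproduced.

```python
import numpy as np

def rasch_prob(theta, b):
    """Rasch model: probability of a correct response given ability theta
    and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Simulate a deliberately small calibration sample (sizes are assumptions).
rng = np.random.default_rng(0)
n_persons, n_items = 100, 20
theta = rng.normal(0, 1, n_persons)   # person abilities
b = rng.normal(0, 1, n_items)         # item difficulties

p = rasch_prob(theta[:, None], b[None, :])
responses = rng.binomial(1, p)        # 0/1 response matrix, persons x items
```
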
Kárász, Judit T.; Széll, Krisztián; Takács, Szabolcs – Quality Assurance in Education: An International Perspective, 2023
Purpose: Based on the general formula, which depends on the length and difficulty of the test, the number of respondents and the number of ability levels, this study aims to provide a closed formula for the adaptive tests with medium difficulty (probability of solution is p = 1/2) to determine the accuracy of the parameters for each item and in…
Descriptors: Test Length, Probability, Comparative Analysis, Difficulty Level
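
The truncated abstract does not reproduce the study's closed formula. As a hedged illustration of why medium difficulty (p = 1/2) is a convenient special case, the standard Rasch/Fisher-information argument gives item information p(1 - p) = 0.25, so the ability standard error for a test of length L is roughly 2/sqrt(L); this is background, not the paper's result.

```python
import math

def ability_se(test_length, p=0.5):
    """Approximate standard error of the ability estimate when each item is
    answered correctly with probability p (Rasch item information p*(1-p)).
    Illustrative only; not the closed formula derived in the paper."""
    info_per_item = p * (1 - p)
    return 1.0 / math.sqrt(test_length * info_per_item)

print(ability_se(25))   # 2 / sqrt(25) = 0.4 at medium difficulty
```
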
Lu, Ru; Guo, Hongwen; Dorans, Neil J. – ETS Research Report Series, 2021
Two families of analysis methods can be used for differential item functioning (DIF) analysis. One family is DIF analysis based on observed scores, such as the Mantel-Haenszel (MH) and the standardized proportion-correct metric for DIF procedures; the other is analysis based on latent ability, in which the statistic is a measure of departure from…
Descriptors: Robustness (Statistics), Weighted Scores, Test Items, Item Analysis
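
The Mantel-Haenszel procedure named above is an observed-score DIF method. A minimal sketch of the MH common odds ratio across matching-score strata, with the conventional ETS delta transform, follows; the 2x2 table layout and the toy counts are assumptions for illustration.

```python
import math

def mh_dif(tables):
    """Mantel-Haenszel common odds ratio and ETS D-DIF for one studied item.

    tables: list of 2x2 counts per matching-score stratum,
            ((ref_correct, ref_incorrect), (focal_correct, focal_incorrect)).
    """
    num = sum(a * d / (a + b + c + d) for (a, b), (c, d) in tables)
    den = sum(b * c / (a + b + c + d) for (a, b), (c, d) in tables)
    alpha_mh = num / den                  # common odds ratio
    d_dif = -2.35 * math.log(alpha_mh)    # ETS delta metric
    return alpha_mh, d_dif

# One stratum where reference and focal groups perform alike -> no DIF signal.
print(mh_dif([((30, 10), (30, 10))]))    # alpha ~= 1, D-DIF ~= 0
```
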
Lee, Eunjung – ProQuest LLC, 2013
The purpose of this research was to compare the equating performance of various equating procedures for multidimensional tests. To examine the various equating procedures, simulated data sets were used that were generated based on a multidimensional item response theory (MIRT) framework. Various equating procedures were examined, including…
Descriptors: Equated Scores, Tests, Comparative Analysis, Item Response Theory
Sunnassee, Devdass – ProQuest LLC, 2011
Small sample equating remains a largely unexplored area of research. This study attempts to fill in some of the research gaps via a large-scale, IRT-based simulation study that evaluates the performance of seven small-sample equating methods under various test characteristic and sampling conditions. The equating methods considered are typically…
Descriptors: Test Length, Test Format, Sample Size, Simulation
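
The truncated abstract does not list the seven small-sample methods evaluated. As background, two baselines that typically anchor such comparisons are identity and mean equating; the sketch below shows these under that assumption and does not claim they were among the methods studied.

```python
import numpy as np

def identity_equate(x):
    """Identity equating: treat new-form scores as already on the old-form scale."""
    return np.asarray(x, dtype=float)

def mean_equate(x, new_form_scores, old_form_scores):
    """Mean equating: shift new-form scores by the difference in form means."""
    shift = np.mean(old_form_scores) - np.mean(new_form_scores)
    return np.asarray(x, dtype=float) + shift

new = np.array([18, 22, 25, 30])
old = np.array([20, 24, 27, 33])
print(mean_equate(24, new, old))   # 24 + (26 - 23.75) = 26.25
```
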

Sunathong, Surintorn; Schumacker, Randall E.; Beyerlein, Michael M. – Journal of Applied Measurement, 2000
Studied five factors that can affect the equating of scores from two tests onto a common score scale through the simulation and equating of 4,860 item data sets. Findings indicate three statistically significant two-way interactions for common item length and test length, item difficulty standard deviation and item distribution type, and item…
Descriptors: Difficulty Level, Equated Scores, Interaction, Item Response Theory

Meijer, Rob R.; And Others – Applied Psychological Measurement, 1994
The power of the nonparametric person-fit statistic, U3, is investigated through simulations as a function of item characteristics, test characteristics, person characteristics, and the group to which examinees belong. Results suggest conditions under which relatively short tests can be used for person-fit analysis. (SLD)
Descriptors: Difficulty Level, Group Membership, Item Response Theory, Nonparametric Statistics
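
A common formulation of the U3 statistic investigated above (following van der Flier's definition as it is usually presented; treat the exact form here as an assumption) compares the log-odds-weighted response pattern with the best and worst patterns possible for the same total score.

```python
import numpy as np

def u3(responses, item_p):
    """Nonparametric person-fit statistic U3 (a common formulation).

    responses: 0/1 vector for one examinee.
    item_p: proportion-correct of each item in the calibration group.
    U3 = 0 for a perfect Guttman pattern, 1 for a fully reversed one.
    """
    order = np.argsort(-np.asarray(item_p))          # easiest items first
    x = np.asarray(responses)[order]
    p = np.asarray(item_p, dtype=float)[order]
    w = np.log(p / (1 - p))                          # item log-odds weights
    r = int(x.sum())
    if r == 0 or r == len(x):
        return 0.0                                   # fit is trivial/undefined
    w_obs = float(np.dot(x, w))
    w_max = float(w[:r].sum())                       # Guttman pattern
    w_min = float(w[-r:].sum())                      # reversed Guttman pattern
    return (w_max - w_obs) / (w_max - w_min)

p = [0.9, 0.8, 0.6, 0.4, 0.2]
print(u3([1, 1, 1, 0, 0], p))   # 0.0: perfectly Guttman-consistent
print(u3([0, 0, 1, 1, 1], p))   # 1.0: maximally aberrant for this score
```
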
Harris, Dickie A.; Penell, Roger J. – 1977
This study used a series of simulations to answer questions about the efficacy of adaptive testing raised by empirical studies. The first study showed that for reasonably high entry points, parameters estimated from paper-and-pencil test protocols cross-validated remarkably well to groups actually tested at a computer terminal. This suggested that…
Descriptors: Adaptive Testing, Computer Assisted Testing, Cost Effectiveness, Difficulty Level
Hambleton, Ronald K.; Cook, Linda L. – 1978
The purpose of the present research was to study, systematically, the "goodness-of-fit" of the one-, two-, and three-parameter logistic models. We studied, using computer-simulated test data, the effects of four variables: variation in item discrimination parameters, the average value of the pseudo-chance level parameters, test length,…
Descriptors: Career Development, Difficulty Level, Goodness of Fit, Item Analysis
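
For reference, the three logistic IRT models compared above differ only in which item parameters are free. The sketch below uses the conventional parameterization; the scaling and parameter values are assumptions, not taken from the report.

```python
import numpy as np

def logistic_irf(theta, a=1.0, b=0.0, c=0.0):
    """Item response function for the 1-, 2-, and 3-parameter logistic models.

    1PL: only b (difficulty) varies; a is fixed and c = 0.
    2PL: a (discrimination) and b vary; c = 0.
    3PL: a, b, and c (pseudo-chance level) all vary.
    """
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

theta = 0.0
print(logistic_irf(theta, b=0.5))                  # 1PL
print(logistic_irf(theta, a=1.5, b=0.5))           # 2PL
print(logistic_irf(theta, a=1.5, b=0.5, c=0.2))    # 3PL
```
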
Cliff, Norman; And Others – 1977
TAILOR is a computer program that uses the implied orders concept as the basis for computerized adaptive testing. The basic characteristics of TAILOR, which does not involve pretesting, are reviewed here and two studies of it are reported. One is a Monte Carlo simulation based on the four-parameter Birnbaum model and the other uses a matrix of…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Programs, Difficulty Level