Publication Date
In 2025: 1
Since 2024: 1
Since 2021 (last 5 years): 15
Since 2016 (last 10 years): 31
Since 2006 (last 20 years): 65
Descriptor
Ability: 236
Test Items: 236
Item Response Theory: 120
Estimation (Mathematics): 66
Adaptive Testing: 60
Computer Assisted Testing: 59
Simulation: 59
Difficulty Level: 55
Test Construction: 54
Comparative Analysis: 38
Scores: 30
Education Level
Elementary Education: 8
Secondary Education: 7
Junior High Schools: 5
Middle Schools: 5
Grade 7: 4
Early Childhood Education: 3
Grade 3: 2
Grade 4: 2
Grade 8: 2
Higher Education: 2
Intermediate Grades: 2
Audience
Researchers: 3
Students: 2
Practitioners: 1
Teachers: 1
Location
South Korea: 3
China: 1
Illinois: 1
Indonesia: 1
Japan: 1
Netherlands: 1
New Jersey: 1
New York: 1
South Carolina: 1
Turkey (Ankara): 1
United States: 1
Jianbin Fu; TsungHan Ho; Xuan Tan – Practical Assessment, Research & Evaluation, 2025
Item parameter estimation using an item response theory (IRT) model with fixed ability estimates is useful for equating with small samples on anchor items. The current study explores the impact of three ability estimation methods (weighted likelihood estimation [WLE], maximum a posteriori [MAP], and posterior ability distribution estimation [PST])…
Descriptors: Item Response Theory, Test Items, Computation, Equated Scores
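The abstract above mentions MAP ability estimation under an IRT model. As a minimal, illustrative sketch (not the study's implementation), MAP estimation of one examinee's ability under a 2PL model can be done by a grid search over the log-posterior; all item parameters below are made up for the example:

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def map_ability(responses, items, prior_mean=0.0, prior_sd=1.0):
    """Maximum a posteriori (MAP) ability estimate by grid search.

    responses: list of 0/1 scored item responses
    items: list of (a, b) discrimination/difficulty pairs
    A standard-normal prior keeps the estimate finite even for
    all-correct or all-incorrect response patterns.
    """
    grid = [i / 100.0 for i in range(-400, 401)]  # theta in [-4, 4]
    def log_posterior(theta):
        lp = -0.5 * ((theta - prior_mean) / prior_sd) ** 2  # log prior, up to a constant
        for u, (a, b) in zip(responses, items):
            p = p_2pl(theta, a, b)
            lp += math.log(p) if u == 1 else math.log(1.0 - p)
        return lp
    return max(grid, key=log_posterior)

# Even a perfect score yields a finite estimate because of the prior:
items = [(1.0, -1.0), (1.2, 0.0), (0.8, 1.0)]
print(map_ability([1, 1, 1], items))
```

In practice MAP is computed by Newton iterations rather than a grid, but the grid makes the posterior-mode idea explicit.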
Hess, Jessica – ProQuest LLC, 2023
This study was conducted to further research into the impact of student-group item parameter drift (SIPD), referred to as subpopulation item parameter drift in previous research, on ability estimates and proficiency classification accuracy when it occurs in the discrimination parameter of a 2PL item response theory (IRT) model. Using Monte…
Descriptors: Test Items, Groups, Ability, Item Response Theory
Rios, Joseph – Applied Measurement in Education, 2022
To mitigate the deleterious effects of rapid guessing (RG) on ability estimates, several rescoring procedures have been proposed. Underlying many of these procedures is the assumption that RG is accurately identified. At present, there have been minimal investigations examining the utility of rescoring approaches when RG is misclassified, and…
Descriptors: Accuracy, Guessing (Tests), Scoring, Classification
Pierce, Corey D.; Epstein, Michael H.; Wood, Matthew D. – Journal of Emotional and Behavioral Disorders, 2023
Strength-based assessment has achieved acceptance from educational, mental health, and social service professionals as a means of measuring the emotional and behavioral strengths of children. Several standardized, norm-referenced tests have been developed to assess these strengths; however, the primary mode of assessment is via informal interviews of…
Descriptors: Behavior Rating Scales, Content Validity, Psychometrics, Mental Health
Semih Asiret; Seçil Ömür Sünbül – International Journal of Psychology and Educational Studies, 2023
This study aimed to examine the effect of missing data of different patterns and sizes on test equating methods under the NEAT design for different factors. For this purpose, factors such as sample size, average difficulty level difference between the test forms, difference between the ability distribution,…
Descriptors: Research Problems, Data, Test Items, Equated Scores
Kim, Sooyeon; Walker, Michael E. – Educational Measurement: Issues and Practice, 2022
Test equating requires collecting data to link the scores from different forms of a test. Problems arise when equating samples are not equivalent and the test forms to be linked share no common items by which to measure or adjust for the group nonequivalence. Using data from five operational test forms, we created five pairs of research forms for…
Descriptors: Ability, Tests, Equated Scores, Testing Problems
Stemler, Steven E.; Naples, Adam – Practical Assessment, Research & Evaluation, 2021
When students receive the same score on a test, does that mean they know the same amount about the topic? The answer to this question is more complex than it may first appear. This paper compares classical and modern test theories in terms of how they estimate student ability. Crucial distinctions between the aims of Rasch Measurement and IRT are…
Descriptors: Item Response Theory, Test Theory, Ability, Computation
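The Rasch-versus-IRT distinction raised above can be made concrete: under a Rasch model the raw score is a sufficient statistic for ability, so two examinees with the same number correct get the same estimate, while under a 2PL model the particular pattern of responses matters. A hedged sketch with made-up item parameters and a simple grid-search ML estimator (not the paper's method):

```python
import math

def irt_prob(theta, a, b):
    """2PL response probability; a = 1 for all items gives a Rasch model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def ml_ability(responses, items):
    """Maximum-likelihood ability by grid search.

    Intended for mixed response patterns; for all-correct or all-incorrect
    patterns the true MLE diverges and the grid merely clamps it.
    """
    grid = [i / 100.0 for i in range(-400, 401)]
    def loglik(theta):
        return sum(math.log(irt_prob(theta, a, b)) if u
                   else math.log(1.0 - irt_prob(theta, a, b))
                   for u, (a, b) in zip(responses, items))
    return max(grid, key=loglik)

rasch = [(1.0, -1.0), (1.0, 0.0), (1.0, 1.0)]          # equal discriminations
two_pl = [(0.5, -1.0), (1.0, 0.0), (2.0, 1.0)]         # varying discriminations

# Two response patterns with the same raw score of 1:
p1, p2 = [1, 0, 0], [0, 0, 1]
print(ml_ability(p1, rasch), ml_ability(p2, rasch))    # same: raw score suffices under Rasch
print(ml_ability(p1, two_pl), ml_ability(p2, two_pl))  # differ: pattern matters under 2PL
```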
Raykov, Tenko – Measurement: Interdisciplinary Research and Perspectives, 2023
This software review discusses the capabilities of Stata to conduct item response theory modeling. The commands needed for fitting the popular one-, two-, and three-parameter logistic models are initially discussed. The procedure for testing the discrimination parameter equality in the one-parameter model is then outlined. The commands for fitting…
Descriptors: Item Response Theory, Models, Comparative Analysis, Item Analysis
Cuhadar, Ismail; Binici, Salih – Educational Measurement: Issues and Practice, 2022
This study employs the 4-parameter logistic item response theory model to account for the unexpected incorrect responses or slipping effects observed in a large-scale Algebra 1 End-of-Course assessment, including several innovative item formats. It investigates whether modeling the misfit at the upper asymptote has any practical impact on the…
Descriptors: Item Response Theory, Measurement, Student Evaluation, Algebra
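The 4-parameter logistic model mentioned above extends the 3PL by an upper asymptote below 1, so that even high-ability examinees retain some chance of an unexpected incorrect response ("slipping"). A minimal sketch of the item response function, with illustrative parameter values:

```python
import math

def p_4pl(theta, a, b, c, d):
    """4PL item response function.

    c: lower asymptote (guessing), d: upper asymptote (1 - slipping rate).
    Setting d = 1 recovers the 3PL model.
    """
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))

# With d = 0.95, even a very able examinee tops out below 95% correct:
print(p_4pl(theta=5.0, a=1.5, b=0.0, c=0.2, d=0.95))
```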
DeMars, Christine E. – Applied Measurement in Education, 2021
Estimation of parameters for the many-facets Rasch model requires that conditional on the values of the facets, such as person ability, item difficulty, and rater severity, the observed responses within each facet are independent. This requirement has often been discussed for the Rasch models and 2PL and 3PL models, but it becomes more complex…
Descriptors: Item Response Theory, Test Items, Ability, Scores
The Effect of Item Pools of Different Strengths on the Test Results of Computerized-Adaptive Testing
Kezer, Fatih – International Journal of Assessment Tools in Education, 2021
Item response theory (IRT) offers important advantages for exams administered, or to be administered, digitally. For computerized adaptive tests to make valid and reliable predictions supported by IRT, good-quality item pools should be used. This study examines how adaptive test applications vary across item pools consisting of items…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Item Response Theory
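A common item-selection rule in the CAT setting described above is maximum Fisher information at the current ability estimate; a strong pool spreads informative items across the ability range, while a weak pool leaves some regions poorly covered. A small illustrative sketch (the pool and parameters are made up, and operational CAT systems add exposure control and content constraints):

```python
import math

def p_2pl(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, pool, administered):
    """Pick the unadministered item with maximum information at the current theta."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, *pool[i]))

# Pool of (a, b) pairs with difficulties spread over the theta range:
pool = [(1.2, -2.0), (1.0, -1.0), (1.5, 0.0), (1.1, 1.0), (1.3, 2.0)]
print(next_item(0.0, pool, administered=set()))  # the b = 0 item is most informative at theta = 0
```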
Akin-Arikan, Çigdem; Gelbal, Selahattin – Eurasian Journal of Educational Research, 2021
Purpose: This study aims to compare the performances of Item Response Theory (IRT) equating and kernel equating (KE) methods based on equating errors (RMSD) and standard error of equating (SEE) using the anchor item nonequivalent groups design. Method: Within this scope, a set of conditions, including ability distribution, type of anchor items…
Descriptors: Equated Scores, Item Response Theory, Test Items, Statistical Analysis
Bjermo, Jonas; Miller, Frank – Applied Measurement in Education, 2021
In recent years, interest in measuring growth in student ability in various subjects between different grades in school has increased. Good precision in the estimated growth is therefore important. This paper aims to compare estimation methods and test designs with respect to the precision and bias of the estimated growth of mean ability…
Descriptors: Scaling, Ability, Computation, Test Items
Kim, Hun Ju; Lee, Sung Ja; Kam, Kyung-Yoon – International Journal of Disability, Development and Education, 2023
This study verified validity and reliability of the School Function Assessment (SFA) using Rasch analysis in South Korean school-based occupational therapy sites serving children with intellectual disabilities and others. Participants were 103 elementary school children (grades 1 through 6) with disabilities. Rasch analysis revealed several…
Descriptors: Foreign Countries, Test Validity, Test Reliability, Occupational Therapy
Kim, Seonghoon; Kolen, Michael J. – Applied Measurement in Education, 2019
In applications of item response theory (IRT), fixed parameter calibration (FPC) has been used to estimate the item parameters of a new test form on the existing ability scale of an item pool. The present paper presents an application of FPC to test data from multiple examinee groups that are linked to the item pool via anchor items, and investigates…
Descriptors: Item Response Theory, Item Banks, Test Items, Computation
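A simplified illustration of the calibration setting above: if examinee abilities are treated as known and held fixed, a new item's difficulty can be estimated by maximizing the likelihood of its responses. This toy Rasch sketch is a stand-in for the idea of placing new items on an existing ability scale, not the paper's FPC procedure:

```python
import math

def p_rasch(theta, b):
    """Rasch probability of a correct response to an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def calibrate_difficulty(thetas, responses):
    """Grid-search ML estimate of one item's difficulty with abilities fixed.

    thetas: known ability values, one per examinee
    responses: 0/1 responses of those examinees to the new item
    """
    grid = [i / 100.0 for i in range(-400, 401)]
    def loglik(b):
        return sum(math.log(p_rasch(t, b)) if u
                   else math.log(1.0 - p_rasch(t, b))
                   for t, u in zip(thetas, responses))
    return max(grid, key=loglik)

# Examinees below theta = 0 missed the item, those at or above it succeeded,
# so the estimated difficulty lands somewhat below 0:
thetas = [-1.0, -0.5, 0.0, 0.5, 1.0]
responses = [0, 0, 1, 1, 1]
print(calibrate_difficulty(thetas, responses))
```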