Inga Laukaityte; Marie Wiberg – Practical Assessment, Research & Evaluation, 2024
The overall aim was to examine the effects of differences in group ability and features of the anchor test form on equating bias and the standard error of equating (SEE), using both real and simulated data. Chained kernel equating, poststratification kernel equating, and circle-arc equating were studied. A college admissions test with four different…
Descriptors: Ability Grouping, Test Items, College Entrance Examinations, High Stakes Tests
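The chained approach named above can be illustrated with its simplest linear relative. The sketch below is a hedged, hypothetical example, not the kernel method the paper studies: it links form X to the anchor A in the group that took X, then the anchor to form Y in the group that took Y.

```python
from statistics import mean, pstdev

# Hedged sketch of chained *linear* equating through an anchor test --
# a simpler relative of the chained kernel method studied above.
# Group 1 takes form X plus anchor A; group 2 takes form Y plus anchor A.

def chained_linear(x, X1, A1, Y2, A2):
    """Equate score x on form X to the form Y scale via the anchor."""
    # Step 1: linear link X -> A, estimated in group 1.
    a = mean(A1) + pstdev(A1) / pstdev(X1) * (x - mean(X1))
    # Step 2: linear link A -> Y, estimated in group 2.
    return mean(Y2) + pstdev(Y2) / pstdev(A2) * (a - mean(A2))

# With group 2 scoring one point higher on Y than on the anchor, a score
# of 2.5 on X maps to 3.5 on the Y scale.
print(chained_linear(2.5, [1, 2, 3, 4], [1, 2, 3, 4], [2, 3, 4, 5], [1, 2, 3, 4]))
```

Bias and SEE for such a function are typically estimated by applying it across many simulated replications, as in the study's simulation design.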
Li, Xinru; Dusseldorp, Elise; Meulman, Jacqueline J. – Research Synthesis Methods, 2019
In meta-analytic studies, there are often multiple moderators available (e.g., study characteristics). In such cases, traditional meta-analysis methods often lack sufficient power to investigate interaction effects between moderators, especially higher-order interactions. To overcome this problem, meta-CART was proposed: an approach that applies…
Descriptors: Correlation, Meta Analysis, Identification, Testing
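The tree-based idea behind meta-CART can be sketched in a few lines. The function below is a hypothetical illustration, not the published algorithm: it finds the single binary split on one moderator that minimizes the squared error of the study effect sizes; a full meta-CART applies such splits recursively, with meta-analytic weighting.

```python
def best_split(moderator, effects):
    """Best binary split of studies on one moderator: (threshold, SSE)."""
    pairs = sorted(zip(moderator, effects))
    best_thr, best_sse = None, float("inf")
    for cut in range(1, len(pairs)):
        left = [e for _, e in pairs[:cut]]
        right = [e for _, e in pairs[cut:]]
        # Sum of squared errors around each side's mean effect size.
        sse = (sum((e - sum(left) / len(left)) ** 2 for e in left)
               + sum((e - sum(right) / len(right)) ** 2 for e in right))
        if sse < best_sse:
            best_thr = (pairs[cut - 1][0] + pairs[cut][0]) / 2
            best_sse = sse
    return best_thr, best_sse

# Effect sizes jump once the moderator exceeds ~6: the split lands there.
print(best_split([1, 2, 3, 10, 11, 12], [0.1, 0.1, 0.1, 0.5, 0.5, 0.5]))
```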
Lu, Ru; Guo, Hongwen; Dorans, Neil J. – ETS Research Report Series, 2021
Two families of analysis methods can be used for differential item functioning (DIF) analysis. One family is DIF analysis based on observed scores, such as the Mantel-Haenszel (MH) and the standardized proportion-correct metric for DIF procedures; the other is analysis based on latent ability, in which the statistic is a measure of departure from…
Descriptors: Robustness (Statistics), Weighted Scores, Test Items, Item Analysis
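The observed-score family mentioned above includes the Mantel-Haenszel procedure; a minimal sketch of its common odds ratio follows, on invented data rather than the report's analysis. Each stratum is a 2x2 table at one matched-score level; values near 1 suggest no DIF, values far from 1 flag the item.

```python
def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio across matched-score strata."""
    num = den = 0.0
    for a, b, c, d in strata:  # a, b: reference correct/incorrect; c, d: focal
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

# Identical response patterns in both groups: odds ratio 1 (no DIF).
print(mh_odds_ratio([(30, 10, 30, 10), (20, 20, 20, 20)]))  # -> 1.0
# Reference group does relatively better: odds ratio 3 flags possible DIF.
print(mh_odds_ratio([(30, 10, 20, 20)]))  # -> 3.0
```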
Zhang, Jinming; Li, Jie – Journal of Educational Measurement, 2016
An IRT-based sequential procedure is developed to monitor items for enhancing test security. The procedure uses a series of statistical hypothesis tests to examine whether the statistical characteristics of each item under inspection have changed significantly during computerized adaptive testing (CAT) administration. This procedure is compared with a previously developed…
Descriptors: Computer Assisted Testing, Test Items, Difficulty Level, Item Response Theory
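The sequential-testing idea can be illustrated with a much simpler classical tool. The sketch below is a hypothetical sequential probability ratio test (SPRT) on raw correct-rates, not the IRT-based statistic the paper develops: it flags an item if its observed correct-response rate drifts from a baseline p0 toward an elevated p1, as might happen when an item is compromised.

```python
import math

def sprt_flag(responses, p0=0.5, p1=0.7, alpha=0.05, beta=0.05):
    """Return 'compromised', 'ok', or 'continue' after a response stream."""
    upper = math.log((1 - beta) / alpha)   # decide for p1 (item drifted)
    lower = math.log(beta / (1 - alpha))   # decide for p0 (item stable)
    llr = 0.0
    for r in responses:  # r is 1 for a correct response, 0 otherwise
        llr += math.log((p1 if r else 1 - p1) / (p0 if r else 1 - p0))
        if llr >= upper:
            return "compromised"
        if llr <= lower:
            return "ok"
    return "continue"

# A long run of correct answers crosses the upper boundary.
print(sprt_flag([1] * 30))  # -> compromised
```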
Kim, Jihye; Oshima, T. C. – Educational and Psychological Measurement, 2013
In a typical differential item functioning (DIF) analysis, a significance test is conducted for each item. As a test consists of multiple items, such multiple testing may increase the possibility of making a Type I error at least once. The goal of this study was to investigate how to control a Type I error rate and power using adjustment…
Descriptors: Test Bias, Test Items, Statistical Analysis, Error of Measurement
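Two standard adjustments of the kind such studies compare can be sketched directly (a generic illustration, not the authors' exact procedures): Bonferroni controls the familywise Type I error rate, while Benjamini-Hochberg controls the false discovery rate and typically retains more power across many DIF tests.

```python
def bonferroni(pvals):
    """Bonferroni-adjusted p-values, capped at 1."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg step-up adjusted p-values (FDR control)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    prev = 1.0
    for rank in range(m - 1, -1, -1):  # step up from the largest p-value
        i = order[rank]
        prev = min(prev, pvals[i] * m / (rank + 1))
        adjusted[i] = prev
    return adjusted

pvals = [0.001, 0.01, 0.04, 0.20]
print(bonferroni(pvals))          # each p multiplied by m = 4
print(benjamini_hochberg(pvals))  # less conservative for mid-ranked items
```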
Woods, Carol M. – Applied Psychological Measurement, 2011
Differential item functioning (DIF) occurs when an item on a test, questionnaire, or interview has different measurement properties for one group of people versus another, irrespective of true group-mean differences on the constructs being measured. This article is focused on item response theory based likelihood ratio testing for DIF (IRT-LR or…
Descriptors: Simulation, Item Response Theory, Testing, Questionnaires
Woods, Carol M. – Applied Psychological Measurement, 2009
Differential item functioning (DIF) occurs when items on a test or questionnaire have different measurement properties for one group of people versus another, irrespective of group-mean differences on the construct. Methods for testing DIF require matching members of different groups on an estimate of the construct. Preferably, the estimate is…
Descriptors: Test Results, Testing, Item Response Theory, Test Bias
Olejnik, Stephen F.; Algina, James – 1986
Sampling distributions for ten tests for comparing population variances in a two group design were generated for several combinations of equal and unequal sample sizes, population means, and group variances when distributional forms differed. The ten procedures included: (1) O'Brien's (OB); (2) O'Brien's with adjusted degrees of freedom; (3)…
Descriptors: Error of Measurement, Evaluation Methods, Measurement Techniques, Nonparametric Statistics
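One member of this family of variance-homogeneity tests is easy to sketch: the Brown-Forsythe statistic, a one-way ANOVA F computed on absolute deviations from each group's median. This is a generic illustration; the 1986 paper compares ten such procedures, and this form may differ in detail from the variants it examines.

```python
from statistics import mean, median

def brown_forsythe_F(*groups):
    """Brown-Forsythe F statistic for homogeneity of variance."""
    # Absolute deviations from each group's median.
    z = [[abs(x - median(g)) for x in g] for g in groups]
    k = len(z)
    n = sum(len(g) for g in z)
    zbar = [mean(g) for g in z]
    grand = sum(sum(g) for g in z) / n
    # One-way ANOVA on the deviations: between- vs within-group mean squares.
    between = sum(len(g) * (m - grand) ** 2 for g, m in zip(z, zbar)) / (k - 1)
    within = sum((x - m) ** 2 for g, m in zip(z, zbar) for x in g) / (n - k)
    return between / within

# Identical groups give F = 0; a far more spread-out group inflates F.
print(brown_forsythe_F([1, 2, 3, 4, 5], [1, 2, 3, 4, 5]))    # -> 0.0
print(brown_forsythe_F([1, 2, 3, 4, 5], [-10, -5, 0, 5, 10]))
```

The resulting F would be referred to an F distribution with (k - 1, n - k) degrees of freedom to obtain a p-value.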