Publication Date
  In 2025: 0
  Since 2024: 1
  Since 2021 (last 5 years): 3
  Since 2016 (last 10 years): 3
  Since 2006 (last 20 years): 6
Descriptor
  Item Response Theory: 6
  Evaluation Criteria: 4
  Models: 3
  Simulation: 3
  Accuracy: 2
  Item Analysis: 2
  Rating Scales: 2
  Validity: 2
  Ability: 1
  Admission Criteria: 1
  Algorithms: 1
Source
  Educational and Psychological Measurement: 6
Author
  A. Corinne Huggins-Manley: 1
  Chang, Wanchen: 1
  Dodd, Barbara G.: 1
  Dubravka Svetina Valdivia: 1
  Eric A. Wright: 1
  Jane Rogers, H.: 1
  Kyllonen, Patrick: 1
  Ling, Guangming: 1
  Liu, Ou Lydia: 1
  Liu, Xiaowen: 1
  M. David Miller: 1
Publication Type
  Journal Articles: 6
  Reports - Research: 4
  Reports - Evaluative: 2
Education Level
  Elementary Education: 1
  Higher Education: 1
  Postsecondary Education: 1
Location
  Germany: 1
Sijia Huang; Dubravka Svetina Valdivia – Educational and Psychological Measurement, 2024
Identifying items with differential item functioning (DIF) in an assessment is a crucial step for achieving equitable measurement. One critical issue that has not been fully addressed in existing studies is how DIF items can be detected when data are multilevel. In the present study, we introduced a Lord's Wald X² test-based…
Descriptors: Item Analysis, Item Response Theory, Algorithms, Accuracy
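The entry above builds on Lord's Wald X² test, which compares an item's parameter estimates across groups. The sketch below shows only the generic, single-level form of that statistic for one 2PL item; the estimates and covariance matrices are invented for illustration, and the multilevel procedure introduced in the article is not reproduced here.

```python
# Minimal sketch of a Lord's Wald chi-square DIF test for a single 2PL item,
# assuming linked parameter estimates and their sampling covariance matrices
# are already available for the reference and focal groups. All numbers are
# hypothetical.
import numpy as np
from scipy import stats

# (discrimination a, difficulty b) estimated separately in each group
est_ref = np.array([1.20, -0.35])
est_foc = np.array([1.05,  0.10])

# Sampling covariance matrices of the two estimate vectors
cov_ref = np.array([[0.010, 0.002],
                    [0.002, 0.015]])
cov_foc = np.array([[0.012, 0.001],
                    [0.001, 0.018]])

diff = est_ref - est_foc
wald = diff @ np.linalg.inv(cov_ref + cov_foc) @ diff  # X^2 statistic
df = diff.size                                         # one df per compared parameter
p_value = stats.chi2.sf(wald, df)

print(f"Wald X^2 = {wald:.2f}, df = {df}, p = {p_value:.4f}")
```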
Liu, Xiaowen; Jane Rogers, H. – Educational and Psychological Measurement, 2022
Test fairness is critical to the validity of group comparisons involving gender, ethnicity, culture, or treatment conditions. Detection of differential item functioning (DIF) is one component of efforts to ensure test fairness. The current study compared four treatments for items that have been identified as showing DIF: deleting, ignoring,…
Descriptors: Item Analysis, Comparative Analysis, Culture Fair Tests, Test Validity
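Only two of the four treatments compared in this study are visible in the excerpt. The toy sketch below illustrates just those two, "deleting" (rescore without flagged items) and "ignoring" (keep all items), on a hypothetical response matrix; the flagged item indices and data are made up.

```python
# Minimal sketch of the two DIF treatments named in the excerpt above.
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(200, 20))  # 200 examinees, 20 dichotomous items
flagged = [3, 11]                               # items previously flagged as showing DIF

ignore_scores = responses.sum(axis=1)           # "ignoring": score with every item
keep = np.delete(np.arange(responses.shape[1]), flagged)
delete_scores = responses[:, keep].sum(axis=1)  # "deleting": drop flagged items, rescore

print(ignore_scores[:5], delete_scores[:5])
```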
Ziying Li; A. Corinne Huggins-Manley; Walter L. Leite; M. David Miller; Eric A. Wright – Educational and Psychological Measurement, 2022
The unstructured multiple-attempt (MA) item response data in virtual learning environments (VLEs) often come from student-selected assessment data sets, which include missing data, single-attempt responses, multiple-attempt responses, and unknown growth in ability across attempts, leading to a complicated scenario for using this kind of…
Descriptors: Sequential Approach, Item Response Theory, Data, Simulation
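A small sketch may help picture the data layout the excerpt describes: one record per (student, item, attempt), with single attempts, repeated attempts, and unobserved attempts mixed together. The records and column names below are hypothetical, not the study's data.

```python
# Minimal sketch of unstructured multiple-attempt (MA) response data.
import pandas as pd

long = pd.DataFrame({
    "student": ["s1", "s1", "s1", "s2", "s3", "s3"],
    "item":    ["i1", "i1", "i2", "i1", "i2", "i2"],
    "attempt": [1, 2, 1, 1, 1, 2],
    "correct": [0, 1, 1, 1, 0, 0],
})

# Wide layout: one row per student, one column per (item, attempt);
# NaN marks attempts that were never made (the missing-data pattern).
wide = long.pivot_table(index="student", columns=["item", "attempt"],
                        values="correct")
print(wide)
```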
Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G. – Educational and Psychological Measurement, 2013
Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. M. S. Johnson suggested that the problems in selecting the more parameterized models may stem from the low…
Descriptors: Item Response Theory, Models, Selection Criteria, Accuracy
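The excerpt does not list which selection criteria the authors examined, so the sketch below uses two common information criteria (AIC and BIC) purely as stand-ins to show how such criteria trade model fit against parameterization; the log-likelihoods and parameter counts are invented.

```python
# Minimal sketch of information criteria often used for IRT model comparison.
import math

def aic(log_lik, n_params):
    return -2 * log_lik + 2 * n_params

def bic(log_lik, n_params, n_obs):
    return -2 * log_lik + n_params * math.log(n_obs)

candidates = {
    # model name: (maximized log-likelihood, number of free parameters)
    "less parameterized": (-10250.0, 60),
    "more parameterized": (-10205.0, 120),
}
n_examinees = 1000

for name, (ll, k) in candidates.items():
    print(f"{name}: AIC = {aic(ll, k):.1f}, BIC = {bic(ll, k, n_examinees):.1f}")
```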
Plieninger, Hansjörg; Meiser, Thorsten – Educational and Psychological Measurement, 2014
Response styles, the tendency to respond to Likert-type items irrespective of content, are a widely known threat to the reliability and validity of self-report measures. However, it is still debated how to measure and control for response styles such as extreme responding. Recently, multiprocess item response theory models have been proposed that…
Descriptors: Validity, Item Response Theory, Rating Scales, Models
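Multiprocess (IRTree-type) models typically decompose each Likert response into pseudo-items before fitting IRT models to them. The sketch below shows one common decomposition of a 5-point response into midpoint, direction, and extremity indicators; it is illustrative only, not necessarily the exact model proposed in the article.

```python
# Minimal sketch of a pseudo-item decomposition used in multiprocess IRT
# models for response styles: midpoint use, direction, and extremity.
def decompose(response):  # response coded 1..5
    midpoint = 1 if response == 3 else 0
    direction = None if midpoint else (1 if response > 3 else 0)      # agree vs. disagree
    extreme = None if midpoint else (1 if response in (1, 5) else 0)  # extreme category used?
    return {"midpoint": midpoint, "direction": direction, "extreme": extreme}

for r in range(1, 6):
    print(r, decompose(r))
```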
Liu, Ou Lydia; Minsky, Jennifer; Ling, Guangming; Kyllonen, Patrick – Educational and Psychological Measurement, 2009
In an effort to standardize academic application procedures, the authors developed the Standardized Letters of Recommendation (SLR) to capture important cognitive and noncognitive qualities of graduate school candidates. The SLR, which consists of seven scales, is applied to an intern-selection scenario. Both professor ratings (n = 414) during the…
Descriptors: Rating Scales, Reliability, Validity, Item Response Theory
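As a generic illustration of the reliability evidence typically reported for rating scales such as the SLR's seven scales, the sketch below computes Cronbach's alpha on simulated ratings; the item count per scale and the data are hypothetical and do not reproduce the article's analyses.

```python
# Minimal sketch of an internal-consistency (Cronbach's alpha) check.
import numpy as np

def cronbach_alpha(ratings):
    """ratings: examinees x items matrix."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
true_score = rng.normal(size=(414, 1))                        # n = 414 raters, as in the excerpt
ratings = true_score + rng.normal(scale=0.8, size=(414, 4))   # 4 items on one hypothetical scale
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```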