ERIC Number: ED661565
Record Type: Non-Journal
Publication Date: 2022
Pages: 30
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
Available Date: N/A
Predictive Fit Metrics for Item Response Models
Ben Stenhaug; Ben Domingue
Grantee Submission
The fit of an item response model is typically conceptualized as whether a given model could have generated the data. We advocate for an alternative view of fit, "predictive fit", based on the model's ability to predict new data. We derive two predictive fit metrics for item response models that assess how well an estimated item response model (i.e., a data analysis model) fits the data-generating model. These metrics are based on long-run out-of-sample predictive performance (i.e., if the data-generating model produced infinite amounts of data, what is the quality of a data analysis model's predictions on average?). The fundamental difference between these metrics is the definition of out-of-sample on which they are built, which is complicated by item responses being cross-classified within items and persons. Via simulation studies, we show (1) that considering persons to be out-of-sample--as psychometricians often do--favors more parsimonious models; (2) that when data are generated from a 3PL model, a 3PL data analysis model tends to make better predictions than a 2PL data analysis model with larger sample sizes, lower average ability, and lower average discrimination; and (3) that multidimensional models have better predictive fit when the correlation between ability factors is lower. We discuss implications for cross-validating item response models in practice. [This paper was published in "Applied Psychological Measurement" v46 n2 p136-155 2022.]
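The core idea in the abstract--scoring a data analysis model by the log predictive density of held-out responses, with persons treated as out-of-sample--can be illustrated with a minimal sketch. This is not the authors' implementation: it simulates responses from a known 3PL data-generating model, then compares held-out predictions from the 3PL item response function against a misspecified 2PL (guessing fixed at zero), skipping parameter estimation entirely. All variable names and simulation settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_3pl(theta, a, b, c):
    # 3PL item response function: P(correct) = c + (1 - c) * logistic(a * (theta - b))
    # theta: person abilities (n,), a/b/c: item discrimination/difficulty/guessing (m,)
    return c + (1 - c) / (1 + np.exp(-a * (theta[:, None] - b)))

# Simulate a 3PL data-generating model (illustrative settings)
n_persons, n_items = 500, 20
theta = rng.normal(0, 1, n_persons)
a = rng.uniform(0.8, 2.0, n_items)
b = rng.normal(0, 1, n_items)
c = np.full(n_items, 0.2)

X = (rng.random((n_persons, n_items)) < p_3pl(theta, a, b, c)).astype(int)

# Persons-as-out-of-sample: hold out a random subset of persons
holdout = rng.random(n_persons) < 0.2
X_test, theta_test = X[holdout], theta[holdout]

def heldout_loglik(P_hat, X):
    # Average log predictive density of held-out item responses
    eps = 1e-12
    return np.mean(X * np.log(P_hat + eps) + (1 - X) * np.log(1 - P_hat + eps))

# Predictive fit of the correctly specified 3PL vs a 2PL (c forced to 0)
ll_3pl = heldout_loglik(p_3pl(theta_test, a, b, c), X_test)
ll_2pl = heldout_loglik(p_3pl(theta_test, a, b, np.zeros(n_items)), X_test)
print(f"held-out log-lik per response: 3PL={ll_3pl:.4f}, 2PL={ll_2pl:.4f}")
```

Because the generating model includes nonzero guessing, the 3PL predictions score higher (less negative) held-out log-likelihood here; in the paper's actual simulations, where parameters must be estimated from finite training data, the comparison depends on sample size, average ability, and average discrimination, as item (2) of the abstract notes.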
Publication Type: Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: Institute of Education Sciences (ED)
Authoring Institution: N/A
IES Funded: Yes
Grant or Contract Numbers: R305B140009
Author Affiliations: N/A