Showing 1 to 15 of 16 results
Peer reviewed
E. Damiano D'Urso; Jesper Tijmstra; Jeroen K. Vermunt; Kim De Roover – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Measurement invariance (MI) is required for validly comparing latent constructs measured by multiple ordinal self-report items. Non-invariances may occur when disregarding (group differences in) an acquiescence response style (ARS; an agreeing tendency regardless of item content). If non-invariance results solely from neglecting ARS, one should…
Descriptors: Error of Measurement, Structural Equation Models, Construct Validity, Measurement Techniques
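One way to formalize the role of ARS (a hedged sketch of a common specification, not necessarily the one used in the article) is an ordinal factor model with an extra acquiescence factor that loads equally on every item regardless of keying:

    y*_ij = λ_j η_i + 1 · a_i + ε_ij,    x_ij = c  if  τ_{j,c-1} < y*_ij ≤ τ_{j,c},

where η_i is the content factor, a_i the acquiescence factor with all loadings fixed to 1, and the τ's are the category thresholds. Measurement invariance concerns equality of the λ_j and τ_{j,c} across groups; if groups differ only in the distribution of a_i, apparent non-invariance can appear when a_i is left out of the model.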
Xue Zhang; Chun Wang – Grantee Submission, 2022
Item-level fit analysis not only serves as a complementary check to global fit analysis but is also essential in scale development, because the fit results guide item revision and/or deletion (Liu & Maydeu-Olivares, 2014). During data collection, missing responses are likely to occur for various reasons. Chi-square-based item fit…
Descriptors: Goodness of Fit, Item Response Theory, Scores, Test Length
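To make the idea concrete, here is a minimal, illustrative sketch of a chi-square-based item-fit check in the spirit of such statistics, grouping examinees by rest score and skipping missing responses. The function name and handling of missingness are assumptions for illustration only; dealing with missing data properly is exactly what the article investigates.

    import numpy as np

    def chisq_item_fit(responses, expected_prob, item):
        """responses: (n_persons, n_items) array of 0/1 scores, NaN = missing;
        expected_prob: model-implied P(correct on `item`) for each person."""
        rest = np.nansum(np.delete(responses, item, axis=1), axis=1)  # rest score
        stat, n_groups = 0.0, 0
        for s in np.unique(rest):
            mask = (rest == s) & ~np.isnan(responses[:, item])
            n_s = mask.sum()
            if n_s == 0:
                continue
            obs = responses[mask, item].mean()    # observed proportion correct
            exp = expected_prob[mask].mean()      # model-expected proportion
            if 0.0 < exp < 1.0:
                stat += n_s * (obs - exp) ** 2 / (exp * (1.0 - exp))
                n_groups += 1
        # A real application collapses sparse score groups and adjusts the
        # degrees of freedom for estimated item parameters; this sketch does not.
        return stat, n_groups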
Peer reviewed
Wind, Stefanie A. – Measurement: Interdisciplinary Research and Perspectives, 2020
Rater fit analyses provide insight into the degree to which rater judgments correspond to expected properties, as defined within a measurement framework. Parametric models such as the Rasch model provide a useful framework for evaluating rating quality; however, these models are not appropriate for all assessment contexts. The purpose of this…
Descriptors: Evaluators, Goodness of Fit, Simulation, Psychometrics
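For context (a sketch of the standard parametric framework, not the alternative the article develops), rater-mediated ratings are often modeled with a many-facet Rasch model,

    log( P_nijk / P_nij(k-1) ) = θ_n - δ_i - λ_j - τ_k,

where θ_n is examinee ability, δ_i item difficulty, λ_j rater severity, and τ_k a rating-scale threshold. Rater fit is then summarized with mean-square statistics of standardized residuals (e.g., outfit = the mean of (x - E[x])² / Var(x) over a rater's ratings). When Rasch assumptions are untenable, such parametric fit indices can mislead, which motivates considering other approaches.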
Peer reviewed
Raborn, Anthony W.; Leite, Walter L.; Marcoulides, Katerina M. – International Educational Data Mining Society, 2019
Short forms of psychometric scales have been commonly used in educational and psychological research to reduce the burden of test administration. However, it is challenging to select items for a short form that preserve the validity and reliability of the scores of the original scale. This paper presents and evaluates multiple automated methods…
Descriptors: Psychometrics, Measures (Individuals), Mathematics, Heuristics
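As a point of reference only (the automated methods evaluated in the paper are more sophisticated than this), the underlying selection problem can be illustrated with a naive greedy heuristic that drops items while trying to preserve internal consistency. Function names and the alpha-only criterion are assumptions for illustration; preserving validity evidence is the harder part that automated methods must also address.

    import numpy as np

    def cronbach_alpha(data):
        """data: (n_persons, n_items) array of item scores, no missing values."""
        k = data.shape[1]
        item_var = data.var(axis=0, ddof=1).sum()
        total_var = data.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_var / total_var)

    def greedy_short_form(data, k):
        """Drop items one at a time, keeping the subset whose alpha stays highest.
        Requires k >= 2 so alpha remains defined for every candidate subset."""
        remaining = list(range(data.shape[1]))
        while len(remaining) > k:
            alphas = [cronbach_alpha(data[:, [j for j in remaining if j != i]])
                      for i in remaining]
            remaining.pop(int(np.argmax(alphas)))  # remove the least costly item
        return remaining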
Peer reviewed
Bradshaw, Laine P.; Madison, Matthew J. – International Journal of Testing, 2016
In item response theory (IRT), the invariance property states that item parameter estimates are independent of the examinee sample, and examinee ability estimates are independent of the test items. While this property has long been established and understood by the measurement community for IRT models, the same cannot be said for diagnostic…
Descriptors: Classification, Models, Simulation, Psychometrics
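As a sketch of what the invariance property means in the dichotomous IRT case (context only; the article examines its analogue for diagnostic models), under the 2PL,

    P(X_ij = 1 | θ_i) = 1 / (1 + exp[ -a_j (θ_i - b_j) ]),

the item parameters a_j and b_j are, up to the arbitrary linear scaling of θ, the same regardless of which examinee sample is used for calibration, and estimates of θ_i do not depend on which calibrated items are administered. The question the article addresses is whether an analogous property holds for diagnostic classification models, whose item parameters (e.g., slip and guess parameters in a DINA-type model) condition on a discrete attribute profile rather than a continuous θ.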
Leventhal, Brian – ProQuest LLC, 2017
More robust and rigorous psychometric models, such as multidimensional Item Response Theory models, have been advocated for survey applications. However, item responses may be influenced by construct-irrelevant variance factors such as preferences for extreme response options. Through empirical and simulation methods, this study evaluates the use…
Descriptors: Psychometrics, Item Response Theory, Simulation, Models
Peer reviewed
Falk, Carl F.; Cai, Li – Grantee Submission, 2014
We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest…
Descriptors: Maximum Likelihood Statistics, Item Response Theory, Computation, Simulation
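Schematically (a sketch consistent with the abstract, not the paper's exact parameterization), the generalized partial credit model writes the category probabilities as

    P(X_j = k | θ) = exp( Σ_{v=1..k} [ a_j θ + c_jv ] ) / Σ_{c=0..K_j} exp( Σ_{v=1..c} [ a_j θ + c_jv ] ),

with the empty sum for c = 0 defined as 0. The semi-parametric extension replaces the linear term a_j θ with a polynomial m_j(θ) constrained so that m_j'(θ) ≥ 0, which keeps the item response function monotone in θ. With a first-order polynomial, m_j(θ) = a_j θ and the regular GPCM is recovered, matching the abstract's statement that the standard model is included at the lowest order.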
Peer reviewed
Stanley, Leanne M.; Edwards, Michael C. – Educational and Psychological Measurement, 2016
The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…
Descriptors: Test Reliability, Goodness of Fit, Scores, Patients
Peer reviewed
Roberts, James S. – Applied Psychological Measurement, 2008
Orlando and Thissen (2000) developed an item fit statistic for binary item response theory (IRT) models known as S-X². This article generalizes their statistic to polytomous unfolding models. Four alternative formulations of S-X² are developed for the generalized graded unfolding model (GGUM). The GGUM is a…
Descriptors: Item Response Theory, Goodness of Fit, Test Items, Models
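For reference, the dichotomous statistic being generalized has the standard form (stated here in the usual notation rather than the article's):

    S-X²_j = Σ_s N_s (O_js - E_js)² / [ E_js (1 - E_js) ],

where s indexes groups defined by the observed summed score, N_s is the number of examinees in that group, and O_js and E_js are the observed and model-implied proportions answering item j correctly within the group. A polytomous analogue replaces the single cell per score group with one cell per response category, Σ_s Σ_k N_s (O_jsk - E_jsk)² / E_jsk; the article develops four such formulations tailored to the GGUM.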
Peer reviewed
Ogasawara, Haruhiko – Psychometrika, 2007
Higher-order approximations to the distributions of fit indexes for structural equation models under fixed alternative hypotheses are obtained in nonnormal samples as well as normal ones. The fit indexes include the normal-theory likelihood ratio chi-square statistic for a posited model, the corresponding statistic for the baseline model of…
Descriptors: Intervals, Structural Equation Models, Goodness of Fit, Simulation
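For orientation (standard definitions given as context, not the article's notation), the normal-theory likelihood ratio statistic for a posited model is T = (N - 1) F_ML(S, Σ(θ̂)), and the fit indexes in question are functions of T and the corresponding baseline-model statistic T_B, e.g.

    RMSEA = sqrt( max{ (T - df) / (df (N - 1)), 0 } ),    CFI = 1 - max{T - df, 0} / max{T_B - df_B, T - df, 0}.

Under a fixed alternative hypothesis these statistics do not follow their usual null distributions, which is why approximations to their distributions are needed for interval estimation of the indexes.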
Peer reviewed
Cui, Ying; Leighton, Jacqueline P. – Journal of Educational Measurement, 2009
In this article, we introduce a person-fit statistic called the hierarchy consistency index (HCI) to help detect misfitting item response vectors for tests developed and analyzed based on a cognitive model. The HCI ranges from -1.0 to 1.0, with values close to -1.0 indicating that students respond unexpectedly or differently from the responses…
Descriptors: Test Length, Simulation, Correlation, Research Methodology
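The index itself (stated here in a hedged, generic form; consult the article for the precise definition) has the familiar shape of one minus twice a misfit proportion,

    HCI_i = 1 - 2 · (number of misfits for examinee i) / (number of comparisons made for examinee i),

where a comparison pairs an item the examinee answered correctly with an item whose required attributes are prerequisite to it under the cognitive model, and a misfit is recorded when that prerequisite item is answered incorrectly. Because the misfit proportion lies in [0, 1], the index ranges from -1.0 to 1.0, consistent with the abstract.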
Peer reviewed
Glas, C. A. W.; Dagohoy, Anna Villa T. – Psychometrika, 2007
A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability parameters. It is shown that the Lagrange multiplier…
Descriptors: Item Response Theory, Goodness of Fit, Psychometrics, Models
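The general form of a Lagrange multiplier test (standard statistical background rather than this article's specific derivation) is

    LM = h(η̂₀)' I(η̂₀)⁻¹ h(η̂₀),

where h(·) is the vector of first-order derivatives of the log-likelihood of a more general model, evaluated at the estimates η̂₀ obtained under the restricted (null) model, and I(·) is the corresponding information matrix; LM is asymptotically chi-square with degrees of freedom equal to the number of restrictions tested. For person fit, the restricted model is the ordinary polytomous IRT model and the extension relaxes it in a direction representing aberrant responding, so only the null model needs to be estimated.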
Peer reviewed
Fishburn, Peter C.; Gehrlein, William V. – Psychometrika, 1974
Descriptors: Goodness of Fit, Psychometrics, Sampling, Simulation
Peer reviewed
Brown, Richard S.; Villarreal, Julio C. – International Journal of Testing, 2007
There has been considerable research regarding the extent to which psychometrically sound assessments sometimes yield individual score estimates that are inconsistent with the response patterns of the individual. It has been suggested that individual response patterns may differ from expectations for a number of reasons, including subject motivation,…
Descriptors: Psychometrics, Test Bias, Testing, Simulation
Peer reviewed
Wise, Steven L.; DeMars, Christine E. – Journal of Educational Measurement, 2006
The validity of inferences based on achievement test scores is dependent on the amount of effort that examinees put forth while taking the test. With low-stakes tests, for which this problem is particularly prevalent, there is a consequent need for psychometric models that can take into account differing levels of examinee effort. This article…
Descriptors: Guessing (Tests), Psychometrics, Inferences, Reaction Time
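A hedged sketch of the kind of effort-moderated item response function this motivates (parameter names here are generic, not necessarily the article's):

    P(X_ij = 1 | θ_i) = SB_ij · P_j(θ_i) + (1 - SB_ij) · (1 / m_j),

where SB_ij ∈ {0, 1} flags solution behavior versus rapid guessing (typically classified by comparing the item response time against an item-specific threshold), P_j(θ) is a standard item response function such as the 3PL, and 1/m_j is the chance rate on an m_j-option item. Responses flagged as rapid guesses thus contribute no information about θ beyond chance.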