Hu, Jinxiang; Miller, M. David; Huggins-Manley, Anne Corinne; Chen, Yi-Hsin – International Journal of Testing, 2016
Cognitive diagnosis models (CDMs) estimate student ability profiles using latent attributes. Model fit to the data needs to be ascertained in order to determine whether inferences from CDMs are valid. This study investigated the usefulness of some popular model fit statistics to detect CDM fit including relative fit indices (AIC, BIC, and CAIC),…
Descriptors: Models, Goodness of Fit, Psychometrics, Ability
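For readers comparing models this way, the relative fit indices named in the abstract (AIC, BIC, CAIC) are simple penalized transformations of a model's maximized log-likelihood, with smaller values favoring a model over its competitors. A minimal Python sketch with hypothetical inputs (the function and values are illustrative, not taken from the study):

    import math

    def relative_fit_indices(log_lik, n_params, n_examinees):
        """Relative fit indices for comparing candidate CDMs fit to the same data.

        Smaller values indicate comparatively better fit; none of these
        indices is an absolute test of model-data fit.
        """
        aic = -2 * log_lik + 2 * n_params
        bic = -2 * log_lik + n_params * math.log(n_examinees)
        caic = -2 * log_lik + n_params * (math.log(n_examinees) + 1)
        return {"AIC": aic, "BIC": bic, "CAIC": caic}

    # Hypothetical comparison: a constrained CDM vs. a less constrained rival.
    print(relative_fit_indices(log_lik=-10523.4, n_params=58, n_examinees=1000))
    print(relative_fit_indices(log_lik=-10490.1, n_params=85, n_examinees=1000))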
Tay, Louis; Vermunt, Jeroen K.; Wang, Chun – International Journal of Testing, 2013
We evaluate the item response theory with covariates (IRT-C) procedure for assessing differential item functioning (DIF) without preknowledge of anchor items (Tay, Newman, & Vermunt, 2011). This procedure begins with a fully constrained baseline model, and candidate items are tested for uniform and/or nonuniform DIF using the Wald statistic.…
Descriptors: Item Response Theory, Test Bias, Models, Statistical Analysis
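The Wald statistic mentioned in the abstract tests whether an estimated group difference in an item parameter departs from zero. A sketch of that test in isolation, assuming the parameter estimate and its standard error are already in hand (names are illustrative; this is not the IRT-C implementation):

    from scipy import stats

    def wald_dif_test(beta_hat, se, df=1):
        """Wald test: W = (beta_hat / se)^2, referred to a chi-square(df)."""
        w = (beta_hat / se) ** 2
        return w, stats.chi2.sf(w, df)

    # Hypothetical candidate item: estimated uniform-DIF effect 0.42 (SE 0.15).
    w, p = wald_dif_test(0.42, 0.15)
    print(f"Wald = {w:.2f}, p = {p:.4f}")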
Kunina-Habenicht, Olga; Rupp, André A.; Wilhelm, Oliver – International Journal of Testing, 2017
Diagnostic classification models (DCMs) hold great potential for applications in summative and formative assessment by providing discrete multivariate proficiency scores that yield statistically driven classifications of students. Using data from a newly developed diagnostic arithmetic assessment that was administered to 2032 fourth-grade students…
Descriptors: Grade 4, Foreign Countries, Classification, Mathematics Tests
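As a concrete illustration of how a DCM turns latent attribute profiles into item-response probabilities, here is a sketch of the DINA model, one widely used member of the family (the study's own models may differ; all values below are hypothetical):

    import numpy as np

    def dina_prob_correct(alpha, q_row, guess, slip):
        """DINA model: an examinee answers correctly with probability 1 - slip
        if they master every attribute the item requires (per the Q-matrix row),
        and with probability guess otherwise."""
        eta = np.all(alpha >= q_row)
        return guess + (1 - slip - guess) * eta

    alpha = np.array([1, 0, 1])  # hypothetical mastery profile over 3 attributes
    q_row = np.array([1, 0, 1])  # item requires attributes 1 and 3
    print(dina_prob_correct(alpha, q_row, guess=0.2, slip=0.1))  # 1 - slip = 0.9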
Kajonius, Petri J. – International Journal of Testing, 2017
Research is currently testing how the new maladaptive personality inventory for DSM-5 (PID-5) and the well-established common Five-Factor Model (FFM) together can serve as an empirical and theoretical foundation for clinical psychology. The present study investigated the official short version of the PID-5 together with a common short version of…
Descriptors: Foreign Countries, Personality Measures, Personality Traits, Clinical Diagnosis
Jurich, Daniel P.; Bradshaw, Laine P. – International Journal of Testing, 2014
The assessment of higher-education student learning outcomes is an important component in understanding the strengths and weaknesses of academic and general education programs. This study illustrates the application of diagnostic classification models, a burgeoning set of statistical models, in assessing student learning outcomes. To facilitate…
Descriptors: College Outcomes Assessment, Classification, Statistical Analysis, Models
Lee, HyeSun; Geisinger, Kurt F. – International Journal of Testing, 2014
Differential item functioning (DIF) analysis is important in terms of test fairness. While DIF analyses have mainly been conducted with manifest grouping variables, such as gender or race/ethnicity, it has recently been claimed that not only the grouping variables but also contextual variables pertaining to examinees should be considered in DIF…
Descriptors: Test Bias, Gender Differences, Regression (Statistics), Statistical Analysis
Ong, Yoke Mooi; Williams, Julian; Lamprianou, Iasonas – International Journal of Testing, 2015
The purpose of this article is to explore crossing differential item functioning (DIF) in a test drawn from a national examination of mathematics for 11-year-old pupils in England. An empirical dataset was analyzed to explore DIF by gender in a mathematics assessment. A two-step process involving the logistic regression (LR) procedure for…
Descriptors: Mathematics Tests, Gender Differences, Test Bias, Test Items
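To make the two-step logistic regression (LR) procedure concrete: the item response is regressed on the matching score, then on group membership (uniform DIF), and then on the score-by-group interaction, which is what produces crossing DIF. A sketch on simulated data (not the England dataset; variable names are illustrative):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 2000
    df = pd.DataFrame({
        "score": rng.normal(size=n),           # matching variable (e.g., total score)
        "group": rng.integers(0, 2, size=n),   # 0 = reference, 1 = focal (e.g., gender)
    })
    # Simulate crossing DIF: the item favors different groups at different abilities.
    logit = 0.8 * df["score"] - 0.6 * df["score"] * df["group"]
    df["correct"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    # Step-2 model; a sizeable score:group term flags crossing (nonuniform) DIF.
    fit = smf.logit("correct ~ score + group + score:group", data=df).fit(disp=0)
    print(fit.params)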
Gierl, Mark J.; Lai, Hollis – International Journal of Testing, 2012
Automatic item generation represents a relatively new but rapidly evolving research area where cognitive and psychometric theories are used to produce tests that include items generated using computer technology. Automatic item generation requires two steps. First, test development specialists create item models, which are comparable to templates…
Descriptors: Foreign Countries, Psychometrics, Test Construction, Test Items
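A toy version of the second step, generating items from an item model: the template's variable slots are filled with permitted values, and the answer key is computed alongside each instance. Real generators also apply constraints from the cognitive model; this sketch (all names hypothetical) omits that:

    import itertools

    # A toy item model: a stem with slots, plus each slot's permitted values.
    item_model = {
        "stem": "A train travels {speed} km/h for {hours} hours. How far does it go?",
        "slots": {"speed": [60, 80, 100], "hours": [2, 3]},
    }

    def generate_items(model):
        names = list(model["slots"])
        for values in itertools.product(*(model["slots"][n] for n in names)):
            bindings = dict(zip(names, values))
            key = bindings["speed"] * bindings["hours"]  # distance = speed * time
            yield model["stem"].format(**bindings), key

    for stem, key in generate_items(item_model):
        print(stem, "->", key, "km")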
Rupp, André A. – International Journal of Testing, 2007
One of the most revolutionary advances in psychometric research during recent decades has been the systematic development of statistical models that allow for cognitive psychometric research (CPR) to be conducted. Many of the models currently available for such purposes are extensions of basic latent variable models in item response theory…
Descriptors: Psychometrics, Research, Models, Item Response Theory
Bodkin-Andrews, Gawaian H.; Ha, My Trinh; Craven, Rhonda G.; Yeung, Alexander Seeshing – International Journal of Testing, 2010
This investigation reports on the cross-cultural equivalence testing of the Self-Description Questionnaire II (short version; SDQII-S) for Indigenous and non-Indigenous Australian secondary student samples. A variety of statistical analysis techniques were employed to assess the psychometric properties of the SDQII-S for both the Indigenous and…
Descriptors: Indigenous Populations, Disadvantaged, Testing, Measures (Individuals)
Kubinger, Klaus D. – International Journal of Testing, 2005
In this article, we emphasize that the Rasch model is not only very useful for psychological test calibration but is also necessary if the number of solved items is to be used as an examinee's score. A simplified proof that the Rasch model implies specifically objective parameter comparisons is given. Consequently, a model check per se is possible. For…
Descriptors: Psychometrics, Psychological Testing, Item Banks, Item Response Theory
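The claim about number-correct scoring rests on the Rasch item response function, under which the raw score is a sufficient statistic for ability. A minimal sketch (the difficulty and ability values are hypothetical):

    import math

    def rasch_prob(theta, b):
        """Rasch model: P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b))."""
        return 1 / (1 + math.exp(-(theta - b)))

    # Hypothetical examinee (theta = 0.5) on three items of increasing difficulty.
    for b in (-1.0, 0.0, 1.0):
        print(f"difficulty {b:+.1f}: P(correct) = {rasch_prob(0.5, b):.3f}")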