Showing all 11 results
Peer reviewed
Feuerstahler, Leah; Wilson, Mark – Journal of Educational Measurement, 2019
Scores estimated from multidimensional item response theory (IRT) models are not necessarily comparable across dimensions. In this article, the concept of aligned dimensions is formalized in the context of Rasch models, and two methods are described--delta dimensional alignment (DDA) and logistic regression alignment (LRA)--to transform estimated…
Descriptors: Item Response Theory, Models, Scores, Comparative Analysis
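The comparability issue in the Feuerstahler and Wilson entry can be pictured with a small sketch: person scores estimated on two separately calibrated Rasch dimensions sit on their own logit scales, and one crude way to put them on a common metric is to rescale one dimension to the mean and spread of a reference dimension. This is only an illustration of what alignment is for, not the authors' delta dimensional alignment (DDA) or logistic regression alignment (LRA) procedures; the dimension names and score values are invented.

```python
import numpy as np

# Hypothetical person scores (in logits) on two separately calibrated
# Rasch dimensions; their scales are not directly comparable.
theta_reading = np.array([-1.2, -0.3, 0.4, 1.1, 2.0])
theta_writing = np.array([-0.5, 0.1, 0.8, 1.6, 2.9])

def align_to_reference(scores, reference):
    """Rescale `scores` to the mean and SD of `reference` (a crude alignment)."""
    z = (scores - scores.mean()) / scores.std()
    return z * reference.std() + reference.mean()

theta_writing_aligned = align_to_reference(theta_writing, theta_reading)
print(np.round(theta_writing_aligned, 2))
```

The DDA and LRA methods in the article are more principled than this mean/SD matching; the snippet only shows what "comparable across dimensions" is meant to achieve.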
Peer reviewed
PDF on ERIC (full text available)
Finch, Holmes – Practical Assessment, Research & Evaluation, 2022
Researchers in many disciplines work with ranking data. This data type is unique in that it is often deterministic in nature (the ranks of the first k-1 items determine the rank of item k), and the difference in a pair of rank scores separated by k units is equivalent regardless of the actual values of the two ranks in…
Descriptors: Data Analysis, Statistical Inference, Models, College Faculty
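The deterministic property noted in the Finch abstract (once k-1 of k items have been ranked, the rank of the last item is fixed) can be seen with a tiny example; the item labels are invented.

```python
# Ranks assigned to the first k-1 of k = 4 items force the last rank.
partial_ranks = {"A": 2, "B": 4, "C": 1}
remaining = (set(range(1, 5)) - set(partial_ranks.values())).pop()
partial_ranks["D"] = remaining
print(partial_ranks)  # {'A': 2, 'B': 4, 'C': 1, 'D': 3}
```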
Peer reviewed
Verhelst, Norman D. – Scandinavian Journal of Educational Research, 2012
When IRT models are used in educational achievement testing, the model is as a rule too simple to capture all the relevant dimensions in the test. It is argued that a simple model may nevertheless be useful but that it can be complemented with additional analyses. Such an analysis, called profile analysis, is proposed and applied to the reading data of…
Descriptors: Multidimensional Scaling, Profiles, Item Response Theory, Achievement Tests
Peer reviewed
Sun, Xiang; Allison, Carrie; Auyeung, Bonnie; Matthews, Fiona E.; Norton, Samuel; Baron-Cohen, Simon; Brayne, Carol – Journal of Autism and Developmental Disorders, 2014
Few studies have investigated latent autistic traits related to autism spectrum conditions (ASC) in the mainland Chinese population. This study explored the psychometric properties of a Mandarin Chinese version of the CAST in a sample of 737 children in mainstream schools and 50 autistic cases. A combination of categorical data factor…
Descriptors: Psychometrics, Mandarin Chinese, Autism, Pervasive Developmental Disorders
Peer reviewed
Li, Ying; Jiao, Hong; Lissitz, Robert W. – Journal of Applied Testing Technology, 2012
This study investigated the application of multidimensional item response theory (IRT) models to validate test structure and dimensionality. Multiple content areas or domains within a single subject often exist in large-scale achievement tests. Such areas or domains may cause multidimensionality or local item dependence, which both violate the…
Descriptors: Achievement Tests, Science Tests, Item Response Theory, Measures (Individuals)
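For reference, a common parameterization of a compensatory multidimensional IRT model for dichotomous items in this kind of dimensionality study is the multidimensional two-parameter logistic model; the abstract does not say which specific model the authors fit.

P(X_{ij} = 1 \mid \boldsymbol{\theta}_i) = \frac{\exp(\mathbf{a}_j^{\top}\boldsymbol{\theta}_i + d_j)}{1 + \exp(\mathbf{a}_j^{\top}\boldsymbol{\theta}_i + d_j)}

Here \boldsymbol{\theta}_i collects one ability per content domain, \mathbf{a}_j is item j's vector of discriminations, and d_j is an intercept. Local item dependence within a domain can be represented as an extra nuisance dimension shared only by the affected items, which is one reason multidimensional models are used to check test structure.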
Peer reviewed
Perry, John L.; Nicholls, Adam R.; Clough, Peter J.; Crust, Lee – Measurement in Physical Education and Exercise Science, 2015
Despite the limitations of overgeneralizing cutoff values for confirmatory factor analysis (CFA; e.g., Marsh, Hau, & Wen, 2004), they are still often employed as golden rules for assessing factorial validity in sport and exercise psychology. The purpose of this study was to investigate the appropriateness of using the CFA approach with these…
Descriptors: Factor Analysis, Structural Equation Models, Goodness of Fit, Sport Psychology
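For context on the "cutoff values" at issue in the Perry et al. entry: the conventions most often treated as golden rules, and critiqued by Marsh, Hau, and Wen (2004), are Hu and Bentler's (1999) suggestions of roughly CFI >= .95, SRMR <= .08, and RMSEA <= .06, where, for a fitted model with chi-square \chi^2, degrees of freedom df, and sample size N,

\mathrm{RMSEA} = \sqrt{\frac{\max(\chi^2 - df,\, 0)}{df\,(N - 1)}}.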
Peer reviewed
Tinsley, Howard E. A.; Dawis, Rene V. – Educational and Psychological Measurement, 1975
Examines the application of the Rasch simple logistic model to analogy test items. (RC)
Descriptors: Correlation, Item Analysis, Measurement Techniques, Models
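The Rasch simple logistic model examined by Tinsley and Dawis gives the probability that person i answers analogy item j correctly in terms of a single ability parameter \theta_i and a single item difficulty b_j:

P(X_{ij} = 1 \mid \theta_i) = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)}

This is the standard form of the model rather than a detail taken from the 1975 article itself.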
Reynolds, Thomas J. – 1976
A method of factor extraction specific to a binary matrix, illustrated here as a person-by-item response matrix, is presented. The extraction procedure, termed ERGO, differs from the more commonly implemented dimensionalizing techniques, factor analysis and multidimensional scaling, by taking into consideration item difficulty. Utilized in the…
Descriptors: Discriminant Analysis, Factor Analysis, Item Analysis, Matrices
Peer reviewed
Yao, Lihua; Schwarz, Richard D. – Applied Psychological Measurement, 2006
Multidimensional item response theory (IRT) models have been proposed for better understanding the dimensional structure of data or to define diagnostic profiles of student learning. A compensatory multidimensional two-parameter partial credit model (M-2PPC) for constructed-response items is presented that is a generalization of those proposed to…
Descriptors: Models, Item Response Theory, Markov Processes, Monte Carlo Methods
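As background for the M-2PPC in the Yao and Schwarz entry, the unidimensional generalized partial credit model that such compensatory multidimensional models extend gives the probability of score category k on item j (categories 0, ..., K_j) as

P(X_{ij} = k \mid \theta_i) = \frac{\exp\!\left(\sum_{v=1}^{k} a_j(\theta_i - b_{jv})\right)}{\sum_{m=0}^{K_j} \exp\!\left(\sum_{v=1}^{m} a_j(\theta_i - b_{jv})\right)},

with the empty sum for k = 0 taken as zero. In a compensatory multidimensional version the term a_j\theta_i becomes a vector product \mathbf{a}_j^{\top}\boldsymbol{\theta}_i, so strength on one dimension can offset weakness on another; the exact parameterization in the article may differ from this sketch.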
Secolsky, Charles, Ed.; Denison, D. Brian, Ed. – Routledge, Taylor & Francis Group, 2011
Increased demands for colleges and universities to engage in outcomes assessment for accountability purposes have accelerated the need to bridge the gap between higher education practice and the fields of measurement, assessment, and evaluation. The "Handbook on Measurement, Assessment, and Evaluation in Higher Education" provides higher…
Descriptors: Generalizability Theory, Higher Education, Institutional Advancement, Teacher Effectiveness
Abdel-fattah, Abdel-fattah A. – 1992
A scaling procedure based on item response theory (IRT) is proposed that can also fit non-hierarchical test structures. The binary scores of a test of English were used for calculating the probabilities of answering each item correctly. The probability matrix was factor analyzed, and the difficulty intervals or estimates corresponding to the factors…
Descriptors: Bayesian Statistics, Difficulty Level, English, Estimation (Mathematics)
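The general idea in the Abdel-fattah entry (derive item-level statistics from binary scores, then factor analyze the resulting matrix) can be sketched roughly as follows. The data are simulated, and this is not the author's exact procedure, which works with a probability matrix and difficulty intervals.

```python
import numpy as np

# Simulated 0/1 response matrix: 200 examinees x 6 items.
rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))
difficulty = np.linspace(-1.5, 1.5, 6)
prob_correct = 1 / (1 + np.exp(-(ability - difficulty)))
responses = (rng.random((200, 6)) < prob_correct).astype(int)

# Proportion answering each item correctly (a stand-in for the abstract's
# "probabilities of answering each item correctly").
p_correct = responses.mean(axis=0)

# Eigenvalues of the inter-item correlation matrix; large leading
# eigenvalues suggest one or more dominant factors.
corr = np.corrcoef(responses, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
print(np.round(p_correct, 2))
print(np.round(eigenvalues, 2))
```

Factor analyses of correlations between binary items are known to be sensitive to differences in item difficulty, which is the same concern the Reynolds entry's ERGO procedure is described as addressing.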