Showing all 9 results
Peer reviewed
Kazuhiro Yamaguchi – Journal of Educational and Behavioral Statistics, 2025
This study proposes a Bayesian method for diagnostic classification models (DCMs) in a partially known Q-matrix setting that lies between exploratory and confirmatory DCMs. This setting is practical and useful because test experts have prior knowledge of the Q-matrix but cannot readily specify it completely. The proposed method employs priors for…
Descriptors: Models, Classification, Bayesian Statistics, Evaluation Methods
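Below is a minimal sketch, in Python with NumPy, of how partial expert knowledge of a Q-matrix might be encoded as entry-wise Bernoulli priors; the prior probabilities, matrix sizes, and the q_expert convention are illustrative assumptions, not the prior specification used in the article.

```python
# Illustrative sketch (not the paper's exact prior): expert-specified Q-matrix
# entries get a concentrated Bernoulli prior, unspecified entries a diffuse one.
import numpy as np

rng = np.random.default_rng(0)

J, K = 6, 3                      # items x attributes (assumed sizes)
q_expert = np.full((J, K), -1)   # -1 = unknown, 0/1 = expert-specified
q_expert[:3, 0] = 1              # experts are sure the first 3 items need attribute 1
q_expert[3:, 2] = 0              # ...and that the last 3 do not need attribute 3

prior_p = np.where(q_expert == 1, 0.95,        # strong prior toward 1
           np.where(q_expert == 0, 0.05,       # strong prior toward 0
                    0.50))                     # diffuse where unknown

# One prior draw of a complete Q-matrix; an MCMC sampler would resample the
# unknown entries while estimating the remaining DCM parameters.
Q_draw = rng.binomial(1, prior_p)
print(Q_draw)
```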
Peer reviewed
Schweizer, Karl; Troche, Stefan – Educational and Psychological Measurement, 2018
In confirmatory factor analysis, quite similar measurement models serve to detect the difficulty factor and the factor due to the item-position effect. The item-position effect refers to the increasing dependency among responses to successively presented test items, whereas the difficulty factor is ascribed to the wide range of…
Descriptors: Investigations, Difficulty Level, Factor Analysis, Models
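The item-position idea can be illustrated with a small NumPy sketch: a second factor whose loadings grow with serial position makes responses to later items increasingly interdependent. The loading values, unit variances, and orthogonal-factor setup are illustrative assumptions, not the authors' model specification.

```python
# Model-implied correlations under an assumed ability + item-position structure.
import numpy as np

n_items = 10
positions = np.arange(1, n_items + 1)

l_ability  = np.full(n_items, 0.7)            # common ability factor (assumed loading)
l_position = 0.4 * positions / n_items        # item-position effect: increasing loadings

Lambda = np.column_stack([l_ability, l_position])
Theta = np.diag(1.0 - (Lambda ** 2).sum(axis=1))   # residuals so item variances equal 1
Sigma = Lambda @ Lambda.T + Theta                  # orthogonal factors assumed

# Correlations among the first items vs. the last items: later pairs correlate higher.
print(np.round(Sigma[:3, :3], 2))
print(np.round(Sigma[-3:, -3:], 2))
```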
Peer reviewed
Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M. – Journal of Psychoeducational Assessment, 2018
We investigated the classification accuracy of learning disability (LD) identification methods premised on identifying an intraindividual pattern of processing strengths and weaknesses (PSW), using multiple indicators for all latent constructs. Known LD status was derived from latent scores; values at the observed level identified…
Descriptors: Accuracy, Learning Disabilities, Classification, Identification
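A simplified NumPy sketch of the latent-versus-observed classification comparison described above: the reliability, cutoff, and single-indicator setup are illustrative assumptions, not the study's multi-indicator PSW decision rules.

```python
# Simulate latent scores, add measurement error to obtain observed scores,
# classify by a cutoff at both levels, and summarize agreement.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
reliability = 0.81                              # assumed reliability of the observed score
cutoff = -1.0                                   # roughly the 16th percentile

latent = rng.standard_normal(n)
observed = np.sqrt(reliability) * latent + np.sqrt(1 - reliability) * rng.standard_normal(n)

true_ld = latent < cutoff                       # "known" status from the latent score
flag_ld = observed < cutoff                     # status identified at the observed level

sensitivity = (flag_ld & true_ld).sum() / true_ld.sum()
specificity = (~flag_ld & ~true_ld).sum() / (~true_ld).sum()
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```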
Peer reviewed
Zeller, Florian; Krampen, Dorothea; Reiß, Siegbert; Schweizer, Karl – Educational and Psychological Measurement, 2017
The item-position effect describes how an item's position within a test, that is, the number of previously completed items, affects the response to that item. Previously, this effect was represented by constraints reflecting simple courses, for example, a linear increase. Because these representations are inflexible, our aim was to examine…
Descriptors: Goodness of Fit, Simulation, Factor Analysis, Intelligence Tests
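As an illustration of what fixed "courses" of item-position loadings look like, the short sketch below contrasts a linear increase with two alternative fixed trajectories; the specific functional forms are assumptions, not the representations examined in the article.

```python
# Candidate fixed trajectories that could be imposed as loading constraints.
import numpy as np

n_items = 12
pos = np.arange(1, n_items + 1)

courses = {
    "linear":      pos / n_items,
    "quadratic":   (pos / n_items) ** 2,
    "logarithmic": np.log(pos) / np.log(n_items),
}
for name, lam in courses.items():
    print(f"{name:11s}", np.round(lam, 2))
```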
Peer reviewed
Schatschneider, Christopher; Wagner, Richard K.; Hart, Sara A.; Tighe, Elizabeth L. – Scientific Studies of Reading, 2016
The present study employed data simulation techniques to investigate the 1-year stability of alternative classification schemes for identifying children with reading disabilities. The classification schemes investigated include low performance, unexpected low performance, dual-discrepancy, and a rudimentary form of a constellation model of reading…
Descriptors: Reading Difficulties, Learning Disabilities, At Risk Students, Disability Identification
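The stability question can be mimicked with a small simulation: the sketch below assumes a 1-year score correlation of .70 and a bottom-quartile "low performance" cutoff, both illustrative values rather than the study's parameters.

```python
# Simulate reading scores at two time points, apply a low-performance cutoff
# at each, and see how often the classification agrees across the year.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 100_000
r12 = 0.7                                  # assumed 1-year stability of the score
cutoff = norm.ppf(0.25)                    # "low performance" = bottom quartile

cov = np.array([[1.0, r12], [r12, 1.0]])
t1, t2 = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

ld_t1 = t1 < cutoff
ld_t2 = t2 < cutoff

agreement = (ld_t1 == ld_t2).mean()
persistence = (ld_t1 & ld_t2).sum() / ld_t1.sum()   # flagged at time 1 and still flagged at time 2
print(f"overall agreement={agreement:.2f}, persistence={persistence:.2f}")
```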
Peer reviewed
Geerlings, Hanneke; Glas, Cees A. W.; van der Linden, Wim J. – Psychometrika, 2011
An application of a hierarchical IRT model is discussed for item families generated through different combinations of design rules. Within each family, the items are assumed to differ only in surface features. The parameters of the model are estimated in a Bayesian framework using a data-augmented Gibbs sampler. An obvious…
Descriptors: Simulation, Intelligence Tests, Item Response Theory, Models
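A small sketch of the item-family idea, using a Rasch-type simplification rather than the authors' full hierarchical model or their data-augmented Gibbs sampler; the family counts, variances, and sample sizes are illustrative assumptions.

```python
# Each family has a mean difficulty; its items scatter around that mean, and
# responses are generated under a Rasch model.
import numpy as np

rng = np.random.default_rng(3)
n_persons, n_families, items_per_family = 500, 4, 5

theta = rng.standard_normal(n_persons)                   # person abilities
family_mu = rng.normal(0.0, 1.0, n_families)             # family-level difficulties
b = rng.normal(family_mu.repeat(items_per_family), 0.3)  # item difficulties within families

logits = theta[:, None] - b[None, :]                     # Rasch log-odds of a correct response
p = 1.0 / (1.0 + np.exp(-logits))
responses = rng.binomial(1, p)
print(responses.shape, responses.mean().round(2))
```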
Sampson, Demetrios G., Ed.; Spector, J. Michael, Ed.; Ifenthaler, Dirk, Ed.; Isaias, Pedro, Ed. – International Association for Development of the Information Society, 2014
These proceedings contain the papers of the 11th International Conference on Cognition and Exploratory Learning in the Digital Age (CELDA 2014), held October 25-27, 2014, which was organized by the International Association for Development of the Information Society (IADIS) and endorsed by the Japanese Society for Information and Systems in…
Descriptors: Conference Papers, Teaching Methods, Technological Literacy, Technology Uses in Education
Peer reviewed
Ashton, Michael C.; Lee, Kibeom – Intelligence, 2006
Gignac [Gignac, G. E. (2006). Evaluating subtest "g" saturation levels via the single trait-correlated uniqueness (STCU) SEM approach: Evidence in favor of crystallized subtests as the best indicators of "g." Intelligence, 34, 29-46] used a single-trait correlated uniqueness (STCU) CFA approach to…
Descriptors: Cognitive Ability, Correlation, Intelligence Tests, Simulation
Peer reviewed
Jensen, Arthur R.; Weng, Li-Jen – Intelligence, 1994
The stability of psychometric "g," the general factor of intelligence, is investigated in simulated correlation matrices and in typical empirical data from a large battery of mental tests. Psychometric "g" is robust and almost invariant across methods of analysis. A reasonable strategy for estimating "g" is suggested. (SLD)
Descriptors: Correlation, Estimation (Mathematics), Factor Analysis, Intelligence
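The robustness claim can be illustrated by extracting "g" loadings from the same correlation matrix in two simplified ways and correlating the results; the loading values and the two extraction methods below are illustrative choices, not the article's analyses.

```python
# Build a correlation matrix from a one-factor model, then extract "g" loadings
# via the first principal component and via one-step principal-axis factoring.
import numpy as np

true_loadings = np.array([0.8, 0.7, 0.6, 0.5, 0.6, 0.7, 0.75, 0.65])
R = np.outer(true_loadings, true_loadings)
np.fill_diagonal(R, 1.0)

def first_component(M):
    vals, vecs = np.linalg.eigh(M)               # eigenvalues in ascending order
    v = vecs[:, -1] * np.sqrt(vals[-1])          # loadings on the largest component
    return v * np.sign(v.sum())                  # fix the sign

pc_loadings = first_component(R)

# One-step principal-axis factoring: squared multiple correlations on the
# diagonal as initial communality estimates, then take the first factor.
smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
R_reduced = R.copy()
np.fill_diagonal(R_reduced, smc)
pa_loadings = first_component(R_reduced)

print(np.round(np.corrcoef(pc_loadings, pa_loadings)[0, 1], 4))
```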