Showing all 10 results
Peer reviewed
Daoxuan Fu; Chunying Qin; Zhaosheng Luo; Yujun Li; Xiaofeng Yu; Ziyu Ye – Journal of Educational and Behavioral Statistics, 2025
One of the central components of cognitive diagnostic assessment is the Q-matrix, an essential loading indicator matrix that is typically constructed by subject matter experts. Nonetheless, the construction of the Q-matrix remains largely a subjective process and may lead to misspecifications. Many researchers have recognized the…
Descriptors: Q Methodology, Matrices, Diagnostic Tests, Cognitive Measurement
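To make the Q-matrix concrete: it is a binary item-by-attribute matrix in which entry (i, k) is 1 when item i requires skill k. The sketch below is illustrative only (the items, skills, and the DINA-style scoring rule are assumptions, not taken from the abstract above):

```python
import numpy as np

# Hypothetical Q-matrix for a 4-item test measuring 3 skills:
# rows = items, columns = skills; Q[i, k] = 1 means item i
# requires skill k. All values here are made up for illustration.
Q = np.array([
    [1, 0, 0],   # item 1 requires only skill 1
    [0, 1, 0],   # item 2 requires only skill 2
    [1, 1, 0],   # item 3 requires skills 1 and 2
    [0, 1, 1],   # item 4 requires skills 2 and 3
])

# Under a DINA-type model (ignoring slip/guess parameters), a
# student is expected to answer item i correctly only if they
# possess every skill that item i requires.
alpha = np.array([1, 1, 0])   # student has skills 1 and 2, not 3
eta = (Q @ alpha == Q.sum(axis=1)).astype(int)
print(eta)  # expected-correct indicator per item: [1 1 1 0]
```

A misspecified Q-matrix (e.g., a spurious 1 in an entry) changes `eta` for some skill profiles, which is why misspecification matters for diagnosis.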
Sales, Adam C.; Hansen, Ben B.; Rowan, Brian – Journal of Educational and Behavioral Statistics, 2018
In causal matching designs, some control subjects are often left unmatched, and some covariates are often left unmodeled. This article introduces "rebar," a method using high-dimensional modeling to incorporate these commonly discarded data without sacrificing the integrity of the matching design. After constructing a match, a researcher…
Descriptors: Computation, Prediction, Models, Data
Peer reviewed
von Davier, Matthias; Khorramdel, Lale; He, Qiwei; Shin, Hyo Jeong; Chen, Haiwen – Journal of Educational and Behavioral Statistics, 2019
International large-scale assessments (ILSAs) have transitioned from paper-based assessments to computer-based assessments (CBAs), facilitating the use of new item types and more effective data collection tools. This allows the implementation of more complex test designs and the collection of process and response time (RT) data. These new data types can be used to…
Descriptors: International Assessment, Computer Assisted Testing, Psychometrics, Item Response Theory
Peer reviewed
Stapleton, Laura M.; Yang, Ji Seung; Hancock, Gregory R. – Journal of Educational and Behavioral Statistics, 2016
We present types of constructs, individual- and cluster-level, and their confirmatory factor analytic validation models when data are from individuals nested within clusters. When a construct is theoretically individual level, spurious construct-irrelevant dependency in the data may appear to signal cluster-level dependency; in such cases,…
Descriptors: Multivariate Analysis, Factor Analysis, Validity, Models
Peer reviewed
Johnson, Timothy R.; Bolt, Daniel M. – Journal of Educational and Behavioral Statistics, 2010
Multidimensional item response models are usually implemented to model the relationship between item responses and two or more traits of interest. We show how multidimensional multinomial logit item response models can also be used to account for individual differences in response style. This is done by specifying a factor-analytic model for…
Descriptors: Models, Response Style (Tests), Factor Structure, Individual Differences
Peer reviewed
Allen, Jeff; Le, Huy – Journal of Educational and Behavioral Statistics, 2008
Users of logistic regression models often need to describe the overall predictive strength, or effect size, of the model's predictors. Analogs of R² have been developed, but none of these measures are interpretable on the same scale as effects of individual predictors. Furthermore, R² analogs are not invariant to the…
Descriptors: Regression (Statistics), Effect Size, Measurement, Models
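One widely used R² analog for logistic regression is McFadden's pseudo-R², defined as 1 minus the ratio of the fitted model's log-likelihood to that of an intercept-only model. The sketch below shows the computation on made-up data; the outcome vector and predicted probabilities are hypothetical, not from the article:

```python
import numpy as np

def mcfadden_r2(y, p_model, p_null):
    """McFadden's pseudo-R^2: 1 - LL(model) / LL(null).
    y: binary outcomes; p_model, p_null: predicted P(y = 1)."""
    def ll(p):
        return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return 1.0 - ll(p_model) / ll(p_null)

# Toy data (illustrative values only)
y = np.array([1, 1, 0, 0, 1, 0])
p_null = np.full(6, y.mean())                      # intercept-only fit
p_model = np.array([0.9, 0.8, 0.2, 0.1, 0.7, 0.3]) # fitted probabilities
print(round(mcfadden_r2(y, p_model, p_null), 3))   # 0.670
```

Note that, as the abstract observes for R² analogs generally, this quantity is not interpretable on the same scale as individual coefficient effects.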
Peer reviewed
Bradlow, Eric T.; Thomas, Neal – Journal of Educational and Behavioral Statistics, 1998
A set of conditions is presented for the validity of inference for Item Response Theory (IRT) models applied to data collected from examinations that allow students to choose a subset of items. Common low-dimensional IRT models estimated by standard methods do not resolve the difficult problems posed by choice-based data. (SLD)
Descriptors: Inferences, Item Response Theory, Models, Selection
Peer reviewed
Wiberg, Marie – Journal of Educational and Behavioral Statistics, 2003
A criterion-referenced computerized test is formulated as a statistical hypothesis-testing problem, which allows it to be studied using the theory of optimal design. The power function of the statistical test is used as a criterion function when designing the test. A formal proof is provided showing that all items should have the same item…
Descriptors: Test Items, Computer Assisted Testing, Statistics, Validity
Peer reviewed
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 2006
A lognormal model for the response times of a person on a set of test items is investigated. The model has a parameter structure analogous to the two-parameter logistic response models in item response theory, with a parameter for the speed of each person as well as parameters for the time intensity and discriminating power of each item. It is…
Descriptors: Test Items, Vocational Aptitude, Reaction Time, Markov Processes
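In the lognormal RT model described above, the log response time of person j on item i is normally distributed with mean beta_i − tau_j (item time intensity minus person speed) and standard deviation 1/alpha_i (inverse of the item's discriminating power). A minimal simulation sketch, with all parameter values invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters in the spirit of the lognormal RT model:
# tau = person speed, beta_i = item time intensity, alpha_i = item
# discriminating power. Values are made up for this example.
tau = 0.3                          # faster-than-average test taker
beta = np.array([4.0, 4.5, 3.8])   # time intensity of 3 items
alpha = np.array([2.0, 1.5, 2.5])  # discriminating power of 3 items

# ln T_ij ~ Normal(beta_i - tau, 1 / alpha_i^2): a larger alpha_i
# means less spread in that item's log response times.
log_t = rng.normal(loc=beta - tau, scale=1.0 / alpha)
times = np.exp(log_t)              # simulated response times (seconds)
print(np.round(times, 1))
```

The analogy to the two-parameter logistic IRT model is in the parameter structure: beta plays the role of difficulty (here, time intensity) and alpha the role of discrimination, but for time rather than accuracy.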
Peer reviewed
Martineau, Joseph A. – Journal of Educational and Behavioral Statistics, 2006
Longitudinal, student performance-based, value-added accountability models have become popular and continue to gain traction. Such models require student data to be vertically scaled across wide grade and developmental ranges so that the value added to student growth/achievement by teachers, schools, and districts may be…
Descriptors: Longitudinal Studies, Academic Achievement, Accountability, Models