Source: Journal of Educational and Behavioral Statistics
Showing 1 to 15 of 89 results
Kazuhiro Yamaguchi – Journal of Educational and Behavioral Statistics, 2025
This study proposes a Bayesian method for diagnostic classification models (DCMs) in a partially known Q-matrix setting, intermediate between exploratory and confirmatory DCMs. This setting is practical and useful because test experts have prior knowledge of the Q-matrix but cannot readily specify it completely. The proposed method employs priors for…
Descriptors: Models, Classification, Bayesian Statistics, Evaluation Methods
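As a rough sketch of such a setup (the paper's exact priors are truncated above, so the Bernoulli form here is an assumption, not necessarily its specification):

```latex
q_{jk} =
\begin{cases}
\text{expert-specified value}, & (j,k) \in \mathcal{K} \ \text{(known entries)},\\[2pt]
\text{a draw from } \mathrm{Bernoulli}(\pi_{jk}), & (j,k) \notin \mathcal{K} \ \text{(unknown entries)}.
\end{cases}
```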
Joo, Seang-Hwane; Wang, Yan; Ferron, John; Beretvas, S. Natasha; Moeyaert, Mariola; Van Den Noortgate, Wim – Journal of Educational and Behavioral Statistics, 2022
Multiple baseline (MB) designs are becoming more prevalent in educational and behavioral research, and as they do, there is growing interest in combining effect size estimates across studies. To further refine the meta-analytic methods of estimating the effect, this study developed and compared eight alternative methods of estimating intervention…
Descriptors: Meta Analysis, Effect Size, Computation, Statistical Analysis
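For context, the generic inverse-variance random-effects pooling that such methods refine (a standard baseline, not one of the paper's eight methods specifically):

```latex
\hat{\delta} = \frac{\sum_i w_i \,\hat{\delta}_i}{\sum_i w_i},
\qquad
w_i = \frac{1}{v_i + \hat{\tau}^2},
```

where \(\hat{\delta}_i\) is the effect estimate from study \(i\), \(v_i\) its sampling variance, and \(\hat{\tau}^2\) the estimated between-study variance.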
Joakim Wallmark; James O. Ramsay; Juan Li; Marie Wiberg – Journal of Educational and Behavioral Statistics, 2024
Item response theory (IRT) models the relationship between the possible scores on a test item and a test taker's attainment of the latent trait that the item is intended to measure. In this study, we compare two models for tests with polytomously scored items: the optimal scoring (OS) model, a nonparametric IRT model based on the principles of…
Descriptors: Item Response Theory, Test Items, Models, Scoring
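For reference, a common parametric IRT model for polytomously scored items (the paper's specific comparator is truncated above) is the graded response model, with cumulative category probabilities

```latex
P\bigl(X_{ij} \ge k \mid \theta_i\bigr)
  = \frac{1}{1 + \exp\{-a_j(\theta_i - b_{jk})\}},
\qquad k = 1, \dots, m_j,
```

where \(\theta_i\) is the latent trait, \(a_j\) the item discrimination, and \(b_{jk}\) the category thresholds.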
Lei Guo; Wenjie Zhou; Xiao Li – Journal of Educational and Behavioral Statistics, 2024
The testlet design is very popular in educational and psychological assessments. This article proposes a new cognitive diagnosis model, the multiple-choice cognitive diagnostic testlet (MC-CDT) model, for tests whose testlets consist of MC items. The MC-CDT model uses examinees' original responses to MC items instead of dichotomously scored…
Descriptors: Multiple Choice Tests, Diagnostic Tests, Accuracy, Computer Software
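The data distinction the abstract points to is easy to see in code; a minimal illustration with hypothetical items and key (not the authors' software):

```python
# Hypothetical multiple-choice data; illustrates the scoring distinction
# the MC-CDT model exploits (not the authors' code).
responses = ["B", "C", "A", "D"]   # options an examinee chose on four MC items
key       = ["B", "A", "A", "C"]   # keyed correct options

# Dichotomous scoring keeps only correct/incorrect...
dichotomous = [int(r == k) for r, k in zip(responses, key)]
print(dichotomous)                 # [1, 0, 1, 0] -- distractor information is lost

# ...whereas modeling the chosen options themselves lets different
# distractors point to different attribute profiles.
print(responses)                   # ['B', 'C', 'A', 'D']
```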
Feinberg, Richard A.; von Davier, Matthias – Journal of Educational and Behavioral Statistics, 2020
The literature showing that subscores fail to add value is vast; yet despite their typical redundancy and the frequent presence of substantial statistical errors, many stakeholders remain convinced of their necessity. This article describes a method for identifying and reporting unexpectedly high or low subscores by comparing each examinee's…
Descriptors: Scores, Probability, Statistical Distributions, Ability
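One generic way to formalize such a flag (the paper's exact comparison criterion is truncated above, so this rule is illustrative):

```latex
\text{flag subscore } k \iff
P\bigl(S_k \le s_k^{\mathrm{obs}} \,\big|\, \text{overall ability}\bigr) < \alpha
\quad\text{or}\quad
P\bigl(S_k \ge s_k^{\mathrm{obs}} \,\big|\, \text{overall ability}\bigr) < \alpha .
```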
Jordan M. Wheeler; Allan S. Cohen; Shiyu Wang – Journal of Educational and Behavioral Statistics, 2024
Topic models are mathematical and statistical models used to analyze textual data. Their objective is to recover the latent semantic space of a set of related documents: the relationships between documents and words, and how those words are used. Topic models are becoming…
Descriptors: Semantics, Educational Assessment, Evaluators, Reliability
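A minimal example of one widely used topic model, latent Dirichlet allocation, fitted with scikit-learn (toy corpus; the article discusses topic models generally):

```python
# Fit latent Dirichlet allocation (LDA) to a toy corpus and inspect the
# document-topic proportions, i.e., the latent semantic representation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "students wrote essays about fractions and decimals",
    "each essay was scored by two trained raters",
    "the raters agreed on most essay scores",
    "fractions and decimals appeared throughout the items",
]
X = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)   # rows: documents, columns: topic shares
print(doc_topics.round(2))
```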
Robitzsch, Alexander; Lüdtke, Oliver – Journal of Educational and Behavioral Statistics, 2022
One of the primary goals of international large-scale assessments in education is the comparison of country means in student achievement. This article introduces a framework for discussing differential item functioning (DIF) for such mean comparisons. We compare three different linking methods: concurrent scaling based on full invariance,…
Descriptors: Test Bias, International Assessment, Scaling, Comparative Analysis
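In standard notation (illustrative, not the paper's), DIF in this setting can be written as a country-specific deviation in item parameters:

```latex
b_{jc} = b_j + u_{jc},
```

where \(b_j\) is the common difficulty of item \(j\) and \(u_{jc} \neq 0\) indicates DIF for item \(j\) in country \(c\); full invariance, as assumed in concurrent scaling, sets \(u_{jc} = 0\) for all items and countries.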
Crompvoets, Elise A. V.; Béguin, Anton A.; Sijtsma, Klaas – Journal of Educational and Behavioral Statistics, 2020
Pairwise comparison is becoming increasingly popular as a holistic measurement method in education. Unfortunately, many comparisons are required for reliable measurement. To reduce the number of required comparisons, we developed an adaptive selection algorithm (ASA) that selects the most informative comparisons while taking the uncertainty of the…
Descriptors: Comparative Analysis, Statistical Analysis, Mathematics, Measurement
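A simplified version of the idea under a Bradley-Terry model (the authors' ASA also takes the uncertainty of the estimates into account, which this sketch omits):

```python
# Generic "most informative pair" heuristic under a Bradley-Terry model.
# Hypothetical ratings; not the authors' adaptive selection algorithm.
import itertools
import math

def win_prob(theta_i, theta_j):
    """Bradley-Terry probability that object i beats object j."""
    return 1.0 / (1.0 + math.exp(-(theta_i - theta_j)))

def most_informative_pair(thetas, compared):
    """Pick the not-yet-compared pair with maximal Fisher information p(1-p)."""
    best, best_info = None, -1.0
    for i, j in itertools.combinations(range(len(thetas)), 2):
        if (i, j) in compared:
            continue
        p = win_prob(thetas[i], thetas[j])
        info = p * (1.0 - p)   # largest when the two objects are closest
        if info > best_info:
            best, best_info = (i, j), info
    return best

print(most_informative_pair([0.0, 0.4, 1.5], compared=set()))  # (0, 1)
```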
Benjamin Lu; Eli Ben-Michael; Avi Feller; Luke Miratrix – Journal of Educational and Behavioral Statistics, 2023
In multisite trials, learning about treatment effect variation across sites is critical for understanding where and for whom a program works. Unadjusted comparisons, however, capture "compositional" differences in the distributions of unit-level features as well as "contextual" differences in site-level features, including…
Descriptors: Statistical Analysis, Statistical Distributions, Program Implementation, Comparative Analysis
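Schematically, in the spirit of a Kitagawa-Oaxaca-Blinder decomposition (illustrative notation, not the authors' estimator), the raw difference between sites \(s\) and \(s'\) splits into the two parts the abstract names:

```latex
\bar{Y}_s - \bar{Y}_{s'} =
\underbrace{(\bar{X}_s - \bar{X}_{s'})^{\prime}\hat{\beta}}_{\text{compositional (unit-level features)}}
\;+\;
\underbrace{\Delta_{ss'}}_{\text{contextual (site-level features)}} .
```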
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2022
This article develops new closed-form variance expressions for power analyses for commonly used difference-in-differences (DID) and comparative interrupted time series (CITS) panel data estimators. The main contribution is to incorporate variation in treatment timing into the analysis. The power formulas also account for other key design features…
Descriptors: Comparative Analysis, Statistical Analysis, Sample Size, Measurement Techniques
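For background, the basic two-group, two-period DID contrast and the standard power identity that such variance expressions plug into (textbook identities, not the article's new panel formulas):

```latex
\hat{\tau}_{\mathrm{DID}}
  = \bigl(\bar{Y}_{T,\mathrm{post}} - \bar{Y}_{T,\mathrm{pre}}\bigr)
  - \bigl(\bar{Y}_{C,\mathrm{post}} - \bar{Y}_{C,\mathrm{pre}}\bigr),
\qquad
\mathrm{MDE} = \bigl(z_{1-\alpha/2} + z_{1-\beta}\bigr)\,
               \sqrt{\mathrm{Var}\bigl(\hat{\tau}_{\mathrm{DID}}\bigr)} .
```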
Na Shan; Ping-Feng Xu – Journal of Educational and Behavioral Statistics, 2025
The detection of differential item functioning (DIF) is important in psychological and behavioral sciences. Standard DIF detection methods perform an item-by-item test iteratively, often assuming that all items except the one under investigation are DIF-free. This article proposes a Bayesian adaptive Lasso method to detect DIF in graded response…
Descriptors: Bayesian Statistics, Item Response Theory, Adolescents, Longitudinal Studies
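A sketch of an adaptive-Lasso-type shrinkage prior on DIF effects (assumed form; the paper's exact specification may differ):

```latex
\beta_{jg} \sim \mathrm{Laplace}\bigl(0, \lambda_j\bigr),
```

with item-specific scales \(\lambda_j\) adapting the shrinkage, so that most DIF effects \(\beta_{jg}\) are pulled to (near) zero and only items with clear group differences retain nonzero DIF, removing the need to assume all other items are DIF-free.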
George Leckie; Richard Parker; Harvey Goldstein; Kate Tilling – Journal of Educational and Behavioral Statistics, 2024
School value-added models are widely applied to study and monitor school differences in student learning and to hold schools to account for them. The traditional model is a mixed-effects linear regression of students' current achievement on their prior achievement, background characteristics, and a school random intercept effect. The latter is referred to…
Descriptors: Academic Achievement, Value Added Models, Accountability, Institutional Characteristics
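In symbols, the traditional model described above (notation ours):

```latex
y_{ij} = \beta_0 + \beta_1\, y_{ij}^{\mathrm{prior}}
       + \mathbf{x}_{ij}^{\prime}\boldsymbol{\gamma} + u_j + e_{ij},
\qquad
u_j \sim N(0, \sigma_u^2), \quad e_{ij} \sim N(0, \sigma_e^2),
```

where \(u_j\) is the school random intercept effect for school \(j\).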
Reagan Mozer; Luke Miratrix; Jackie Eunjung Relyea; James S. Kim – Journal of Educational and Behavioral Statistics, 2024
In a randomized trial that collects text as an outcome, traditional approaches for assessing treatment impact require that each document first be manually coded for constructs of interest by human raters. An impact analysis can then be conducted to compare treatment and control groups, using the hand-coded scores as a measured outcome. This…
Descriptors: Scoring, Evaluation Methods, Writing Evaluation, Comparative Analysis
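The traditional pipeline described above reduces to a simple contrast once documents are hand-coded; a toy illustration with hypothetical scores (not the authors' proposed method):

```python
# Toy version of the traditional approach: documents are hand-coded by
# raters, then mean coded scores are compared across arms.
import statistics

treatment_scores = [3, 4, 4, 5, 3]   # hand-coded essay scores, treatment group
control_scores   = [2, 3, 3, 4, 2]   # hand-coded essay scores, control group

impact = statistics.mean(treatment_scores) - statistics.mean(control_scores)
print(f"estimated impact on the coded outcome: {impact:.2f}")  # 1.00
```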
Wang, Chun; Nydick, Steven W. – Journal of Educational and Behavioral Statistics, 2020
Recent work on measuring growth with categorical outcome variables has combined the item response theory (IRT) measurement model with the latent growth curve model and extended the assessment of growth to multidimensional IRT models and higher order IRT models. However, there is a lack of synthetic studies that clearly evaluate the strength and…
Descriptors: Item Response Theory, Longitudinal Studies, Comparative Analysis, Models
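Schematically, the combination described above links an IRT measurement model to a linear latent growth curve (notation illustrative):

```latex
P\bigl(X_{ijt} = 1 \mid \theta_{it}\bigr)
  = \frac{1}{1 + \exp\{-a_j(\theta_{it} - b_j)\}},
\qquad
\theta_{it} = \eta_{0i} + \eta_{1i}\, t + \zeta_{it},
```

where \(\eta_{0i}\) and \(\eta_{1i}\) are person-specific intercept and slope factors governing growth over time \(t\).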
Erps, Ryan C.; Noguchi, Kimihiro – Journal of Educational and Behavioral Statistics, 2020
A new two-sample test for comparing variability measures is proposed. To make the test robust and powerful, a new modified structural zero removal method is applied to the Brown-Forsythe transformation. The t-test-based statistic allows results to be expressed as the ratio of mean absolute deviations from the median. Extensive simulation study…
Descriptors: Statistical Analysis, Comparative Analysis, Robustness (Statistics), Sample Size
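The Brown-Forsythe transformation referenced above replaces each observation by its absolute deviation from its group median before the t-test is applied:

```latex
z_{ij} = \bigl|\, x_{ij} - \mathrm{med}(x_{i\cdot}) \,\bigr| ,
```

and the resulting statistic can then be expressed as the ratio of the two groups' mean absolute deviations from the median.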