James E. Pustejovsky; Man Chen – Journal of Educational and Behavioral Statistics, 2024
Meta-analyses of educational research findings frequently involve statistically dependent effect size estimates. Meta-analysts have often addressed dependence issues using ad hoc approaches that involve modifying the data to conform to the assumptions of models for independent effect size estimates, such as by aggregating estimates to obtain one…
Descriptors: Meta Analysis, Multivariate Analysis, Effect Size, Evaluation Methods
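The ad hoc aggregation the abstract alludes to is often a simple within-study average of correlated effect size estimates, with a composite variance computed under an assumed common correlation. A minimal sketch of that conventional approach, assuming Borenstein-style composite formulas; the function name, data values, and the correlation r are illustrative, not taken from the article:

    import numpy as np

    def aggregate_study(effects, variances, r=0.7):
        """Average m correlated effect size estimates from one study.

        The composite variance assumes a common correlation r among the
        estimates (a conventional ad hoc choice, not a value from the article).
        """
        effects = np.asarray(effects, dtype=float)
        variances = np.asarray(variances, dtype=float)
        m = len(effects)
        sds = np.sqrt(variances)
        # Var of the mean of correlated estimates:
        # (1/m^2) * [sum(v_i) + sum_{i != j} r * sd_i * sd_j]
        cov_sum = variances.sum() + r * (np.outer(sds, sds).sum() - (sds**2).sum())
        return effects.mean(), cov_sum / m**2

    # Example: three dependent estimates from one study
    print(aggregate_study([0.20, 0.35, 0.25], [0.04, 0.05, 0.04]))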
William R. Dardick; Jeffrey R. Harring – Journal of Educational and Behavioral Statistics, 2025
Simulation studies are the basic tools of quantitative methodologists used to obtain empirical solutions to statistical problems that may be impossible to derive through direct mathematical computations. The successful execution of many simulation studies relies on the accurate generation of correlated multivariate data that adhere to a particular…
Descriptors: Statistics, Statistics Education, Problem Solving, Multivariate Analysis
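One common way to generate the kind of correlated multivariate data the abstract describes is to apply a Cholesky factor of a target correlation matrix to independent normal draws. A minimal sketch, assuming multivariate normality (only one of the distributional families a simulation study may require); the target matrix and sample size are illustrative:

    import numpy as np

    rng = np.random.default_rng(2024)

    # Illustrative target correlation matrix (not from the article)
    R = np.array([[1.0, 0.5, 0.3],
                  [0.5, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])

    L = np.linalg.cholesky(R)             # R = L @ L.T
    Z = rng.standard_normal((10_000, 3))  # independent standard normals
    X = Z @ L.T                           # rows now correlate approximately as R

    print(np.corrcoef(X, rowvar=False).round(2))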
Sang-June Park; Youjae Yi – Journal of Educational and Behavioral Statistics, 2024
Previous research explicates ordinal and disordinal interactions through the concept of the "crossover point." This point is determined via simple regression models of a focal predictor at specific moderator values and signifies the intersection of these models. An interaction effect is labeled as disordinal (or ordinal) when the…
Descriptors: Interaction, Predictor Variables, Causal Models, Mathematical Models
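For a model with a focal predictor x, a moderator m, and their product term, y = b0 + b1*x + b2*m + b3*x*m, the simple regression lines of x drawn at any two moderator values intersect at a single crossover point, x* = -b2/b3. A minimal sketch of that algebra and of the usual ordinal-versus-disordinal labeling; the coefficient values and observed range below are illustrative, not from the article:

    def crossover_point(b2, b3):
        """x-value where the simple regression lines of the focal predictor,
        drawn at any two moderator values, intersect: x* = -b2 / b3."""
        if b3 == 0:
            raise ValueError("No interaction (b3 = 0): the lines are parallel.")
        return -b2 / b3

    # Illustrative coefficients (not from the article)
    x_star = crossover_point(b2=0.8, b3=-0.4)
    x_min, x_max = 0.0, 5.0          # observed range of the focal predictor
    kind = "disordinal" if x_min <= x_star <= x_max else "ordinal"
    print(x_star, kind)              # 2.0, disordinal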
Zachary K. Collier; Minji Kong; Olushola Soyoye; Kamal Chawla; Ann M. Aviles; Yasser Payne – Journal of Educational and Behavioral Statistics, 2024
Asymmetric Likert-type items in research studies can present several challenges in data analysis, particularly concerning missing data. These items are often characterized by a skewed scaling, where either there is no neutral response option or an unequal number of possible positive and negative responses. The use of conventional techniques, such…
Descriptors: Likert Scales, Test Items, Item Analysis, Evaluation Methods
Daniel Koretz – Journal of Educational and Behavioral Statistics, 2024
A critically important balance in educational measurement between practical concerns and matters of technique has atrophied in recent decades, and as a result, some important issues in the field have not been adequately addressed. I start with the work of E. F. Lindquist, who exemplified the balance that is now wanting. Lindquist was arguably the…
Descriptors: Educational Assessment, Evaluation Methods, Achievement Tests, Educational History
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 2019
Lord's (1980) equity theorem claims observed-score equating to be possible only when two test forms are perfectly reliable or strictly parallel. An analysis of its proof reveals use of an incorrect statistical assumption. The assumption does not invalidate the theorem itself though, which can be shown to follow directly from the discrete nature of…
Descriptors: Equated Scores, Testing Problems, Item Response Theory, Evaluation Methods
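For orientation, the equity requirement at issue can be stated as a condition on conditional score distributions. The notation below is a hedged paraphrase in the style of the equating literature, not necessarily the article's own symbols: with phi the transformation equating scores on form X to the scale of form Y, equating is equitable when

    F_{\varphi(X) \mid \theta}(s) = F_{Y \mid \theta}(s) \qquad \text{for every ability } \theta \text{ and every score } s,

that is, the equated score and the score it replaces have the same conditional distribution at every ability level; the theorem says this can hold for observed scores only under perfect reliability or strictly parallel forms.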
van der Linden, Wim J.; Ren, Hao – Journal of Educational and Behavioral Statistics, 2020
The Bayesian way of accounting for the effects of error in the ability and item parameters in adaptive testing is through the joint posterior distribution of all parameters. An optimized Markov chain Monte Carlo algorithm for adaptive testing is presented, which samples this distribution in real time to score the examinee's ability and optimally…
Descriptors: Bayesian Statistics, Adaptive Testing, Error of Measurement, Markov Processes
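As a much-simplified illustration of sampling an examinee's posterior in real time, the sketch below runs a random-walk Metropolis sampler for ability under a 2PL model with item parameters treated as fixed. The article's algorithm samples the joint posterior of ability and item parameters and selects items optimally, so this is only a reduced-form sketch with illustrative parameter values and responses:

    import numpy as np

    rng = np.random.default_rng(7)

    # Illustrative 2PL item parameters and observed responses (not from the article)
    a = np.array([1.2, 0.8, 1.5, 1.0])
    b = np.array([-0.5, 0.0, 0.7, 1.2])
    u = np.array([1, 1, 0, 0])

    def log_posterior(theta):
        # Standard normal prior on theta plus 2PL Bernoulli likelihood
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return -0.5 * theta**2 + np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))

    theta, draws = 0.0, []
    for _ in range(5000):
        proposal = theta + rng.normal(scale=0.5)       # random-walk proposal
        if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
            theta = proposal
        draws.append(theta)

    draws = np.array(draws[1000:])                      # drop burn-in
    print(draws.mean(), draws.std())                    # posterior ability summary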
Park, Soojin; Palardy, Gregory J. – Journal of Educational and Behavioral Statistics, 2020
Estimating the effects of randomized experiments and, by extension, their mediating mechanisms, is often complicated by treatment noncompliance. Two estimation methods for causal mediation in the presence of noncompliance have recently been proposed: the instrumental variable method (IV-mediate) and the maximum likelihood method (ML-mediate). However,…
Descriptors: Computation, Compliance (Psychology), Maximum Likelihood Statistics, Statistical Analysis
Monroe, Scott – Journal of Educational and Behavioral Statistics, 2019
In item response theory (IRT) modeling, the Fisher information matrix is used for numerous inferential procedures such as estimating parameter standard errors, constructing test statistics, and facilitating test scoring. In principle, these procedures may be carried out using either the expected information or the observed information. However, in…
Descriptors: Item Response Theory, Error of Measurement, Scoring, Inferences
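One way to see the distinction the abstract draws is to compute both quantities for the ability parameter under a 3PL model, where they genuinely differ (under a 2PL the observed and expected information for theta coincide). The item parameters, response pattern, and ability value below are illustrative, not from the article:

    import numpy as np

    # Illustrative 3PL item parameters and one response pattern (not from the article)
    a = np.array([1.0, 1.3, 0.9])
    b = np.array([-0.4, 0.3, 1.0])
    c = np.array([0.20, 0.15, 0.25])
    u = np.array([1, 0, 1])
    theta = 0.5

    def prob(t):
        return c + (1 - c) / (1 + np.exp(-a * (t - b)))

    def loglik(t):
        p = prob(t)
        return np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))

    # Expected (Fisher) information: sum of P'^2 / (P * Q); no data involved
    logistic = 1 / (1 + np.exp(-a * (theta - b)))
    dP = (1 - c) * a * logistic * (1 - logistic)
    p = prob(theta)
    expected_info = np.sum(dP**2 / (p * (1 - p)))

    # Observed information: minus the second derivative of the log-likelihood,
    # approximated numerically, and dependent on the responses u
    h = 1e-3
    observed_info = -(loglik(theta + h) - 2 * loglik(theta) + loglik(theta - h)) / h**2

    print(expected_info, observed_info)   # the two values differ under the 3PL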
Dai, Shenghai; Svetina, Dubravka; Wang, Xiaolin – Journal of Educational and Behavioral Statistics, 2017
There is an increasing interest in reporting test subscores for diagnostic purposes. In this article, we review nine popular R packages (subscore, mirt, TAM, sirt, CDM, NPCD, lavaan, sem, and OpenMX) that are capable of implementing subscore-reporting methods within one or more frameworks including classical test theory, multidimensional item…
Descriptors: Diagnostic Tests, Scores, Computer Software, Item Response Theory
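As a small illustration of the classical-test-theory end of that spectrum, subscale raw scores and an internal-consistency estimate can be computed directly. The sketch below uses Cronbach's alpha and fabricated binary responses; it does not use any of the packages or data sets reviewed in the article:

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative 0/1 responses: 200 examinees, 6 items in one subdomain,
    # made to correlate through a common ability factor
    ability = rng.standard_normal((200, 1))
    items = (ability + rng.standard_normal((200, 6)) > 0).astype(float)

    subscores = items.sum(axis=1)          # CTT subscore = subscale sum score

    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = subscores.var(ddof=1)
    alpha = k / (k - 1) * (1 - item_vars.sum() / total_var)   # Cronbach's alpha

    print(subscores[:5], round(alpha, 3))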
Reardon, Sean F.; Shear, Benjamin R.; Castellano, Katherine E.; Ho, Andrew D. – Journal of Educational and Behavioral Statistics, 2017
Test score distributions of schools or demographic groups are often summarized by frequencies of students scoring in a small number of ordered proficiency categories. We show that heteroskedastic ordered probit (HETOP) models can be used to estimate means and standard deviations of multiple groups' test score distributions from such data. Because…
Descriptors: Scores, Statistical Analysis, Models, Computation
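The basic estimation idea can be sketched as maximizing a multinomial likelihood in which each group has its own mean and standard deviation while all groups share the same category cutpoints. Everything below (cutpoints treated as known, counts, per-group fits) is a simplification with illustrative numbers rather than the article's full HETOP specification:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    # Illustrative counts of students in 4 ordered proficiency categories
    # for two groups, with the category cutpoints treated as known
    cuts = np.array([-0.8, 0.0, 0.9])
    counts = np.array([[10, 30, 40, 20],
                       [25, 35, 25, 15]], dtype=float)

    def neg_loglik(params, group_counts):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        cdf = norm.cdf((cuts - mu) / sigma)
        probs = np.diff(np.concatenate(([0.0], cdf, [1.0])))
        return -np.sum(group_counts * np.log(probs))

    for g, group_counts in enumerate(counts):
        fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(group_counts,))
        mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
        print(f"group {g}: mean {mu_hat:.2f}, sd {sigma_hat:.2f}")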
McCoach, D. Betsy; Rifenbark, Graham G.; Newton, Sarah D.; Li, Xiaoran; Kooken, Janice; Yomtov, Dani; Gambino, Anthony J.; Bellara, Aarti – Journal of Educational and Behavioral Statistics, 2018
This study compared five common multilevel software packages via Monte Carlo simulation: HLM 7, Mplus 7.4, R (lme4 V1.1-12), Stata 14.1, and SAS 9.4 to determine how the programs differ in estimation accuracy and speed, as well as convergence, when modeling multiple randomly varying slopes of different magnitudes. Simulated data…
Descriptors: Hierarchical Linear Modeling, Computer Software, Comparative Analysis, Monte Carlo Methods
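The kind of data such a comparison rests on can be generated directly. This minimal sketch simulates a two-level model with a random intercept and one randomly varying slope; the group counts, variance components, and fixed effects are illustrative, and fitting the model in HLM, Mplus, lme4, Stata, or SAS is left to those packages:

    import numpy as np

    rng = np.random.default_rng(42)

    n_groups, n_per_group = 100, 30
    gamma00, gamma10 = 2.0, 0.5          # fixed intercept and slope
    tau0, tau1, sigma = 0.8, 0.3, 1.0    # random-effect and residual SDs

    group = np.repeat(np.arange(n_groups), n_per_group)
    x = rng.standard_normal(n_groups * n_per_group)

    u0 = rng.normal(0, tau0, n_groups)   # random intercepts
    u1 = rng.normal(0, tau1, n_groups)   # randomly varying slopes
    e = rng.normal(0, sigma, n_groups * n_per_group)

    y = (gamma00 + u0[group]) + (gamma10 + u1[group]) * x + e
    print(y[:5])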
Gu, Fei; Preacher, Kristopher J.; Ferrer, Emilio – Journal of Educational and Behavioral Statistics, 2014
Mediation is a causal process that evolves over time. Thus, a study of mediation requires data collected throughout the process. However, most applications of mediation analysis use cross-sectional rather than longitudinal data. Another implicit assumption commonly made in longitudinal designs for mediation analysis is that the same mediation…
Descriptors: Statistical Analysis, Models, Research Design, Case Studies
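For contrast with the longitudinal designs the abstract argues for, the familiar cross-sectional product-of-coefficients estimator can be written in a few lines. The variable names and simulated data below are illustrative, not the article's example:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 500

    # Illustrative cross-sectional data with a true indirect path X -> M -> Y
    x = rng.standard_normal(n)
    m = 0.5 * x + rng.standard_normal(n)
    y = 0.4 * m + 0.2 * x + rng.standard_normal(n)

    def ols(design, outcome):
        coefs, *_ = np.linalg.lstsq(design, outcome, rcond=None)
        return coefs

    ones = np.ones(n)
    a = ols(np.column_stack([ones, x]), m)[1]        # X -> M path
    b = ols(np.column_stack([ones, x, m]), y)[2]     # M -> Y path, controlling X
    print(a * b)                                     # product-of-coefficients indirect effect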
Wagler, Amy E. – Journal of Educational and Behavioral Statistics, 2014
Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…
Descriptors: Hierarchical Linear Modeling, Cluster Grouping, Heterogeneous Grouping, Monte Carlo Methods
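One familiar way to express the clustering effect on a scale comparable across models is the latent-threshold intraclass correlation for a random-intercept logistic GLMM, which fixes the level-1 variance at pi^2/3. The variance component below is illustrative, and the article's proposed confidence intervals concern related quantities rather than this simple point estimate:

    import math

    # Illustrative cluster-level variance from a random-intercept logistic GLMM
    sigma2_u = 0.65

    # Latent-scale ICC: cluster variance over total variance, with the
    # level-1 logistic variance fixed at pi^2 / 3
    icc = sigma2_u / (sigma2_u + math.pi**2 / 3)
    print(round(icc, 3))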
Fan, Weihua; Hancock, Gregory R. – Journal of Educational and Behavioral Statistics, 2012
This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…
Descriptors: Robustness (Statistics), Hypothesis Testing, Monte Carlo Methods, Simulation
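The problem the RMM approaches target can be illustrated with the simplest heteroscedasticity-robust alternative to the pooled-variance t test, Welch's test, which likewise drops the homogeneity-of-variance assumption. The data below are illustrative, and RMM itself is formulated in an SEM framework rather than as this test:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Illustrative two-group data with unequal variances and unequal n
    group1 = rng.normal(loc=0.0, scale=1.0, size=30)
    group2 = rng.normal(loc=0.5, scale=2.5, size=80)

    pooled = stats.ttest_ind(group1, group2, equal_var=True)    # assumes equal variances
    welch = stats.ttest_ind(group1, group2, equal_var=False)    # Welch: no homogeneity assumption
    print(pooled.pvalue, welch.pvalue)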