Publication Date
In 2025: 0
Since 2024: 2
Since 2021 (last 5 years): 12
Since 2016 (last 10 years): 25
Since 2006 (last 20 years): 37
Descriptor
Error of Measurement: 69
Monte Carlo Methods: 26
Sample Size: 18
Statistical Analysis: 18
Statistical Bias: 17
Correlation: 13
Regression (Statistics): 13
Effect Size: 12
Comparative Analysis: 11
Computation: 11
Goodness of Fit: 11
Source
Journal of Experimental Education: 69
Publication Type
Journal Articles: 67
Reports - Research: 54
Reports - Evaluative: 9
Reports - Descriptive: 3
Opinion Papers: 2
Information Analyses: 1
Numerical/Quantitative Data: 1
Education Level
Elementary Education: 2
Grade 5: 1
Grade 6: 1
Intermediate Grades: 1
Audience
Researchers: 1
Location
California: 1
Israel: 1
Laws, Policies, & Programs
Assessments and Surveys
Early Childhood Longitudinal…: 2
Big Five Inventory: 1
Child Behavior Checklist: 1
Iowa Tests of Basic Skills: 1
Wechsler Intelligence Scale…: 1
Hsin-Yun Lee; You-Lin Chen; Li-Jen Weng – Journal of Experimental Education, 2024
The second version of Kaiser's Measure of Sampling Adequacy (MSA₂) has been widely applied to assess the factorability of data in psychological research. MSA₂ is defined at the population level, and little is known about its behavior in finite samples. If estimated MSA₂ values are biased due to sampling error,…
Descriptors: Error of Measurement, Reliability, Sampling, Statistical Bias
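For orientation only, the sketch below computes an overall Kaiser measure of sampling adequacy from a sample correlation matrix using the standard Kaiser-Rice formula (squared correlations against squared anti-image partial correlations) on simulated one-factor data; the exact estimator and simulation design studied in this article may differ.

```python
import numpy as np

def kaiser_msa(R):
    """Overall Kaiser MSA (KMO) from a correlation matrix R.

    Assumes the standard Kaiser-Rice form: sum of squared off-diagonal
    correlations relative to that sum plus the sum of squared off-diagonal
    partial (anti-image) correlations.
    """
    R = np.asarray(R, dtype=float)
    Rinv = np.linalg.inv(R)
    scale = np.sqrt(np.outer(np.diag(Rinv), np.diag(Rinv)))
    partial = -Rinv / scale                   # partial correlations
    off = ~np.eye(R.shape[0], dtype=bool)     # off-diagonal mask
    r2, p2 = np.sum(R[off] ** 2), np.sum(partial[off] ** 2)
    return r2 / (r2 + p2)

# Toy check on simulated one-factor data (200 cases, 6 indicators)
rng = np.random.default_rng(1)
f = rng.normal(size=(200, 1))
X = 0.7 * f @ np.ones((1, 6)) + rng.normal(size=(200, 6))
print(kaiser_msa(np.corrcoef(X, rowvar=False)))
```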
Ting Dai; Yang Du; Jennifer Cromley; Tia Fechter; Frank Nelson – Journal of Experimental Education, 2024
Simple matrix sampling planned missing (SMS PD) designs introduce missing data patterns in which some variables are never jointly observed, so their covariances cannot be estimated from observed pairs, creating difficulties for analyses other than mean and variance estimation. Based on prior research, we adopted a new multigroup confirmatory factor analysis (CFA) approach to handle…
Descriptors: Research Problems, Research Design, Data, Matrices
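To make the missing-data problem concrete, here is a small hypothetical sketch of a simple matrix sampling design in which each respondent is administered only one block of items, so items from different blocks are never observed together; the item counts and block structure are invented, and the multigroup CFA approach itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
blocks = {"A": [0, 1], "B": [2, 3], "C": [4, 5]}   # 6 items in 3 blocks (hypothetical)
X = rng.normal(size=(n, 6))

# Simple matrix sampling: each respondent receives exactly one block,
# so items from different blocks are never observed together.
assigned = rng.choice(list(blocks), size=n)
for i, block in enumerate(assigned):
    X[i, [j for j in range(6) if j not in blocks[block]]] = np.nan

# Pairwise joint-observation counts: zeros off the block diagonal mark the
# covariances that ordinary complete-pair analyses cannot estimate.
observed = (~np.isnan(X)).astype(int)
print(observed.T @ observed)
```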
Vispoel, Walter P.; Lee, Hyeryung; Xu, Guanlan; Hong, Hyeri – Journal of Experimental Education, 2023
Although generalizability theory (GT) designs have traditionally been analyzed within an ANOVA framework, identical results can be obtained with structural equation models (SEMs), which can further be extended to represent multiple sources of both systematic and measurement error variance and to include estimation methods less likely to produce negative variance…
Descriptors: Generalizability Theory, Structural Equation Models, Programming Languages, Scores
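For reference on the traditional side of this comparison, the sketch below computes the classic ANOVA (expected-mean-squares) variance-component estimates for a one-facet persons x items G-study from simulated data with made-up component values; the SEM formulation discussed in the article is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
n_p, n_i = 100, 8
# Simulated persons x items scores with known (made-up) variance components:
# persons 1.0, items 0.25, residual 0.64
X = (5 + rng.normal(0, 1.0, (n_p, 1))        # person effects
       + rng.normal(0, 0.5, (1, n_i))        # item effects
       + rng.normal(0, 0.8, (n_p, n_i)))     # residual

grand = X.mean()
ms_p = n_i * np.sum((X.mean(axis=1) - grand) ** 2) / (n_p - 1)
ms_i = n_p * np.sum((X.mean(axis=0) - grand) ** 2) / (n_i - 1)
ms_res = np.sum((X - X.mean(axis=1, keepdims=True)
                   - X.mean(axis=0, keepdims=True) + grand) ** 2) / ((n_p - 1) * (n_i - 1))

# Expected-mean-squares solutions for the three variance components
var_res = ms_res
var_p = (ms_p - ms_res) / n_i
var_i = (ms_i - ms_res) / n_p
print(round(var_p, 3), round(var_i, 3), round(var_res, 3))
```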
Jamshidi, Laleh; Declercq, Lies; Fernández-Castilla, Belén; Ferron, John M.; Moeyaert, Mariola; Beretvas, S. Natasha; Van den Noortgate, Wim – Journal of Experimental Education, 2021
Previous research found bias in the estimates of the overall fixed effects and variance components when multilevel meta-analysis is applied to standardized single-case data. We therefore evaluate two adjustments intended to reduce this bias and improve the statistical properties of the parameter estimates. The results confirm the existence of bias when…
Descriptors: Statistical Bias, Multivariate Analysis, Meta Analysis, Research Design
Fernández-Castilla, Belén; Declercq, Lies; Jamshidi, Laleh; Beretvas, S. Natasha; Onghena, Patrick; Van den Noortgate, Wim – Journal of Experimental Education, 2021
This study explores the performance of classical methods for detecting publication bias (Egger's regression test, the funnel plot test, Begg's rank correlation, and the trim-and-fill method) in meta-analyses of studies that report multiple effects. Publication bias, outcome reporting bias, and a combination of the two were generated. Egger's…
Descriptors: Statistical Bias, Meta Analysis, Publications, Regression (Statistics)
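As a point of reference for one of the methods named above, this is a minimal sketch of Egger's regression test on invented effect sizes and standard errors; it treats every effect as independent, which is exactly the assumption that multiple effects per study call into question. It needs SciPy 1.6+ for intercept_stderr.

```python
import numpy as np
from scipy import stats

def egger_test(effects, ses):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses the standardized effect (effect / SE) on precision (1 / SE)
    and tests whether the intercept differs from zero.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    fit = stats.linregress(1.0 / ses, effects / ses)
    t = fit.intercept / fit.intercept_stderr
    p = 2 * stats.t.sf(abs(t), df=len(effects) - 2)
    return fit.intercept, t, p

# Invented effects and standard errors for ten studies (illustration only)
eff = [0.42, 0.31, 0.55, 0.12, 0.60, 0.25, 0.38, 0.49, 0.20, 0.66]
se = [0.10, 0.15, 0.20, 0.08, 0.25, 0.12, 0.18, 0.22, 0.09, 0.30]
print(egger_test(eff, se))
```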
Weiss, Brandi A.; Dardick, William – Journal of Experimental Education, 2021
Classification measures and entropy variants can be used as indicators of model fit for logistic regression. These measures rely on a cut-point, "c," to determine predicted group membership. While recommendations exist for determining the location of the cut-point, these methods are primarily anecdotal. The current study used Monte Carlo…
Descriptors: Cutting Scores, Regression (Statistics), Classification, Monte Carlo Methods
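How the choice of c moves the classification measures can be seen in a few lines; the sketch below simulates outcomes from a hypothetical logistic model (coefficients invented for illustration) and reports accuracy, sensitivity, and specificity at three candidate cut-points. It does not reproduce the article's Monte Carlo design.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
prob = 1 / (1 + np.exp(-(-0.5 + 1.2 * x)))   # hypothetical logistic model
y = rng.binomial(1, prob)

# Classification measures as a function of the cut-point c
for c in (0.3, 0.5, 0.7):
    pred = (prob >= c).astype(int)
    acc = np.mean(pred == y)
    sens = np.mean(pred[y == 1] == 1)
    spec = np.mean(pred[y == 0] == 0)
    print(f"c={c:.1f}  accuracy={acc:.3f}  sensitivity={sens:.3f}  specificity={spec:.3f}")
```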
Zhang, Zhonghua – Journal of Experimental Education, 2022
Reporting standard errors of equating has been advocated as standard practice when conducting test equating. The two most widely applied procedures, the bootstrap method and the delta method, are either computationally intensive or depend on the derivation of complicated formulas. In the current study,…
Descriptors: Error of Measurement, Item Response Theory, True Scores, Equated Scores
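To illustrate the bootstrap side of that comparison, here is a deliberately simplified sketch that bootstraps the standard error of a mean-equated score for two simulated forms; the article works with IRT true-score equating, which is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(2)
form_x = rng.normal(50, 10, size=300)   # simulated scores on Form X
form_y = rng.normal(52, 10, size=300)   # simulated scores on Form Y

def mean_equate(score, x, y):
    """Mean equating: shift a Form X score onto the Form Y scale."""
    return score - x.mean() + y.mean()

# Bootstrap standard error of the equated value at one score point
B, score = 1000, 55
boot = np.empty(B)
for b in range(B):
    xb = rng.choice(form_x, size=form_x.size, replace=True)
    yb = rng.choice(form_y, size=form_y.size, replace=True)
    boot[b] = mean_equate(score, xb, yb)
print("Bootstrap SE of the equated score:", boot.std(ddof=1))
```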
Weiss, Brandi A.; Dardick, William – Journal of Experimental Education, 2020
Researchers are often reluctant to rely on classification rates because a model with favorable classification rates but poor separation may not replicate well. In comparison, entropy captures information about borderline cases unlikely to generalize to the population. In logistic regression, the correctness of predicted group membership is known,…
Descriptors: Classification, Regression (Statistics), Goodness of Fit, Monte Carlo Methods
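A minimal sketch of the intuition: an entropy-type index computed from predicted probabilities separates sharply separated predictions from borderline ones even when both sets imply the same predicted classes at c = .5. The scaling below is one common choice and not necessarily the specific entropy variant examined in the study.

```python
import numpy as np

def scaled_entropy(p):
    """Mean Shannon entropy of predicted probabilities, scaled to [0, 1].

    Values near 0: probabilities sit close to 0 or 1 (clear separation).
    Values near 1: probabilities cluster around 0.5 (borderline cases).
    """
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1 - 1e-12)
    h = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    return h.mean() / np.log(2)      # log(2) is the maximum for a binary outcome

# Two invented sets of predicted probabilities that give identical
# predicted classes at c = .5 but very different separation
well_separated = np.array([0.05, 0.95, 0.02, 0.98, 0.10, 0.90])
borderline = np.array([0.45, 0.55, 0.48, 0.52, 0.49, 0.51])
print(scaled_entropy(well_separated), scaled_entropy(borderline))
```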
Liu, Yixing; Thompson, Marilyn S. – Journal of Experimental Education, 2022
A simulation study was conducted to explore the impact of differential item functioning (DIF) on general factor difference estimation for bifactor, ordinal data. Common analysis misspecifications, in which the generated bifactor data with DIF were fitted using models with equality constraints on noninvariant item parameters, were compared under data…
Descriptors: Comparative Analysis, Item Analysis, Sample Size, Error of Measurement
Baek, Eunkyeng; Luo, Wen; Henri, Maria – Journal of Experimental Education, 2022
It is common to include multiple dependent variables (DVs) in single-case experimental design (SCED) meta-analyses. However, statistical issues associated with multiple DVs in the multilevel modeling approach (i.e., possible dependency of error, heterogeneous treatment effects, and heterogeneous error structures) have not been fully investigated.…
Descriptors: Meta Analysis, Hierarchical Linear Modeling, Comparative Analysis, Statistical Inference
Jia, Yuane; Konold, Timothy – Journal of Experimental Education, 2021
Traditional observed variable multilevel models for evaluating indirect effects are limited by their inability to quantify measurement and sampling error. They are further restricted by being unable to fully separate within- and between-level effects without bias. Doubly latent models reduce these biases by decomposing the observed within-level…
Descriptors: Hierarchical Linear Modeling, Educational Environment, Aggression, Bullying
Nazari, Sanaz; Leite, Walter L.; Huggins-Manley, A. Corinne – Journal of Experimental Education, 2023
Piecewise latent growth models (PWLGMs) can be used to study changes in the growth trajectory of an outcome due to an event or condition, such as exposure to an intervention. When there are multiple outcomes of interest, a researcher may choose to fit a series of PWLGMs or a single parallel-process PWLGM. A comparison of these models is…
Descriptors: Growth Models, Statistical Analysis, Intervention, Comparative Analysis
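The core of any piecewise growth specification is the time coding around the knot; the tiny sketch below builds the two slope codes for a hypothetical six-wave design with the knot after the third occasion. Fitting the latent growth model itself requires an SEM package and is not shown.

```python
import numpy as np

# Hypothetical six measurement occasions with a knot after the third one
waves = np.arange(6)
knot = 2
slope1 = np.minimum(waves, knot)       # grows until the knot, then flat: 0 1 2 2 2 2
slope2 = np.maximum(waves - knot, 0)   # flat until the knot, then grows: 0 0 0 1 2 3

# Design matrix: intercept, pre-knot slope, post-knot slope
print(np.column_stack([np.ones_like(waves), slope1, slope2]))
```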
Joo, Seang-Hwane; Ferron, John M.; Moeyaert, Mariola; Beretvas, S. Natasha; Van den Noortgate, Wim – Journal of Experimental Education, 2019
Multilevel modeling has been utilized for combining single-case experimental design (SCED) data assuming simple level-1 error structures. The purpose of this study is to compare various multilevel analysis approaches for handling potential complexity in the level-1 error structure within SCED data, including approaches assuming simple and complex…
Descriptors: Hierarchical Linear Modeling, Synthesis, Data Analysis, Accuracy
Leite, Walter L.; Aydin, Burak; Gurel, Sungur – Journal of Experimental Education, 2019
This Monte Carlo simulation study compares methods to estimate the effects of programs with multiple versions when assignment of individuals to program version is not random. These methods use generalized propensity scores, which are predicted probabilities of receiving a particular level of the treatment conditional on covariates, to remove…
Descriptors: Probability, Weighted Scores, Monte Carlo Methods, Statistical Bias
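The abstract defines generalized propensity scores as predicted probabilities of receiving a particular treatment level given covariates; the sketch below estimates them with a multinomial logistic model in scikit-learn on simulated data with three invented program versions and forms simple inverse-probability weights. The particular weighting estimators compared in the article are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 1000
X = rng.normal(size=(n, 3))                        # covariates (invented)

# Non-random assignment to three program versions driven by the covariates
coefs = np.array([[0.0, 0.8, -0.5],
                  [0.0, 0.4, 0.9],
                  [0.0, -0.6, 0.3]])               # covariate-by-version effects (invented)
logits = X @ coefs
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
version = np.array([rng.choice(3, p=p) for p in probs])

# Generalized propensity score: predicted probability of the version actually
# received, conditional on covariates; then simple inverse-probability weights
gps_model = LogisticRegression(max_iter=1000).fit(X, version)
gps = gps_model.predict_proba(X)[np.arange(n), version]
weights = 1.0 / gps
print(weights.min(), weights.mean(), weights.max())
```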
Chang, Wanchen; Pituch, Keenan A. – Journal of Experimental Education, 2019
When data for multiple outcomes are collected in a multilevel design, researchers can select a univariate or multivariate analysis to examine group-mean differences. When correlated outcomes are incomplete, a multivariate multilevel model (MVMM) may provide greater power than univariate multilevel models (MLMs). For a two-group multilevel design…
Descriptors: Hierarchical Linear Modeling, Multivariate Analysis, Research Problems, Error of Measurement