Showing 1 to 15 of 69 results
Peer reviewed
Hsin-Yun Lee; You-Lin Chen; Li-Jen Weng – Journal of Experimental Education, 2024
The second version of Kaiser's Measure of Sampling Adequacy (MSA₂) has been widely applied to assess the factorability of data in psychological research. The MSA₂ was developed at the population level, however, and little is known about its behavior in finite samples. If estimated MSA₂s are biased due to sampling errors,…
Descriptors: Error of Measurement, Reliability, Sampling, Statistical Bias
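The overall Kaiser measure this abstract refers to can be computed from a correlation matrix's anti-image partial correlations. This is a minimal sketch of the generic KMO/MSA statistic; the MSA₂ variant studied in the paper may differ in detail:

```python
import numpy as np

def kmo(corr):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy for a
    correlation matrix (generic sketch, not the paper's MSA_2)."""
    R = np.array(corr, dtype=float)          # copy so the caller's matrix is untouched
    Rinv = np.linalg.inv(R)
    # Anti-image (partial) correlations from the inverse correlation matrix.
    d = np.sqrt(np.diag(Rinv))
    P = -Rinv / np.outer(d, d)
    np.fill_diagonal(P, 0.0)
    np.fill_diagonal(R, 0.0)                 # keep off-diagonal entries only
    r2 = (R ** 2).sum()                      # squared observed correlations
    p2 = (P ** 2).sum()                      # squared partial correlations
    return r2 / (r2 + p2)

R = [[1.0, 0.5, 0.4],
     [0.5, 1.0, 0.3],
     [0.4, 0.3, 1.0]]
```

Values near 1 suggest the correlations are largely explainable by shared factors; values near 0 suggest the data are unsuitable for factor analysis.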
Peer reviewed
Ting Dai; Yang Du; Jennifer Cromley; Tia Fechter; Frank Nelson – Journal of Experimental Education, 2024
Simple matrix sampling planned missing (SMS PD) designs introduce missing data patterns that lead to covariances between variables that are not jointly observed, creating difficulties for analyses other than mean and variance estimation. Based on prior research, we adopted a new multigroup confirmatory factor analysis (CFA) approach to handle…
Descriptors: Research Problems, Research Design, Data, Matrices
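The design problem the abstract describes can be simulated directly. In this sketch (block layout and sample sizes are assumptions for illustration) each respondent is administered only one block of items, so items from different blocks are never jointly observed and their covariance is inestimable, even though every item's mean and variance remain estimable:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_items = 9, 6
data = rng.normal(size=(n_people, n_items))

# Hypothetical matrix-sampling layout: three blocks of two items each;
# respondent i receives only block i % 3.
blocks = [[0, 1], [2, 3], [4, 5]]
for i in range(n_people):
    keep = blocks[i % 3]
    for j in range(n_items):
        if j not in keep:
            data[i, j] = np.nan

# Items 0 and 2 sit in different blocks: no respondent observes both,
# so their pairwise covariance cannot be computed.
jointly_observed = (~np.isnan(data[:, 0])) & (~np.isnan(data[:, 2]))
```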
Peer reviewed
Vispoel, Walter P.; Lee, Hyeryung; Xu, Guanlan; Hong, Hyeri – Journal of Experimental Education, 2023
Although generalizability theory (GT) designs have traditionally been analyzed within an ANOVA framework, identical results can be obtained with structural equation models (SEMs), which can be extended to represent multiple sources of both systematic and measurement error variance and include estimation methods less likely to produce negative variance…
Descriptors: Generalizability Theory, Structural Equation Models, Programming Languages, Scores
Peer reviewed
Jamshidi, Laleh; Declercq, Lies; Fernández-Castilla, Belén; Ferron, John M.; Moeyaert, Mariola; Beretvas, S. Natasha; Van den Noortgate, Wim – Journal of Experimental Education, 2021
Previous research found bias in the estimate of the overall fixed effects and variance components using multilevel meta-analyses of standardized single-case data. Therefore, we evaluate two adjustments in an attempt to reduce the bias and improve the statistical properties of the parameter estimates. The results confirm the existence of bias when…
Descriptors: Statistical Bias, Multivariate Analysis, Meta Analysis, Research Design
Peer reviewed
Fernández-Castilla, Belén; Declercq, Lies; Jamshidi, Laleh; Beretvas, S. Natasha; Onghena, Patrick; Van den Noortgate, Wim – Journal of Experimental Education, 2021
This study explores the performance of classical methods for detecting publication bias--namely, Egger's regression test, Funnel Plot test, Begg's Rank Correlation and Trim and Fill method--in meta-analysis of studies that report multiple effects. Publication bias, outcome reporting bias, and a combination of these were generated. Egger's…
Descriptors: Statistical Bias, Meta Analysis, Publications, Regression (Statistics)
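The first of the classical methods named above, Egger's regression test, has a compact form: regress each study's standardized effect (effect divided by its standard error) on its precision (one over the standard error); an intercept far from zero suggests funnel-plot asymmetry. A minimal sketch (the intercept's significance test is omitted):

```python
import numpy as np

def egger_intercept(effects, ses):
    """Intercept from Egger's regression of standardized effect on
    precision; a formal test would also compute its standard error."""
    effects = np.asarray(effects, float)
    ses = np.asarray(ses, float)
    z = effects / ses                 # standardized effects
    precision = 1.0 / ses
    X = np.column_stack([np.ones_like(precision), precision])
    (b0, b1), *_ = np.linalg.lstsq(X, z, rcond=None)[:1] + tuple()
    return b0

# With a constant true effect and no selection, the intercept is ~0.
b0 = egger_intercept([0.3, 0.3, 0.3], [0.1, 0.2, 0.5])
```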
Peer reviewed
Weiss, Brandi A.; Dardick, William – Journal of Experimental Education, 2021
Classification measures and entropy variants can be used as indicators of model fit for logistic regression. These measures rely on a cut-point, "c," to determine predicted group membership. While recommendations exist for determining the location of the cut-point, these recommendations are primarily anecdotal. The current study used Monte Carlo…
Descriptors: Cutting Scores, Regression (Statistics), Classification, Monte Carlo Methods
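The cut-point's role is easy to see in a minimal sketch: given predicted probabilities from a fitted logistic model (hard-coded here for illustration), moving "c" changes predicted group membership and hence the classification rate:

```python
import numpy as np

def classification_rate(p_hat, y, c=0.5):
    """Overall correct-classification rate: predict group 1 when the
    predicted probability meets the cut-point c, then compare with the
    observed 0/1 outcomes y."""
    pred = (np.asarray(p_hat) >= c).astype(int)
    return float((pred == np.asarray(y)).mean())

p_hat = np.array([0.9, 0.7, 0.4, 0.2])   # illustrative predicted probabilities
y = np.array([1, 1, 1, 0])               # observed group membership
rate_50 = classification_rate(p_hat, y, c=0.5)
rate_30 = classification_rate(p_hat, y, c=0.3)
```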
Peer reviewed
Zhang, Zhonghua – Journal of Experimental Education, 2022
Reporting standard errors of equating has been advocated as standard practice when conducting test equating. The two most widely applied procedures, the bootstrap method and the delta method, are either computationally intensive or require the derivation of complicated formulas. In the current study,…
Descriptors: Error of Measurement, Item Response Theory, True Scores, Equated Scores
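The bootstrap side of that comparison follows a generic recipe: resample each form's scores, recompute the equating function, and take the standard deviation across replications. This sketch uses simple mean equating for concreteness, not the paper's IRT true-score procedure:

```python
import numpy as np

def bootstrap_se_equating(x_scores, y_scores, point, n_boot=2000, seed=0):
    """Bootstrap standard error of a mean-equating function at one score
    point: eq(x) = x - mean(X) + mean(Y) (a generic sketch)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x_scores, float)
    y = np.asarray(y_scores, float)
    eq = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=x.size, replace=True)   # resample form X
        yb = rng.choice(y, size=y.size, replace=True)   # resample form Y
        eq[b] = point - xb.mean() + yb.mean()
    return eq.std(ddof=1)

se = bootstrap_se_equating([10, 12, 14, 16, 18], [11, 13, 15, 17, 19], point=14)
```

The computational cost the abstract alludes to is visible here: the whole equating must be redone in every replication.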
Peer reviewed
Weiss, Brandi A.; Dardick, William – Journal of Experimental Education, 2020
Researchers are often reluctant to rely on classification rates because a model with favorable classification rates but poor separation may not replicate well. In comparison, entropy captures information about borderline cases unlikely to generalize to the population. In logistic regression, the correctness of predicted group membership is known,…
Descriptors: Classification, Regression (Statistics), Goodness of Fit, Monte Carlo Methods
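The entropy idea in this abstract has a simple form for binary logistic regression: average the Shannon entropy of the predicted probabilities. Confident, well-separated predictions yield values near 0; borderline cases push the average toward 1. A minimal sketch:

```python
import numpy as np

def mean_entropy(p_hat, eps=1e-12):
    """Average Shannon entropy (base 2) of binary predicted
    probabilities; near 0 = well separated, near 1 = borderline."""
    p = np.clip(np.asarray(p_hat, float), eps, 1 - eps)
    h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return float(h.mean())
```

Unlike a classification rate, this captures how close predictions sit to the boundary, which is why the two indices can disagree about model quality.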
Peer reviewed
Liu, Yixing; Thompson, Marilyn S. – Journal of Experimental Education, 2022
A simulation study was conducted to explore the impact of differential item functioning (DIF) on general factor difference estimation for bifactor, ordinal data. Common analysis misspecifications in which the generated bifactor data with DIF were fitted using models with equality constraints on noninvariant item parameters were compared under data…
Descriptors: Comparative Analysis, Item Analysis, Sample Size, Error of Measurement
Peer reviewed
Baek, Eunkyeng; Luo, Wen; Henri, Maria – Journal of Experimental Education, 2022
It is common to include multiple dependent variables (DVs) in single-case experimental design (SCED) meta-analyses. However, statistical issues associated with multiple DVs in the multilevel modeling approach (i.e., possible dependency of error, heterogeneous treatment effects, and heterogeneous error structures) have not been fully investigated.…
Descriptors: Meta Analysis, Hierarchical Linear Modeling, Comparative Analysis, Statistical Inference
Peer reviewed
Jia, Yuane; Konold, Timothy – Journal of Experimental Education, 2021
Traditional observed variable multilevel models for evaluating indirect effects are limited by their inability to quantify measurement and sampling error. They are further restricted by being unable to fully separate within- and between-level effects without bias. Doubly latent models reduce these biases by decomposing the observed within-level…
Descriptors: Hierarchical Linear Modeling, Educational Environment, Aggression, Bullying
Peer reviewed
Nazari, Sanaz; Leite, Walter L.; Huggins-Manley, A. Corinne – Journal of Experimental Education, 2023
Piecewise latent growth models (PWLGMs) can be used to study changes in the growth trajectory of an outcome due to an event or condition, such as exposure to an intervention. When there are multiple outcomes of interest, a researcher may choose to fit a series of PWLGMs or a single parallel-process PWLGM. A comparison of these models is…
Descriptors: Growth Models, Statistical Analysis, Intervention, Comparative Analysis
Peer reviewed
Joo, Seang-Hwane; Ferron, John M.; Moeyaert, Mariola; Beretvas, S. Natasha; Van den Noortgate, Wim – Journal of Experimental Education, 2019
Multilevel modeling has been utilized for combining single-case experimental design (SCED) data assuming simple level-1 error structures. The purpose of this study is to compare various multilevel analysis approaches for handling potential complexity in the level-1 error structure within SCED data, including approaches assuming simple and complex…
Descriptors: Hierarchical Linear Modeling, Synthesis, Data Analysis, Accuracy
Peer reviewed
Leite, Walter L.; Aydin, Burak; Gurel, Sungur – Journal of Experimental Education, 2019
This Monte Carlo simulation study compares methods to estimate the effects of programs with multiple versions when assignment of individuals to program version is not random. These methods use generalized propensity scores, which are predicted probabilities of receiving a particular level of the treatment conditional on covariates, to remove…
Descriptors: Probability, Weighted Scores, Monte Carlo Methods, Statistical Bias
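Once generalized propensity scores are in hand, the weighting step the abstract describes reduces to inverting the probability of the version actually received. This sketch assumes the scores are already estimated (e.g., by a multinomial logistic model) and takes them as input:

```python
import numpy as np

def ipw_weights(version, gps):
    """Inverse-probability weights from generalized propensity scores:
    gps[i, k] is person i's estimated probability of receiving program
    version k; the weight is 1 over the probability of the version
    actually received (minimal sketch)."""
    gps = np.asarray(gps, float)
    version = np.asarray(version, int)
    return 1.0 / gps[np.arange(len(version)), version]

# Person 0 received version 0 (prob 0.5), person 1 version 1 (prob 0.75).
w = ipw_weights([0, 1], [[0.50, 0.50],
                         [0.25, 0.75]])
```

Weighting by these values reweights the sample so covariates are balanced across program versions, removing the selection the abstract refers to.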
Peer reviewed
Chang, Wanchen; Pituch, Keenan A. – Journal of Experimental Education, 2019
When data for multiple outcomes are collected in a multilevel design, researchers can select a univariate or multivariate analysis to examine group-mean differences. When correlated outcomes are incomplete, a multivariate multilevel model (MVMM) may provide greater power than univariate multilevel models (MLMs). For a two-group multilevel design…
Descriptors: Hierarchical Linear Modeling, Multivariate Analysis, Research Problems, Error of Measurement