Showing all 11 results
Peer reviewed
Lingbo Tong; Wen Qu; Zhiyong Zhang – Grantee Submission, 2025
Factor analysis is widely utilized to identify latent factors underlying the observed variables. This paper presents a comprehensive comparative study of two widely used methods for determining the optimal number of factors in factor analysis, the K1 rule and parallel analysis, along with a more recently developed method, the bass-ackward method.…
Descriptors: Factor Analysis, Monte Carlo Methods, Statistical Analysis, Sample Size
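The K1 rule and parallel analysis named in the abstract above are simple enough to illustrate. Below is a minimal sketch (Python, numpy only) assuming a cases-by-variables data matrix X; the eigenvalue-greater-than-one cutoff and the 95th-percentile comparison against random data are the standard textbook formulations, not the authors' implementation.

```python
# Illustrative sketch of the K1 rule and parallel analysis for choosing
# the number of factors; not the authors' implementation.
import numpy as np

def k1_rule(X):
    """Retain factors whose correlation-matrix eigenvalues exceed 1."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
    return int(np.sum(eigvals > 1.0))

def parallel_analysis(X, n_sims=100, quantile=0.95, seed=0):
    """Retain factors whose observed eigenvalues exceed the chosen quantile
    of eigenvalues obtained from random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    sim = np.empty((n_sims, p))
    for s in range(n_sims):
        R = rng.standard_normal((n, p))
        sim[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(R, rowvar=False)))[::-1]
    threshold = np.quantile(sim, quantile, axis=0)
    return int(np.sum(obs > threshold))
```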
Bonifay, Wes – Grantee Submission, 2022
Traditional statistical model evaluation typically relies on goodness-of-fit testing and quantifying model complexity by counting parameters. Both of these practices may result in overfitting and have thereby contributed to the generalizability crisis. The information-theoretic principle of minimum description length addresses both of these…
Descriptors: Statistical Analysis, Models, Goodness of Fit, Evaluation Methods
Jacob M. Schauer; Kaitlyn G. Fitzgerald; Sarah Peko-Spicer; Mena C. R. Whalen; Rrita Zejnullahi; Larry V. Hedges – Grantee Submission, 2021
Several programs of research have sought to assess the replicability of scientific findings in different fields, including economics and psychology. These programs attempt to replicate several findings and use the results to say something about large-scale patterns of replicability in a field. However, little work has been done to understand the…
Descriptors: Statistical Analysis, Research Methodology, Evaluation Methods, Replication (Evaluation)
Ke, Zijun; Zhang, Zhiyong – Grantee Submission, 2018
Autocorrelation and partial autocorrelation, which provide a mathematical tool to understand repeating patterns in time series data, are often used to facilitate the identification of model orders of time series models (e.g., moving average and autoregressive models). Asymptotic methods for testing autocorrelation and partial autocorrelation such…
Descriptors: Correlation, Mathematical Formulas, Sampling, Monte Carlo Methods
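For readers unfamiliar with the quantities being tested in the record above, the following sketch computes sample autocorrelations and Yule-Walker partial autocorrelations for a univariate series. It illustrates only the standard definitions; it is not the authors' testing procedure.

```python
# Sample autocorrelation (ACF) and partial autocorrelation (PACF) sketch.
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelations r_1, ..., r_max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:n - k], x[k:]) / denom for k in range(1, max_lag + 1)])

def pacf(x, max_lag):
    """Partial autocorrelations via the Yule-Walker equations at each lag k."""
    r = np.concatenate(([1.0], acf(x, max_lag)))
    out = []
    for k in range(1, max_lag + 1):
        R = np.array([[r[abs(i - j)] for j in range(k)] for i in range(k)])
        phi = np.linalg.solve(R, r[1:k + 1])
        out.append(phi[-1])  # last coefficient is the lag-k partial autocorrelation
    return np.array(out)
```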
Zimmerman, Kathleen N.; Pustejovsky, James E.; Ledford, Jennifer R.; Barton, Erin E.; Severini, Katherine E.; Lloyd, Blair P. – Grantee Submission, 2018
Varying methods for evaluating the outcomes of single case research designs (SCD) are currently used in reviews and meta-analyses of interventions. Quantitative effect size measures are often presented alongside visual analysis conclusions. Six measures across two classes--overlap measures (percentage non-overlapping data, improvement rate…
Descriptors: Research Design, Evaluation Methods, Synthesis, Intervention
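One of the overlap measures named in the abstract above, percentage of non-overlapping data (PND), is straightforward to compute. The sketch below assumes an intervention expected to increase the outcome; it is illustrative only and not drawn from the review's protocol.

```python
# Percentage of non-overlapping data (PND) for a single-case design.
import numpy as np

def pnd(baseline, treatment):
    """Share of treatment-phase points exceeding the highest baseline point."""
    baseline = np.asarray(baseline, dtype=float)
    treatment = np.asarray(treatment, dtype=float)
    return float(np.mean(treatment > baseline.max()) * 100.0)

# Example: pnd([2, 3, 3, 4], [5, 6, 4, 7, 8]) -> 80.0
```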
Middleton, Joel A.; Scott, Marc A.; Diakow, Ronli; Hill, Jennifer L. – Grantee Submission, 2016
In the analysis of causal effects in non-experimental studies, conditioning on observable covariates is one way to try to reduce unobserved confounder bias. However, a developing literature has shown that conditioning on certain covariates may increase bias, and the mechanisms underlying this phenomenon have not been fully explored. We add to the…
Descriptors: Statistical Bias, Identification, Evaluation Methods, Measurement Techniques
Peer reviewed
Herrmann-Abell, Cari F.; Hardcastle, Joseph; DeBoer, George E. – Grantee Submission, 2018
We compared students' performance on a paper-based test (PBT) and three computer-based tests (CBTs). The three computer-based tests used different test navigation and answer selection features, allowing us to examine how these features affect student performance. The study sample consisted of 9,698 fourth through twelfth grade students from across…
Descriptors: Evaluation Methods, Tests, Computer Assisted Testing, Scores
Porter, Kristin E. – Grantee Submission, 2017
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
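As an illustration of what a multiple testing procedure does, the sketch below applies two textbook adjustments, Bonferroni and Benjamini-Hochberg, to a vector of p-values. The paper surveys MTPs for impact evaluations generally; these two appear here only because they are widely known, not because the paper singles them out.

```python
# Two common multiple testing procedures, shown for illustration.
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Reject H0 for p-values below alpha divided by the number of tests."""
    pvals = np.asarray(pvals, dtype=float)
    return pvals < alpha / len(pvals)

def benjamini_hochberg(pvals, alpha=0.05):
    """Step-up procedure controlling the false discovery rate."""
    pvals = np.asarray(pvals, dtype=float)
    m = len(pvals)
    order = np.argsort(pvals)
    thresholds = alpha * (np.arange(1, m + 1) / m)
    passed = pvals[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.nonzero(passed)[0].max()   # largest rank i with p_(i) <= alpha * i / m
        reject[order[:k + 1]] = True      # reject all hypotheses up to that rank
    return reject
```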
Doroudi, Shayan; Aleven, Vincent; Brunskill, Emma – Grantee Submission, 2017
The gold standard for identifying more effective pedagogical approaches is to perform an experiment. Unfortunately, frequently a hypothesized alternate way of teaching does not yield an improved effect. Given the expense and logistics of each experiment, and the enormous space of potential ways to improve teaching, it would be highly preferable if…
Descriptors: Teaching Methods, Matrices, Evaluation Methods, Models
Yuan, Ke-Hai; Zhang, Zhiyong; Zhao, Yanyun – Grantee Submission, 2017
The normal-distribution-based likelihood ratio statistic T_ml = nF_ml is widely used for power analysis in structural equation modeling (SEM). In such an analysis, power and sample size are computed by assuming that T_ml follows a central chi-square distribution under H_0 and a noncentral chi-square…
Descriptors: Statistical Analysis, Evaluation Methods, Structural Equation Models, Reliability
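The conventional power calculation the abstract refers to can be written in a few lines: compare the central chi-square critical value under H_0 with the noncentral chi-square distribution of T_ml under the alternative. The degrees of freedom, sample size, and population discrepancy F0 below are assumed purely for illustration.

```python
# Conventional chi-square-based power calculation for T_ml = n * F_ml in SEM.
from scipy.stats import chi2, ncx2

df = 10       # model degrees of freedom (assumed)
alpha = 0.05  # significance level
n = 200       # sample size (assumed)
F0 = 0.05     # population discrepancy under the alternative (assumed)

crit = chi2.ppf(1 - alpha, df)   # critical value from the central chi-square
ncp = n * F0                     # noncentrality parameter, lambda = n * F0
power = ncx2.sf(crit, df, ncp)   # P(T_ml > crit) under the noncentral chi-square
print(f"critical value = {crit:.2f}, power = {power:.3f}")
```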
Gordon, Rachel A.; Hofer, Kerry G.; Fujimoto, Ken A.; Risk, Nicole; Kaestner, Robert; Korenman, Sanders – Grantee Submission, 2015
Research Findings: The Early Childhood Environment Rating Scale, Revised (ECERS-R) is widely used, often to evaluate whether preschool programs are of sufficient quality to improve children's school readiness. We examined the validity of the measure for this purpose. Item response theory (IRT) analyses revealed that many items did not fit together…
Descriptors: Educational Quality, Preschool Education, Item Response Theory, School Readiness
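As a pointer to the kind of model behind the IRT analyses mentioned above, the sketch below evaluates the two-parameter logistic item response function. The abstract does not specify which IRT model was fitted, so the 2PL form and the parameter values here are assumptions for illustration only.

```python
# Two-parameter logistic (2PL) item response function, shown for illustration.
import numpy as np

def irf_2pl(theta, a, b):
    """Probability of a positive item response given latent trait theta,
    discrimination a, and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Example: an item with a = 1.2, b = 0.5 evaluated at theta = 1.0
print(irf_2pl(1.0, 1.2, 0.5))  # ~0.646
```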