Showing 1 to 15 of 23 results
Peer reviewed
PDF on ERIC
Nordstokke, David W.; Colp, S. Mitchell – Practical Assessment, Research & Evaluation, 2018
Often, when testing for a shift in location, researchers will use nonparametric statistical tests in place of their parametric counterparts when there is evidence or belief that the assumptions of the parametric test (e.g., normally distributed dependent variables) are not met. An underlying and often unattended-to assumption of nonparametric…
Descriptors: Nonparametric Statistics, Statistical Analysis, Monte Carlo Methods, Sample Size
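The pitfall flagged in this abstract can be illustrated with a small Monte Carlo sketch (hypothetical, not the authors' code): a rank-based test such as Mann-Whitney U is "distribution-free" only if both groups share the same distribution shape under the null, so unequal variances alone can move its Type I error rate away from the nominal level.

```python
# Hypothetical Monte Carlo sketch: Type I error of the Mann-Whitney U test
# when the only assumption violated is equal spread across groups.
import math
import random

random.seed(7)

def mw_pvalue(a, b):
    """Two-sided Mann-Whitney U p-value via the normal approximation
    (ties ignored, which is fine for continuous simulated data)."""
    n1, n2 = len(a), len(b)
    ranked = sorted((v, i) for i, v in enumerate(a + b))
    rank_sum1 = sum(r + 1 for r, (_, i) in enumerate(ranked) if i < n1)
    u = rank_sum1 - n1 * (n1 + 1) / 2
    mean = n1 * n2 / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = abs(u - mean) / sd
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def type1_rate(sd_ratio, n1=10, n2=40, reps=2000, alpha=0.05):
    """Rejection rate when H0 is true (both group means are 0)."""
    hits = 0
    for _ in range(reps):
        a = [random.gauss(0, sd_ratio) for _ in range(n1)]  # small, noisier group
        b = [random.gauss(0, 1.0) for _ in range(n2)]
        hits += mw_pvalue(a, b) < alpha
    return hits / reps

print(type1_rate(1.0))  # equal variances: close to the nominal 0.05
print(type1_rate(4.0))  # unequal variances: can drift from 0.05
```

The group sizes, variance ratio, and replication count here are illustrative choices, not values from the article.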
Peer reviewed
Direct link
Porter, Kristin E. – Journal of Research on Educational Effectiveness, 2018
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
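As a concrete illustration of what such procedures do, here is a minimal, hypothetical sketch (not from the article) of two standard MTPs: Holm's step-down procedure, which controls the family-wise error rate, and Benjamini-Hochberg, which controls the false discovery rate.

```python
# Hypothetical sketch of two common multiple testing procedures (MTPs).

def holm(pvals, alpha=0.05):
    """Holm step-down: compare the k-th smallest p-value to alpha/(m-k+1),
    stopping at the first failure. Controls the family-wise error rate."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if pvals[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return reject

def benjamini_hochberg(pvals, alpha=0.05):
    """BH step-up: reject all hypotheses up to the largest k with
    p_(k) <= (k/m) * alpha. Controls the false discovery rate."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    cutoff = 0
    for k, i in enumerate(order, start=1):
        if pvals[i] <= k / m * alpha:
            cutoff = k
    reject = [False] * m
    for i in order[:cutoff]:
        reject[i] = True
    return reject

pvals = [0.001, 0.012, 0.031, 0.040, 0.2]
print(holm(pvals))                # [True, True, False, False, False]
print(benjamini_hochberg(pvals))  # [True, True, True, True, False]
```

On the same p-values, the FDR-controlling procedure rejects more hypotheses than the stricter FWER-controlling one, which is the power/error trade-off MTP comparisons revolve around.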
Peer reviewed
Direct link
Gnambs, Timo; Staufenbiel, Thomas – Research Synthesis Methods, 2016
Two new methods for the meta-analysis of factor loadings are introduced and evaluated by Monte Carlo simulations. The direct method pools each factor loading individually, whereas the indirect method synthesizes correlation matrices reproduced from factor loadings. The results of the two simulations demonstrated that the accuracy of…
Descriptors: Accuracy, Meta Analysis, Factor Structure, Monte Carlo Methods
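In miniature, the "direct" strategy described above amounts to a weighted average of each loading across studies. This is a hypothetical toy sketch only; a real synthesis would typically weight by inverse variance and may transform loadings first.

```python
def pool_loading(loadings, sample_sizes):
    """Sample-size-weighted mean of one factor loading across studies
    (a toy version of 'direct' pooling)."""
    total_n = sum(sample_sizes)
    return sum(l * n for l, n in zip(loadings, sample_sizes)) / total_n

# Three hypothetical studies reporting the same item's loading:
print(pool_loading([0.60, 0.70, 0.65], [100, 300, 100]))  # ≈ 0.67
```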
Porter, Kristin E. – Grantee Submission, 2017
Researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple testing procedures (MTPs) are statistical…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Porter, Kristin E. – MDRC, 2016
In education research and in many other fields, researchers are often interested in testing the effectiveness of an intervention on multiple outcomes, for multiple subgroups, at multiple points in time, or across multiple treatment groups. The resulting multiplicity of statistical hypothesis tests can lead to spurious findings of effects. Multiple…
Descriptors: Statistical Analysis, Program Effectiveness, Intervention, Hypothesis Testing
Spencer, Bryden – ProQuest LLC, 2016
Value-added models are a class of growth models used in education to assign responsibility for student growth to teachers or schools. For value-added models to be used fairly, sufficient statistical precision is necessary for accurate teacher classification. Previous research indicated precision below practical limits. An alternative approach has…
Descriptors: Monte Carlo Methods, Comparative Analysis, Accuracy, High Stakes Tests
Peer reviewed
Direct link
Barr, Dale J.; Levy, Roger; Scheepers, Christoph; Tily, Harry J. – Journal of Memory and Language, 2013
Linear mixed-effects models (LMEMs) have become increasingly prominent in psycholinguistics and related areas. However, many researchers do not seem to appreciate how random effects structures affect the generalizability of an analysis. Here, we argue that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the…
Descriptors: Hypothesis Testing, Psycholinguistics, Models, Monte Carlo Methods
Peer reviewed
Direct link
Schoemann, Alexander M.; Miller, Patrick; Pornprasertmanit, Sunthud; Wu, Wei – International Journal of Behavioral Development, 2014
Planned missing data designs allow researchers to increase the amount and quality of data collected in a single study. Unfortunately, the effect of planned missing data designs on power is not straightforward. Under certain conditions using a planned missing design will increase power, whereas in other situations using a planned missing design…
Descriptors: Monte Carlo Methods, Simulation, Sample Size, Research Design
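Power questions like this are usually answered exactly the way the descriptors suggest: by Monte Carlo simulation. A minimal, hypothetical sketch follows, using a complete-case two-group comparison to show how power depends on how many cases have the outcome observed; real planned-missing analyses would instead use maximum likelihood or multiple imputation over all available data.

```python
# Hypothetical Monte Carlo power sketch (complete-case z-test approximation).
import math
import random
import statistics
from statistics import NormalDist

random.seed(1)
CRIT = NormalDist().inv_cdf(0.975)  # two-sided alpha = .05

def mc_power(n_per_group, effect=0.5, reps=2000):
    """Monte Carlo power of a two-sample z-test for a mean difference."""
    hits = 0
    for _ in range(reps):
        a = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [random.gauss(effect, 1.0) for _ in range(n_per_group)]
        se = math.sqrt(statistics.variance(a) / n_per_group
                       + statistics.variance(b) / n_per_group)
        z = (statistics.fmean(b) - statistics.fmean(a)) / se
        hits += abs(z) > CRIT
    return hits / reps

full = mc_power(64)                # everyone measured on this outcome
planned = mc_power(int(64 * 0.7))  # outcome planned-missing for 30% of cases
print(full, planned)  # power drops when fewer cases carry the outcome
```

The sample sizes, effect size, and missingness fraction are illustrative assumptions, not values from the article.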
Peer reviewed
Direct link
Jin, Ying; Myers, Nicholas D.; Ahn, Soyeon; Penfield, Randall D. – Educational and Psychological Measurement, 2013
The Rasch model, a member of a larger group of models within item response theory, is widely used in empirical studies. Detection of uniform differential item functioning (DIF) within the Rasch model typically employs null hypothesis testing with a concomitant consideration of effect size (e.g., signed area [SA]). Parametric equivalence between…
Descriptors: Test Bias, Effect Size, Item Response Theory, Comparative Analysis
Peer reviewed
Direct link
Fan, Weihua; Hancock, Gregory R. – Journal of Educational and Behavioral Statistics, 2012
This study proposes robust means modeling (RMM) approaches for hypothesis testing of mean differences for between-subjects designs in order to control the biasing effects of nonnormality and variance inequality. Drawing from structural equation modeling (SEM), the RMM approaches make no assumption of variance homogeneity and employ robust…
Descriptors: Robustness (Statistics), Hypothesis Testing, Monte Carlo Methods, Simulation
Peer reviewed
Direct link
Price, Larry R. – Structural Equation Modeling: A Multidisciplinary Journal, 2012
The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…
Descriptors: Sample Size, Time, Bayesian Statistics, Structural Equation Models
Peer reviewed
Direct link
Cribbie, Robert A.; Arpin-Cribbie, Chantal A.; Gruman, Jamie A. – Journal of Experimental Education, 2009
Researchers in education are often interested in determining whether independent groups are equivalent on a specific outcome. Equivalence tests for 2 independent populations have been widely discussed, whereas testing for equivalence with more than 2 independent groups has received little attention. The authors discuss alternatives for testing the…
Descriptors: Monte Carlo Methods, Testing, Statistical Analysis, Researchers
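The two-group building block this article extends is commonly the two one-sided tests (TOST) procedure. Here is a minimal, hypothetical sketch using a z-approximation (the article itself concerns extensions beyond two groups):

```python
# Hypothetical TOST sketch: equivalence within a margin, via z-approximation.
from statistics import NormalDist

PHI = NormalDist().cdf

def tost_equivalent(mean_diff, se, margin, alpha=0.05):
    """Two one-sided tests (TOST): conclude equivalence only if the
    difference is significantly greater than -margin AND significantly
    less than +margin."""
    p_lower = 1 - PHI((mean_diff + margin) / se)  # H0: diff <= -margin
    p_upper = PHI((mean_diff - margin) / se)      # H0: diff >= +margin
    return max(p_lower, p_upper) < alpha

print(tost_equivalent(0.02, 0.05, margin=0.2))  # True: precise and near zero
print(tost_equivalent(0.02, 0.15, margin=0.2))  # False: too imprecise
```

Note the asymmetry with ordinary null hypothesis testing: a nonsignificant difference test never establishes equivalence by itself, which is why dedicated procedures like TOST exist.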
Peer reviewed
Direct link
Monahan, Patrick O.; Stump, Timothy E.; Finch, Holmes; Hambleton, Ronald K. – Applied Psychological Measurement, 2007
DETECT is a nonparametric "full" dimensionality assessment procedure that clusters dichotomously scored items into dimensions and provides a DETECT index of magnitude of multidimensionality. Four factors (test length, sample size, item response theory [IRT] model, and DETECT index) were manipulated in a Monte Carlo study of bias, standard error,…
Descriptors: Test Length, Sample Size, Monte Carlo Methods, Geometric Concepts
Peer reviewed
Direct link
Chen, Fang Fang – Structural Equation Modeling: A Multidisciplinary Journal, 2007
Two Monte Carlo studies were conducted to examine the sensitivity of goodness of fit indexes to lack of measurement invariance at 3 commonly tested levels: factor loadings, intercepts, and residual variances. Standardized root mean square residual (SRMR) appears to be more sensitive to lack of invariance in factor loadings than in intercepts or…
Descriptors: Geometric Concepts, Sample Size, Monte Carlo Methods, Goodness of Fit
Peer reviewed
Quintana, Stephen M.; Maxwell, Scott E. – Journal of Educational Statistics, 1994
Seven univariate procedures for testing omnibus null hypotheses for data gathered from repeated measures designs were evaluated, comparing five alternative approaches with two more traditional procedures. Results suggest that the alternatives are improvements. The most effective alternate procedure in controlling Type I error rates is discussed.…
Descriptors: Comparative Analysis, Hypothesis Testing, Monte Carlo Methods, Research Methodology