Publication Date
In 2025 (0)
Since 2024 (0)
Since 2021, last 5 years (2)
Since 2016, last 10 years (6)
Since 2006, last 20 years (9)
Descriptor
Error of Measurement (10)
Intervals (10)
Monte Carlo Methods (10)
Computation (5)
Effect Size (4)
Statistical Analysis (3)
Evaluation Methods (2)
Meta Analysis (2)
Statistical Bias (2)
Autism (1)
Bayesian Statistics (1)
Source
Multivariate Behavioral Research (2)
Psychological Methods (2)
Research Synthesis Methods (2)
Autism: The International Journal of Research and Practice (1)
Educational and Psychological Measurement (1)
Grantee Submission (1)
Pedagogical Research (1)
Publication Type
Journal Articles (9)
Reports - Research (7)
Reports - Descriptive (3)
Education Level
Higher Education (1)
Location
South Korea (1)
Jiang, Zhehan; Raymond, Mark; DiStefano, Christine; Shi, Dexin; Liu, Ren; Sun, Junhua – Educational and Psychological Measurement, 2022
Computing confidence intervals around generalizability coefficients has long been a challenging task in generalizability theory. This is a serious practical problem because generalizability coefficients are often computed from designs where some facets have small sample sizes, and researchers have little guidance regarding the trustworthiness of the…
Descriptors: Monte Carlo Methods, Intervals, Generalizability Theory, Error of Measurement
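A minimal Monte Carlo sketch for this setting, assuming a one-facet persons × items design with hypothetical variance components and sample sizes: the sampling distribution of the estimated G coefficient is simulated directly, and plugging estimated rather than assumed components into the simulation would turn it into a parametric-bootstrap interval. This illustrates the general approach only, not the procedure developed in the article.

```python
import numpy as np

rng = np.random.default_rng(1)
n_p, n_i = 50, 8                        # persons, items (a deliberately small item facet)
var_p, var_i, var_pie = 0.5, 0.2, 1.0   # assumed (hypothetical) variance components

def g_coefficient(x):
    """ANOVA variance-component estimates and the relative G coefficient."""
    rows, cols = x.shape
    grand = x.mean()
    ms_p = cols * np.sum((x.mean(axis=1) - grand) ** 2) / (rows - 1)
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
    ms_res = np.sum(resid ** 2) / ((rows - 1) * (cols - 1))
    sig2_p = max((ms_p - ms_res) / cols, 0.0)
    return sig2_p / (sig2_p + ms_res / cols)

def simulate_design():
    person = rng.normal(0, np.sqrt(var_p), (n_p, 1))
    item = rng.normal(0, np.sqrt(var_i), (1, n_i))
    error = rng.normal(0, np.sqrt(var_pie), (n_p, n_i))
    return person + item + error

g_sim = np.array([g_coefficient(simulate_design()) for _ in range(2000)])
print("G implied by the assumed components:", var_p / (var_p + var_pie / n_i))
print("95% Monte Carlo interval for G-hat:", np.round(np.percentile(g_sim, [2.5, 97.5]), 3))
```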
Dan Soriano; Eli Ben-Michael; Peter Bickel; Avi Feller; Samuel D. Pimentel – Grantee Submission, 2023
Assessing sensitivity to unmeasured confounding is an important step in observational studies, which typically estimate effects under the assumption that all confounders are measured. In this paper, we develop a sensitivity analysis framework for balancing weights estimators, an increasingly popular approach that solves an optimization problem to…
Descriptors: Statistical Analysis, Computation, Mathematical Formulas, Monte Carlo Methods
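A minimal sketch of the balancing-weights idea the abstract refers to, using entropy balancing on simulated data: control-unit weights are chosen so the weighted control covariate means match the treated means. This shows the estimator class only, not the authors' sensitivity-analysis framework; all data below are simulated.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_t, n_c, p = 100, 300, 3
X_t = rng.normal(0.5, 1.0, (n_t, p))    # treated covariates (simulated)
X_c = rng.normal(0.0, 1.0, (n_c, p))    # control covariates (simulated)
target = X_t.mean(axis=0)               # means the reweighted controls should hit

def dual(lam):
    """Convex dual of the entropy-balancing problem (log-sum-exp form)."""
    z = (X_c - target) @ lam
    m = z.max()
    return m + np.log(np.exp(z - m).sum())

lam_hat = minimize(dual, np.zeros(p), method="BFGS").x
z = (X_c - target) @ lam_hat
w = np.exp(z - z.max())
w /= w.sum()                            # balancing weights on the control units

print("treated covariate means:", np.round(target, 3))
print("weighted control means: ", np.round(w @ X_c, 3))
```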
Koçak, Duygu – Pedagogical Research, 2020
The number of iterations used in Monte Carlo simulation, a method common in educational research, affects Item Response Theory test and item parameters. Related studies show that the number of iterations is left to the researcher's discretion, and the literature suggests no specific number of iterations.…
Descriptors: Monte Carlo Methods, Item Response Theory, Educational Research, Test Items
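A small illustration of the iteration-number question under assumed values: a Rasch item difficulty is recovered by maximum likelihood in repeated simulated datasets, and the Monte Carlo standard error of the result is compared across replication counts. The item parameter, sample size, and replication counts are hypothetical, not taken from the study.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import expit

rng = np.random.default_rng(42)
true_b, n_persons = 0.8, 300            # item difficulty and examinees per replication

def estimate_b():
    """One replication: simulate Rasch responses, recover b by ML (abilities known)."""
    theta = rng.normal(0, 1, n_persons)
    y = rng.binomial(1, expit(theta - true_b))
    def nll(b):
        p = expit(theta - b)
        return -np.sum(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    return minimize_scalar(nll, bounds=(-5, 5), method="bounded").x

for n_iter in (100, 500, 2000):
    est = np.array([estimate_b() for _ in range(n_iter)])
    mc_se = est.std(ddof=1) / np.sqrt(n_iter)       # Monte Carlo standard error
    print(f"iterations={n_iter:5d}  mean bias={est.mean() - true_b:+.4f}  MC SE={mc_se:.4f}")
```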
Rubio-Aparicio, María; López-López, José Antonio; Sánchez-Meca, Julio; Marín-Martínez, Fulgencio; Viechtbauer, Wolfgang; Van den Noortgate, Wim – Research Synthesis Methods, 2018
The random-effects model, applied in most meta-analyses nowadays, typically assumes normality of the distribution of the effect parameters. The purpose of this study was to examine the performance of various random-effects methods (standard method, Hartung's method, profile likelihood method, and bootstrapping) for computing an average effect size…
Descriptors: Effect Size, Meta Analysis, Intervals, Monte Carlo Methods
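A minimal sketch of the standard random-effects computation and the Hartung(-Knapp) adjustment named in the abstract, with hypothetical effect sizes and sampling variances; the profile-likelihood and bootstrap methods studied in the article are not shown.

```python
import numpy as np
from scipy import stats

y = np.array([0.30, 0.12, 0.55, 0.20, 0.41, -0.05])   # study effect sizes (hypothetical)
v = np.array([0.04, 0.03, 0.06, 0.02, 0.05, 0.04])    # their sampling variances
k = len(y)

# DerSimonian-Laird estimate of the between-study variance tau^2
w = 1.0 / v
mu_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - mu_fixed) ** 2)
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / C)

# Random-effects pooled estimate
w_star = 1.0 / (v + tau2)
mu = np.sum(w_star * y) / np.sum(w_star)

# Standard Wald-type interval
se_wald = np.sqrt(1.0 / np.sum(w_star))
ci_wald = mu + np.array([-1, 1]) * stats.norm.ppf(0.975) * se_wald

# Hartung(-Knapp): rescaled variance with a t reference distribution
q = np.sum(w_star * (y - mu) ** 2) / (k - 1)
se_hk = np.sqrt(q / np.sum(w_star))
ci_hk = mu + np.array([-1, 1]) * stats.t.ppf(0.975, k - 1) * se_hk

print(f"mu = {mu:.3f}, tau^2 = {tau2:.3f}")
print("Wald CI:         ", np.round(ci_wald, 3))
print("Hartung-Knapp CI:", np.round(ci_hk, 3))
```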
López-López, José Antonio; Van den Noortgate, Wim; Tanner-Smith, Emily E.; Wilson, Sandra Jo; Lipsey, Mark W. – Research Synthesis Methods, 2017
Dependent effect sizes are ubiquitous in meta-analysis. Using Monte Carlo simulation, we compared the performance of 2 methods for meta-regression with dependent effect sizes--robust variance estimation (RVE) and 3-level modeling--with the standard meta-analytic method for independent effect sizes. We further compared bias-reduced linearization…
Descriptors: Effect Size, Regression (Statistics), Meta Analysis, Comparative Analysis
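A sketch of the basic cluster-robust ("sandwich") variance behind robust variance estimation, applied to a small simulated meta-regression with effect sizes nested in studies. It uses the simple CR0 form, not the bias-reduced linearization (CR2) adjustment the abstract mentions, and the data are simulated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)
study = np.repeat(np.arange(10), 3)          # 10 studies, 3 effect sizes each
x = rng.normal(size=study.size)              # one moderator
X = np.column_stack([np.ones_like(x), x])    # design matrix
y = 0.3 + 0.2 * x + np.repeat(rng.normal(0, 0.2, 10), 3) + rng.normal(0, 0.2, study.size)
w = np.full(study.size, 1.0)                 # inverse-variance weights would go here

# Weighted least-squares point estimates
XtWX = X.T @ (w[:, None] * X)
beta = np.linalg.solve(XtWX, X.T @ (w * y))
resid = y - X @ beta

# CR0 sandwich: bread * (sum of per-study score outer products) * bread
bread = np.linalg.inv(XtWX)
meat = np.zeros_like(XtWX)
for s in np.unique(study):
    idx = study == s
    g = X[idx].T @ (w[idx] * resid[idx])     # per-study score contribution
    meat += np.outer(g, g)
V = bread @ meat @ bread

print("beta:", np.round(beta, 3))
print("cluster-robust SEs:", np.round(np.sqrt(np.diag(V)), 3))
```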
Pantelis, Peter C.; Kennedy, Daniel P. – Autism: The International Journal of Research and Practice, 2016
Two-phase designs in epidemiological studies of autism prevalence introduce methodological complications that can severely limit the precision of resulting estimates. If the assumptions used to derive the prevalence estimate are invalid or if the uncertainty surrounding these assumptions is not properly accounted for in the statistical inference…
Descriptors: Foreign Countries, Pervasive Developmental Disorders, Autism, Incidence
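A small sketch of one way to propagate uncertainty through a two-phase prevalence estimate by Monte Carlo, with hypothetical counts and simple Beta(1, 1) priors. It illustrates the general idea of carrying phase-2 uncertainty into the final estimate, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(3)
N, n_pos = 10_000, 600                  # phase 1: screened, screen-positive (hypothetical)
m_pos, k_pos = 200, 120                 # phase 2: screen-positives followed up, confirmed
m_neg, k_neg = 300, 6                   # phase 2: screen-negatives followed up, confirmed

draws = 20_000
p_screen_pos = rng.beta(1 + n_pos, 1 + N - n_pos, draws)      # P(screen+)
p_case_pos = rng.beta(1 + k_pos, 1 + m_pos - k_pos, draws)    # P(case | screen+)
p_case_neg = rng.beta(1 + k_neg, 1 + m_neg - k_neg, draws)    # P(case | screen-)

prevalence = p_screen_pos * p_case_pos + (1 - p_screen_pos) * p_case_neg
print("point estimate:", round(float(prevalence.mean()), 4))
print("95% interval:  ", np.round(np.percentile(prevalence, [2.5, 97.5]), 4))
```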
Lee, Chun-Ting; Zhang, Guangjian; Edwards, Michael C. – Multivariate Behavioral Research, 2012
Exploratory factor analysis (EFA) is often conducted with ordinal data (e.g., items with 5-point responses) in the social and behavioral sciences. These ordinal variables are often treated as if they were continuous in practice. An alternative strategy is to assume that a normally distributed continuous variable underlies each ordinal variable.…
Descriptors: Personality Traits, Intervals, Monte Carlo Methods, Factor Analysis
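A minimal two-step polychoric correlation sketch for two ordinal items, illustrating the "underlying normal variable" strategy the abstract describes: thresholds come from the marginal proportions, then a one-dimensional likelihood search recovers the latent correlation. The contingency table is hypothetical, and the full EFA procedure is not shown.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import minimize_scalar

# Observed counts for two 3-category ordinal items (rows = item X, columns = item Y)
table = np.array([[40, 25, 10],
                  [20, 60, 30],
                  [ 5, 30, 80]], dtype=float)

def thresholds(margin):
    cum = np.cumsum(margin) / margin.sum()
    return np.concatenate([[-8.0], norm.ppf(cum[:-1]), [8.0]])   # +-8 stands in for +-infinity

tx = thresholds(table.sum(axis=1))
ty = thresholds(table.sum(axis=0))

def neg_loglik(rho):
    bvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    F = lambda a, b: bvn.cdf([a, b])
    ll = 0.0
    for i in range(table.shape[0]):
        for j in range(table.shape[1]):
            cell = (F(tx[i + 1], ty[j + 1]) - F(tx[i], ty[j + 1])
                    - F(tx[i + 1], ty[j]) + F(tx[i], ty[j]))
            ll += table[i, j] * np.log(max(cell, 1e-12))
    return -ll

rho_hat = minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded").x
print("polychoric correlation estimate:", round(rho_hat, 3))
```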
Biesanz, Jeremy C.; Falk, Carl F.; Savalei, Victoria – Multivariate Behavioral Research, 2010
Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses (Baron & Kenny, 1986; Sobel, 1982) have in recent years…
Descriptors: Computation, Intervals, Models, Monte Carlo Methods
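A minimal Monte Carlo confidence interval for an indirect effect a×b, in the spirit of the approach discussed here: the a and b paths are drawn from their estimated sampling distributions (assumed independent) and percentiles of the product are taken. The path estimates and standard errors are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)
a_hat, se_a = 0.42, 0.10     # X -> M path estimate and its standard error (hypothetical)
b_hat, se_b = 0.35, 0.12     # M -> Y path, controlling for X (hypothetical)

draws = 100_000
ab = rng.normal(a_hat, se_a, draws) * rng.normal(b_hat, se_b, draws)
print("indirect effect estimate:", round(a_hat * b_hat, 3))
print("95% Monte Carlo CI:", np.round(np.percentile(ab, [2.5, 97.5]), 3))
```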
Bonett, Douglas G. – Psychological Methods, 2008
Most psychology journals now require authors to report a sample value of effect size along with hypothesis testing results. The sample effect size value can be misleading because it contains sampling error. Authors often incorrectly interpret the sample effect size as if it were the population effect size. A simple solution to this problem is to…
Descriptors: Intervals, Hypothesis Testing, Effect Size, Sampling
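A small sketch of the general point made here: report an interval around a standardized mean difference rather than the sample value alone. It uses the common large-sample variance approximation, not necessarily Bonett's own procedures, and the summary statistics are hypothetical.

```python
import numpy as np
from scipy import stats

n1, n2 = 40, 45
m1, m2, s1, s2 = 5.2, 4.6, 1.3, 1.5      # group means and SDs (hypothetical)

sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))  # pooled SD
d = (m1 - m2) / sp
se_d = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))       # approximate SE
ci = d + np.array([-1, 1]) * stats.norm.ppf(0.975) * se_d

print(f"d = {d:.3f}, 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```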
Charles, Eric P. – Psychological Methods, 2005
The correction for attenuation due to measurement error (CAME) has received many historical criticisms, most of which can be traced to the limited ability to use CAME inferentially. Past attempts to determine confidence intervals for CAME are summarized and their limitations discussed. The author suggests that inference requires confidence sets…
Descriptors: Error of Measurement, Error Correction, Intervals, Inferences
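For reference, the classic point correction for attenuation: the observed correlation divided by the square root of the product of the two reliabilities. As the abstract notes, the difficult part is inference around this quantity, which the simple formula does not provide. The values below are hypothetical.

```python
import numpy as np

r_xy = 0.42                  # observed correlation (hypothetical)
r_xx, r_yy = 0.80, 0.75      # reliability estimates for X and Y (hypothetical)

r_corrected = r_xy / np.sqrt(r_xx * r_yy)
print(f"disattenuated correlation: {r_corrected:.3f}")   # ~0.542
```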