Publication Date
In 2025 (0)
Since 2024 (0)
Since 2021, last 5 years (1)
Since 2016, last 10 years (3)
Since 2006, last 20 years (8)
Descriptor
Error Patterns (10)
Monte Carlo Methods (10)
Sample Size (10)
Evaluation Methods (6)
Computation (4)
Simulation (4)
Correlation (3)
Effect Size (3)
Test Bias (3)
Test Items (3)
Test Validity (3)
Source
Educational and Psychological… (2)
Journal of Experimental… (2)
Applied Measurement in… (1)
International Journal of… (1)
National Center for Education… (1)
ProQuest LLC (1)
Psychological Methods (1)
Author
Basman, Munevver (1)
Bolt, Daniel M. (1)
Chan, Daniel W.-L. (1)
Chan, Wai (1)
Chou, Tungshan (1)
Deke, John (1)
Garrett, Phyllis (1)
Huberty, Carl J. (1)
Kautz, Tim (1)
Kromrey, Jeffrey D. (1)
Murphy, Daniel L. (1)
Publication Type
Journal Articles (7)
Reports - Research (7)
Reports - Evaluative (2)
Dissertations/Theses -… (1)
Numerical/Quantitative Data (1)
Speeches/Meeting Papers (1)
Basman, Munevver – International Journal of Assessment Tools in Education, 2023
Ensuring the validity of a test requires checking that all items function similarly across different groups of individuals. However, differential item functioning (DIF) occurs when individuals with equal ability levels from different groups perform differently on the same test item. Based on Item Response Theory and Classic Test…
Descriptors: Test Bias, Test Items, Test Validity, Item Response Theory
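The DIF check described here is often operationalized with the Mantel-Haenszel procedure: examinees are matched on ability (typically a total-score proxy), and the odds of a correct response are compared across groups within each ability stratum. A minimal Python sketch on simulated data follows; the group effect, stratification, and sample size are illustrative assumptions, not values from the article.

```python
import numpy as np

def mantel_haenszel_odds_ratio(correct, group, strata):
    """Mantel-Haenszel common odds ratio for one item.

    correct : 0/1 responses to the studied item
    group   : 0 = reference group, 1 = focal group
    strata  : ability-matching variable (e.g., binned total score)
    """
    num = den = 0.0
    for s in np.unique(strata):
        m = strata == s
        a = np.sum((group[m] == 0) & (correct[m] == 1))  # reference, correct
        b = np.sum((group[m] == 0) & (correct[m] == 0))  # reference, incorrect
        c = np.sum((group[m] == 1) & (correct[m] == 1))  # focal, correct
        d = np.sum((group[m] == 1) & (correct[m] == 0))  # focal, incorrect
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den  # ~1.0 means no DIF

# Toy data: equal ability distributions, but the item is harder for the
# focal group -- exactly the situation DIF methods are built to flag.
rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)
theta = rng.normal(0, 1, n)                    # ability, identical across groups
p = 1 / (1 + np.exp(-(theta - 0.5 * group)))   # focal group disadvantaged on the item
correct = rng.binomial(1, p)
strata = np.digitize(theta, np.quantile(theta, [0.2, 0.4, 0.6, 0.8]))
print(f"MH odds ratio: {mantel_haenszel_odds_ratio(correct, group, strata):.2f}")
```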
Deke, John; Wei, Thomas; Kautz, Tim – National Center for Education Evaluation and Regional Assistance, 2017
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts…
Descriptors: Intervention, Educational Research, Research Problems, Statistical Bias
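The tension the authors describe is easy to see in a small Monte Carlo power check: a design sized around Cohen's "small" 0.20 SD has far less power against a 0.05 SD impact. The sketch below uses a two-sample t test with an illustrative sample size; none of these numbers come from the report.

```python
import numpy as np
from scipy import stats

def mc_power(effect_sd, n_per_group, reps=2000, alpha=0.05, seed=0):
    """Monte Carlo power of a two-sample t test for an impact in SD units."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        treat = rng.normal(effect_sd, 1, n_per_group)
        control = rng.normal(0, 1, n_per_group)
        hits += stats.ttest_ind(treat, control).pvalue < alpha
    return hits / reps

for es in (0.20, 0.05):
    print(f"impact = {es:.2f} SD, n = 500/group -> power ~ {mc_power(es, 500):.2f}")
```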
Schoeneberger, Jason A. – Journal of Experimental Education, 2016
The design of research studies utilizing binary multilevel models must incorporate knowledge of multiple factors, including estimation method, variance component size, and number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…
Descriptors: Sample Size, Models, Computation, Predictor Variables
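One replicate of the kind of data such a study generates can be written as a random-intercept logistic model. The sketch below simulates a single condition; the variance component, fixed effects, and cluster sizes are illustrative assumptions, not the study's design cells.

```python
import numpy as np

rng = np.random.default_rng(42)

# One Monte Carlo replicate of a random-intercept logistic model:
# logit(p_ij) = gamma0 + gamma1 * x_ij + u_j,  u_j ~ N(0, tau2)
n_clusters, cluster_size = 50, 20          # sample sizes (illustrative)
gamma0, gamma1, tau2 = -0.5, 0.8, 0.5      # fixed effects, variance component

u = rng.normal(0, np.sqrt(tau2), n_clusters)       # cluster random effects
cluster = np.repeat(np.arange(n_clusters), cluster_size)
x = rng.normal(0, 1, n_clusters * cluster_size)    # level-1 predictor
eta = gamma0 + gamma1 * x + u[cluster]             # linear predictor
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))        # binary outcome

print(f"overall event rate: {y.mean():.2f}")
```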
Garrett, Phyllis – ProQuest LLC, 2009
The use of polytomous items in assessments has increased over the years, and as a result, the validity of these assessments has been a concern. Differential item functioning (DIF) and missing data are two factors that may adversely affect assessment validity. Both factors have been studied separately, but DIF and missing data are likely to occur…
Descriptors: Sample Size, Monte Carlo Methods, Test Validity, Effect Size
Murphy, Daniel L.; Pituch, Keenan A. – Journal of Experimental Education, 2009
The authors examined the robustness of multilevel linear growth curve modeling to misspecification of an autoregressive moving average process. As previous research has shown (J. Ferron, R. Dailey, & Q. Yi, 2002; O. Kwok, S. G. West, & S. B. Green, 2007; S. Sivo, X. Fan, & L. Witta, 2005), estimates of the fixed effects were unbiased, and Type I…
Descriptors: Sample Size, Computation, Evaluation Methods, Longitudinal Studies
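The misspecification in question can be reproduced by simulating linear growth data whose within-person errors follow an AR(1) process and then fitting a model that assumes independent level-1 errors. The sketch below generates one such data set; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

n_people, n_waves, phi = 200, 5, 0.4   # phi: AR(1) autocorrelation (illustrative)
t = np.arange(n_waves)

# Random intercepts and slopes plus AR(1) level-1 errors.
intercepts = rng.normal(10.0, 1.0, n_people)
slopes = rng.normal(0.5, 0.2, n_people)

e = np.zeros((n_people, n_waves))
e[:, 0] = rng.normal(0, 1, n_people)
for w in range(1, n_waves):
    e[:, w] = phi * e[:, w - 1] + rng.normal(0, np.sqrt(1 - phi**2), n_people)

y = intercepts[:, None] + slopes[:, None] * t + e
print(y.shape)  # (200, 5): fit with a growth model that wrongly assumes iid errors
```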
Yoo, Jin Eun – Educational and Psychological Measurement, 2009
This Monte Carlo study investigates the beneficial effect of including auxiliary variables during estimation of confirmatory factor analysis models with multiple imputation. Specifically, it examines the influence of sample size, missing rates, missingness mechanism combinations, missingness types (linear or convex), and the absence or presence…
Descriptors: Monte Carlo Methods, Research Methodology, Test Validity, Factor Analysis
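Why auxiliary variables matter is visible in a few lines: when missingness in an indicator depends on a correlated auxiliary variable, an imputation model that omits that variable effectively faces nonignorable missingness. The sketch below shows the resulting complete-case bias; the data-generating values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000

aux = rng.normal(0, 1, n)                    # auxiliary variable
factor = 0.6 * aux + rng.normal(0, 0.8, n)   # latent factor correlated with aux
y = 0.7 * factor + rng.normal(0, 0.7, n)     # one CFA indicator

# Missingness depends on aux (a linear MAR mechanism): dropping aux from
# the imputation model makes the missingness nonignorable for y.
p_missing = 1 / (1 + np.exp(-(aux - 0.5)))
y_obs = np.where(rng.uniform(size=n) < p_missing, np.nan, y)

print(f"missing rate: {np.isnan(y_obs).mean():.2f}")
print(f"complete-case mean of y: {np.nanmean(y_obs):+.2f} (true mean ~ 0)")
```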
Wells, Craig S.; Bolt, Daniel M. – Applied Measurement in Education, 2008
Tests of model misfit are often performed to validate the use of a particular model in item response theory. Douglas and Cohen (2001) introduced a general nonparametric approach for detecting misfit under the two-parameter logistic model. However, the statistical properties of their approach, and empirical comparisons to other methods, have not…
Descriptors: Test Length, Test Items, Monte Carlo Methods, Nonparametric Statistics
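The nonparametric ingredient in this line of work is a kernel-smoothed empirical item characteristic curve that can be compared against a fitted parametric one. The sketch below simulates responses from an item with guessing and contrasts the smoothed curve with a 2PL curve; the bandwidth and item parameters are illustrative, and this is a simplified stand-in for the authors' statistic, not their exact procedure.

```python
import numpy as np

rng = np.random.default_rng(11)

def icc_2pl(theta, a, b):
    """Two-parameter logistic item characteristic curve."""
    return 1 / (1 + np.exp(-a * (theta - b)))

# Simulate an item with guessing (3PL-like), then compare a kernel-smoothed
# empirical ICC against a 2PL curve with the same a and b but no guessing:
# the gap at low ability is the kind of misfit a nonparametric check reveals.
n = 5000
theta = rng.normal(0, 1, n)
p_true = 0.2 + 0.8 * icc_2pl(theta, a=1.2, b=0.0)
x = rng.binomial(1, p_true)

h = 0.3  # Gaussian kernel bandwidth (illustrative)
for t0 in np.linspace(-2, 2, 9):
    w = np.exp(-0.5 * ((theta - t0) / h) ** 2)   # kernel weights
    p_hat = np.sum(w * x) / np.sum(w)            # smoothed empirical ICC
    print(f"theta={t0:+.1f}  empirical={p_hat:.2f}  2PL={icc_2pl(t0, 1.2, 0.0):.2f}")
```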
Kromrey, Jeffrey D.; Rendina-Gobioff, Gianna – Educational and Psychological Measurement, 2006
The performance of methods for detecting publication bias in meta-analysis was evaluated using Monte Carlo methods. Four methods of bias detection were investigated: Begg's rank correlation, Egger's regression, funnel plot regression, and trim and fill. Five factors were included in the simulation design: number of primary studies in each…
Descriptors: Comparative Analysis, Meta Analysis, Monte Carlo Methods, Correlation
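Of the four detectors, Egger's regression is the simplest to sketch: regress each study's standardized effect on its precision and test whether the intercept departs from zero, which signals funnel-plot asymmetry. The simulated meta-analysis below induces bias by censoring nonsignificant studies; the selection rule and study counts are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Simulate k studies with a true effect of zero, then "publish" mainly the
# significant ones to create publication bias.
k = 200
se = rng.uniform(0.05, 0.5, k)                 # study standard errors
effect = rng.normal(0.0, se)                   # estimates around a null effect
published = (effect / se > 1.96) | (rng.uniform(size=k) < 0.3)
effect, se = effect[published], se[published]

# Egger's test: standardized effect regressed on precision; a nonzero
# intercept signals small-study (funnel-plot) asymmetry.
res = stats.linregress(1 / se, effect / se)
t_stat = res.intercept / res.intercept_stderr
p_val = 2 * stats.t.sf(abs(t_stat), df=len(se) - 2)
print(f"Egger intercept = {res.intercept:.2f}, p = {p_val:.3g}")
```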
Chan, Wai; Chan, Daniel W.-L. – Psychological Methods, 2004
The standard Pearson correlation coefficient is a biased estimator of the true population correlation, ρ, when the predictor and the criterion are range restricted. To correct the bias, the correlation corrected for range restriction, r_c, has been recommended, and a standard formula based on asymptotic results for estimating its standard…
Descriptors: Computation, Intervals, Sample Size, Monte Carlo Methods
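The correction in question is usually written as Thorndike's Case 2 formula for direct range restriction on the predictor: r_c = r·k / √(1 + r²(k² − 1)), with k the ratio of unrestricted to restricted predictor standard deviations. A minimal numerical check on simulated bivariate normal data follows; the restriction cutoff is an illustrative assumption.

```python
import numpy as np

def correct_range_restriction(r, sd_unrestricted, sd_restricted):
    """Thorndike Case 2 correction for direct range restriction.

    r_c = r*k / sqrt(1 + r**2 * (k**2 - 1)),  k = SD_unrestricted / SD_restricted
    """
    k = sd_unrestricted / sd_restricted
    return r * k / np.sqrt(1 + r**2 * (k**2 - 1))

# Bivariate normal data with true rho = 0.5; keep only the top 40% on the
# predictor to mimic direct range restriction (e.g., selected applicants).
rng = np.random.default_rng(9)
x = rng.normal(0, 1, 100_000)
y = 0.5 * x + np.sqrt(1 - 0.5**2) * rng.normal(0, 1, 100_000)

keep = x > np.quantile(x, 0.6)
r_restricted = np.corrcoef(x[keep], y[keep])[0, 1]
r_corrected = correct_range_restriction(r_restricted, x.std(), x[keep].std())
print(f"restricted r = {r_restricted:.3f}, corrected r_c = {r_corrected:.3f}")
```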
Chou, Tungshan; Huberty, Carl J. – 1992
The empirical performance of the technique proposed by P. O. Johnson and J. Neyman (1936) (the JN technique) and the modification of R. F. Potthoff (1964) was studied in simulated data settings. The robustness of the two JN techniques was investigated with respect to their ability to control Type I and Type III errors. Factors manipulated in the…
Descriptors: Analysis of Variance, Computer Simulation, Equations (Mathematics), Error Patterns
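The JN technique asks, for a regression with a group-by-covariate interaction, over which covariate values the group difference b1 + b3·x is statistically significant. A minimal OLS sketch on simulated crossing regression lines follows; the coefficients, grid, and alpha level are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)

# Two groups whose regression lines cross, so the group difference is
# significant only over part of the covariate's range.
n = 300
g = rng.integers(0, 2, n)                        # group indicator
x = rng.uniform(-2, 2, n)                        # covariate
y = 1.0 + 0.5 * g + 1.0 * x - 0.8 * g * x + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), g, x, g * x])   # design: 1, g, x, g*x
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
cov = (resid @ resid / (n - 4)) * np.linalg.inv(X.T @ X)

# Johnson-Neyman scan: the group difference at x0 is beta1 + beta3*x0.
t_crit = stats.t.ppf(0.975, df=n - 4)
for x0 in np.linspace(-2, 2, 9):
    diff = beta[1] + beta[3] * x0
    se = np.sqrt(cov[1, 1] + 2 * x0 * cov[1, 3] + x0**2 * cov[3, 3])
    flag = "significant" if abs(diff / se) > t_crit else "n.s."
    print(f"x = {x0:+.1f}: group difference = {diff:+.2f} ({flag})")
```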