Showing all 9 results
Peer reviewed
Direct link
van Aert, Robbie C. M. – Research Synthesis Methods, 2023
The partial correlation coefficient (PCC) is used to quantify the linear relationship between two variables while controlling for other variables. Researchers frequently synthesize PCCs in a meta-analysis, but two of the assumptions of the common equal-effect and random-effects meta-analysis models are by definition violated.…
Descriptors: Correlation, Meta Analysis, Sampling, Simulation
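As an illustration of the quantity the abstract above describes, a first-order PCC can be computed from the pairwise Pearson correlations. A minimal sketch (data simulated; the function name is hypothetical and not from the paper):

```python
import numpy as np

def partial_corr(x, y, z):
    """First-order partial correlation between x and y, controlling
    for z, built from the three pairwise Pearson correlations."""
    r_xy = np.corrcoef(x, y)[0, 1]
    r_xz = np.corrcoef(x, z)[0, 1]
    r_yz = np.corrcoef(y, z)[0, 1]
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

rng = np.random.default_rng(0)
z = rng.normal(size=500)
x = z + rng.normal(size=500)  # x and y are both driven by z
y = z + rng.normal(size=500)
print(partial_corr(x, y, z))  # near zero once z is controlled for
```

Here the raw correlation between x and y is substantial, but the PCC is close to zero because their association runs entirely through z.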
Peer reviewed
PDF on ERIC Download full text
Lu, Ru; Guo, Hongwen; Dorans, Neil J. – ETS Research Report Series, 2021
Two families of analysis methods can be used for differential item functioning (DIF) analysis. One family is DIF analysis based on observed scores, such as the Mantel-Haenszel (MH) and the standardized proportion-correct metric for DIF procedures; the other is analysis based on latent ability, in which the statistic is a measure of departure from…
Descriptors: Robustness (Statistics), Weighted Scores, Test Items, Item Analysis
Peer reviewed
Direct link
Wang, Yan; Kim, Eun Sook; Nguyen, Diep Thi; Pham, Thanh Vinh; Chen, Yi-Hsin; Yi, Zhiyao – AERA Online Paper Repository, 2017
The analysis of variance (ANOVA) F test is a commonly used method for testing mean equality across two or more populations. A critical assumption of ANOVA is homogeneity of variance (HOV), that is, that the compared groups have equal variances. Although testing HOV as part of the regular ANOVA procedure is encouraged, the efficacy of the initial HOV…
Descriptors: Statistical Analysis, Error of Measurement, Robustness (Statistics), Sampling
Peer reviewed
Direct link
Cooper, Barry; Glaesser, Judith – International Journal of Social Research Methodology, 2016
Ragin's Qualitative Comparative Analysis (QCA) is often used with small to medium samples where the researcher has good case knowledge. Employing it to analyse large survey datasets, without in-depth case knowledge, raises new challenges. We present ways of addressing these challenges. We first report a single QCA result from a configurational…
Descriptors: Social Science Research, Robustness (Statistics), Educational Sociology, Comparative Analysis
Peer reviewed
Direct link
Menil, Violeta C.; Ye, Ruili – MathAMATYC Educator, 2012
This study serves as a teaching aid for teachers of introductory statistics. Its aim was limited to determining various sample sizes when estimating a population proportion. Tables of sample sizes were generated using a C++ program, which depends on population size, degree of precision or error level, and confidence…
Descriptors: Sample Size, Probability, Statistics, Sampling
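Tables of the kind the abstract describes typically rest on Cochran's sample-size formula for a proportion with a finite population correction. A minimal sketch under that assumption (the exact formula implemented in the paper's C++ program is not given in the abstract):

```python
import math

def sample_size_proportion(N, e, z=1.96, p=0.5):
    """Sample size for estimating a population proportion.
    N: population size, e: margin of error, z: critical value
    (1.96 for 95% confidence), p: assumed proportion (0.5 is
    the conservative worst case)."""
    n0 = z**2 * p * (1 - p) / e**2            # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / N))  # finite population correction

print(sample_size_proportion(N=10_000, e=0.05))  # 370
```

Varying N, e, and z over a grid of values reproduces the structure of such a table: larger error tolerances and smaller populations both shrink the required sample.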
Peer reviewed
Barchard, Kimberly A.; Hakstian, A. Ralph – Multivariate Behavioral Research, 1997
Two studies, both using Type 12 sampling, are presented in which the effects of violating the assumption of essential parallelism in setting confidence intervals are studied. Results indicate that as long as data manifest properties of essential parallelism, the two methods studied maintain precise Type I error control. (SLD)
Descriptors: Error of Measurement, Robustness (Statistics), Sampling, Statistical Analysis
Peer reviewed
Wilcox, Rand R. – Journal of Educational Statistics, 1990
Recently, C. E. McCulloch (1987) suggested a modification of the Morgan-Pitman test for comparing the variances of two dependent groups. This paper demonstrates that there are situations where the procedure is not robust. A subsample approach, similar to the Box-Scheffe test, and the Sandvik-Olsson procedure are also assessed. (TJH)
Descriptors: Comparative Analysis, Equations (Mathematics), Error of Measurement, Mathematical Models
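The Morgan-Pitman test discussed above reduces to a correlation test: Cov(X+Y, X−Y) = Var(X) − Var(Y), so equal variances imply zero correlation between the sum and the difference of the paired observations. A minimal sketch of the classical test (data simulated; this is not McCulloch's modification):

```python
import numpy as np
from scipy import stats

def morgan_pitman(x, y):
    """Classical Morgan-Pitman test for equal variances of two
    dependent samples: Pearson correlation test of (x+y) vs (x-y)."""
    r, p = stats.pearsonr(x + y, x - y)
    return r, p

rng = np.random.default_rng(2)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(scale=3.0, size=200)  # dependent, larger variance
r, p = morgan_pitman(x, y)
print(f"r = {r:.3f}, p = {p:.4f}")
```

Because Var(Y) exceeds Var(X) here, the correlation is negative and the test rejects; Wilcox's point is that this procedure, and McCulloch's variant, can fail to be robust under some non-normal distributions.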
Longford, Nicholas T. – 1992
Large-scale surveys usually employ a complex sampling design and, as a consequence, no standard methods are available for estimating the standard errors associated with estimates of population means. Resampling methods, such as the jackknife or bootstrap, are often used, with reference to their properties of robustness and bias reduction. A…
Descriptors: Error of Measurement, Estimation (Mathematics), Prediction, Research Design
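The resampling idea the abstract refers to, in its simplest i.i.d. form, can be sketched as a bootstrap standard error for a mean. This sketch ignores the complex-design aspect the report is actually about; a survey application would need resampling that respects strata and clusters:

```python
import numpy as np

def bootstrap_se(sample, n_boot=2000, seed=0):
    """Bootstrap standard error of the sample mean: resample with
    replacement, recompute the mean, take the SD of the replicates."""
    rng = np.random.default_rng(seed)
    n = len(sample)
    means = [rng.choice(sample, size=n, replace=True).mean()
             for _ in range(n_boot)]
    return float(np.std(means, ddof=1))

rng = np.random.default_rng(3)
data = rng.normal(loc=50.0, scale=10.0, size=100)
se = bootstrap_se(data)  # should be close to 10 / sqrt(100) = 1.0
print(se)
```

The jackknife variant replaces random resamples with the n leave-one-out subsamples; both are used in survey work because they require no closed-form variance formula for the design.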
Peer reviewed
Freedman, David A.; And Others – Evaluation Review, 1993
Techniques for adjusting census figures are discussed, with a focus on sampling error: the uncertainty of estimates arising from the chance outcome of sample selection. Computer simulations illustrate ways in which the smoothing algorithm may make adjustments less, rather than more, accurate. (SLD)
Descriptors: Algorithms, Census Figures, Computer Simulation, Error of Measurement