Showing all 7 results
Peer reviewed
Direct link
Hong, Sanghyun; Reed, W. Robert – Research Synthesis Methods, 2021
The purpose of this study is to show how Monte Carlo analysis of meta-analytic estimators can be used to select estimators for specific research situations. Our analysis conducts 1620 individual experiments, where each experiment is defined by a unique combination of sample size, effect size, effect size heterogeneity, publication selection…
Descriptors: Monte Carlo Methods, Meta Analysis, Research Methodology, Experiments
Peer reviewed
Direct link
Dodge, Nadine; Chapman, Ralph – International Journal of Social Research Methodology, 2018
Electronically assisted survey techniques offer several advantages over traditional survey techniques. However, they can also potentially introduce biases, such as coverage biases and measurement error. The current study compares the relative merits of two survey distribution and completion modes: email recruitment with internet completion; and…
Descriptors: Online Surveys, Handheld Devices, Bias, Electronic Mail
Peer reviewed
Direct link
Davis, Alexander L.; Fischhoff, Baruch – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2014
Four experiments examined when laypeople attribute unexpected experimental outcomes to error, in foresight and in hindsight, along with their judgments of whether the data should be published. Participants read vignettes describing hypothetical experiments, along with the result of the initial observation, considered as either a possibility…
Descriptors: Evidence, Vignettes, Error Patterns, Error of Measurement
Peer reviewed
PDF on ERIC
Citkowicz, Martyna; Polanin, Joshua R. – Society for Research on Educational Effectiveness, 2014
Meta-analyses are syntheses of effect-size estimates obtained from a collection of studies to summarize a particular field or topic (Hedges, 1992; Lipsey & Wilson, 2001). These reviews are used to integrate knowledge that can inform both scientific inquiry and public policy; it is therefore important to ensure that the estimates of the effect…
Descriptors: Meta Analysis, Accountability, Cluster Grouping, Effect Size
Peer reviewed
Direct link
Bernard, Robert M.; Borokhovski, Eugene; Schmid, Richard F.; Tamim, Rana M. – Journal of Computing in Higher Education, 2014
This article contains a second-order meta-analysis and an exploration of bias in the technology integration literature in higher education. Thirteen meta-analyses, dated from 2000 to 2014, were selected for inclusion based on the questions asked and the presence of adequate statistical information to conduct a quantitative synthesis. The weighted…
Descriptors: Meta Analysis, Bias, Technology Integration, Higher Education
Peer reviewed
Direct link
Hoyt, William T. – Journal of Counseling Psychology, 2002
Rater bias has long been considered a source of error in observer ratings but has been ignored by process researchers using participant ratings. In particular, rater variance, or differences in generalized favorable or unfavorable perceptions of others, represents a neglected source of error in studies using participant ratings. The author…
Descriptors: Psychotherapy, Generalizability Theory, Research Methodology, Error of Measurement
Bernstein, Lawrence; Burstein, Nancy – 1994
The inherent methodological problem in conducting research at multiple sites is how best to derive an overall estimate of program impact across those sites, "best" being the estimate that minimizes the mean square error, that is, the square of the difference between the observed and true values. An empirical example illustrates the use of the…
Descriptors: Bias, Comprehensive Programs, Data Analysis, Data Collection