Showing all 12 results
Peer reviewed
Warne, Russell T. – Journal of Advanced Academics, 2022
Picho-Kiroga (2021) recently published a meta-analysis of the effect of stereotype threat on females. Its conclusion was that the average effect size for stereotype threat studies was d = .28, but that effects are overstated because the majority of studies of stereotype threat in females include methodological characteristics that inflate the…
Descriptors: Sex Stereotypes, Females, Meta Analysis, Effect Size
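For reference, the d above is Cohen's d, the standardized mean difference between two groups. A textbook formulation (not taken from the article itself; the meta-analysis may apply weights or corrections beyond this) is:

```latex
d = \frac{\bar{x}_1 - \bar{x}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1) s_1^2 + (n_2 - 1) s_2^2}{n_1 + n_2 - 2}}
```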
Peer reviewed
Devlieger, Ines; Mayer, Axel; Rosseel, Yves – Educational and Psychological Measurement, 2016
This article gives an overview of four methods for performing factor score regression (FSR): regression FSR, Bartlett FSR, the bias-avoiding method of Skrondal and Laake, and the bias-correcting method of Croon. The bias-correcting method is extended to include a reliable standard error. The four methods are compared with each other and…
Descriptors: Regression (Statistics), Comparative Analysis, Structural Equation Models, Monte Carlo Methods
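As a rough illustration of the two-step logic these methods share (a minimal sketch with invented names and a simulated one-factor model, not the authors' code), regression-method factor scores can be computed and then used as a predictor; the naive version below exhibits exactly the attenuation bias that Croon's method corrects:

```python
# Minimal two-step factor score regression sketch (regression-method scores).
# Assumes a one-factor measurement model for three indicators; the loadings,
# sample size, and simulation setup are illustrative, not from the article.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
eta = rng.normal(size=n)                       # latent factor (unit variance)
lam = np.array([0.8, 0.7, 0.6])                # loadings (treated as known)
X = eta[:, None] * lam + rng.normal(scale=0.5, size=(n, 3))
y = 0.4 * eta + rng.normal(scale=1.0, size=n)  # structural outcome

# Regression-method factor scores: E[eta | x] = lam' Sigma^{-1} x
# (with unit factor variance).
Sigma = np.cov(X, rowvar=False)
w = np.linalg.solve(Sigma, lam)
scores = X @ w

# Step 2: OLS of y on the factor scores (naive FSR; Croon's bias-correcting
# method would undo the attenuation in this slope).
slope = np.polyfit(scores, y, 1)[0]
print("naive FSR slope:", slope)               # biased relative to 0.4
```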
Peer reviewed
Lockwood, J. R.; Castellano, Katherine E. – Grantee Submission, 2015
This article suggests two alternative statistical approaches for estimating student growth percentiles (SGP). The first is to estimate percentile ranks of current test scores conditional on past test scores directly, by modeling the conditional cumulative distribution functions, rather than indirectly through quantile regressions. This would…
Descriptors: Statistical Analysis, Achievement Gains, Academic Achievement, Computation
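A toy version of the "direct" approach described above computes percentile ranks of current scores within strata of the prior score rather than inverting a battery of quantile regressions (simulated data and coarse binning for illustration only; the article models the conditional CDFs rather than binning):

```python
# Toy direct conditional-percentile-rank estimate: stratify students by
# prior-year score and compute each current score's percentile rank within
# its stratum. Purely illustrative; data and bin counts are invented.
import numpy as np

rng = np.random.default_rng(1)
prior = rng.normal(size=5000)
current = 0.7 * prior + rng.normal(scale=0.7, size=5000)

# Decile bins on the prior score; a real estimator would model the
# conditional CDF directly instead of coarse binning.
edges = np.quantile(prior, np.linspace(0, 1, 11))
stratum = np.clip(np.digitize(prior, edges[1:-1]), 0, 9)

sgp = np.empty_like(current)
for s in range(10):
    m = stratum == s
    ranks = current[m].argsort().argsort()      # 0 .. n_s - 1
    sgp[m] = 100.0 * (ranks + 0.5) / m.sum()    # percentile ranks in (0, 100)
print(sgp[:5].round(1))
```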
Peer reviewed
Schoeneberger, Jason A. – Journal of Experimental Education, 2016
The design of research studies using binary multilevel models must incorporate knowledge of multiple factors, including estimation method, variance component size, and number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random-effect binary-outcome multilevel models under varying…
Descriptors: Sample Size, Models, Computation, Predictor Variables
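For concreteness, the data-generating side of one replication of such a study might look like the following (a generic random-intercept logistic simulation; the cluster counts and parameter values are invented, not the conditions the article examines):

```python
# One replication of a Monte Carlo for a random-intercept logistic model:
# y_ij ~ Bernoulli(logit^-1(b0 + b1 * x_ij + u_j)), u_j ~ N(0, tau2).
# All parameter choices are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_clusters, cluster_size = 50, 20
b0, b1, tau2 = -0.5, 0.3, 0.5

u = rng.normal(scale=np.sqrt(tau2), size=n_clusters)  # random intercepts
x = rng.normal(size=(n_clusters, cluster_size))       # level-1 predictor
eta = b0 + b1 * x + u[:, None]
y = rng.random((n_clusters, cluster_size)) < 1 / (1 + np.exp(-eta))

# A study of this kind fits the multilevel model to many such replicates,
# varying n_clusters, cluster_size, tau2, and the estimation method, and
# records bias, efficiency, and coverage of the estimates.
print("overall event rate:", y.mean())
```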
Peer reviewed
Dong, Nianbo – American Journal of Evaluation, 2015
Researchers have become increasingly interested in the main and interaction effects of two variables (A and B, e.g., two treatment variables, or one treatment variable and one moderator) on program outcomes. A challenge in estimating main and interaction effects is eliminating selection bias across the A-by-B groups. I introduce Rubin's causal model to…
Descriptors: Probability, Statistical Analysis, Research Design, Causal Models
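One common device for removing selection bias across the four A-by-B cells is weighting by generalized propensity scores. The sketch below (invented data and names; not necessarily the procedure the article develops) estimates cell-membership probabilities with a multinomial logistic model and compares weighted cell means:

```python
# Inverse-probability weighting for a 2x2 (A-by-B) design: model the
# probability of landing in each of the four cells given covariates,
# then compare weighted cell means. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 4000
z = rng.normal(size=(n, 2))                    # confounders driving selection
a = (rng.random(n) < 1 / (1 + np.exp(-z[:, 0]))).astype(int)
b = (rng.random(n) < 1 / (1 + np.exp(-z[:, 1]))).astype(int)
cell = 2 * a + b                               # 0..3 cell labels
y = 1.0 * a + 0.5 * b + 0.25 * a * b + z.sum(axis=1) + rng.normal(size=n)

ps = LogisticRegression(max_iter=1000).fit(z, cell).predict_proba(z)
w = 1.0 / ps[np.arange(n), cell]               # IPW weights

mu = np.array([np.average(y[cell == c], weights=w[cell == c])
               for c in range(4)])
# cells: 0 = (A=0,B=0), 1 = (0,1), 2 = (1,0), 3 = (1,1)
interaction = (mu[3] - mu[2]) - (mu[1] - mu[0])
print("interaction estimate:", round(interaction, 2))  # true value here: 0.25
```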
Peer reviewed
Bernard, Robert M.; Borokhovski, Eugene; Schmid, Richard F.; Tamim, Rana M. – Journal of Computing in Higher Education, 2014
This article contains a second-order meta-analysis and an exploration of bias in the technology integration literature in higher education. Thirteen meta-analyses, dated from 2000 to 2014, were selected for inclusion based on the questions asked and the presence of adequate statistical information to conduct a quantitative synthesis. The weighted…
Descriptors: Meta Analysis, Bias, Technology Integration, Higher Education
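The truncated sentence refers to a weighted synthesis; in standard meta-analytic notation (a textbook formula, not quoted from the article), the inverse-variance weighted mean effect across the constituent meta-analyses is:

```latex
\bar{g} = \frac{\sum_i w_i \, g_i}{\sum_i w_i},
\qquad
w_i = \frac{1}{v_i}
```

where g_i is the mean effect size from meta-analysis i and v_i its sampling variance.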
Peer reviewed
Pullin, Andrew S.; Knight, Teri M. – New Directions for Evaluation, 2009
To use environmental program evaluation to increase effectiveness, predictive power, and resource-allocation efficiency, evaluators need good data. Data must have sufficient credibility, in terms of fitness for purpose and quality, to develop the necessary evidence base. The authors examine elements of data credibility using experience from critical…
Descriptors: Data, Credibility, Conservation (Environment), Program Evaluation
Liu, Jinghua; Sinharay, Sandip; Holland, Paul W.; Feigenbaum, Miriam; Curley, Edward – Educational Testing Service, 2009
This study explores the use of a different type of anchor, a "midi anchor," that has a smaller spread of item difficulties than the tests to be equated, and contrasts its use with that of a "mini anchor." The impact of different anchors on observed-score equating was evaluated and compared with respect to systematic…
Descriptors: Equated Scores, Test Items, Difficulty Level, Error of Measurement
Peer reviewed
Haberman, Shelby J. – Psychometrika, 2006
When a simple random sample of size n is employed to establish a classification rule for prediction of a polytomous variable by an independent variable, the best achievable rate of misclassification is higher than the corresponding best achievable rate if the conditional probability distribution is known for the predicted variable given the…
Descriptors: Bias, Computation, Sample Size, Classification
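The "best achievable rate of misclassification" when the conditional distribution is known is the Bayes rate; in standard notation (not the article's own):

```latex
R^{*} = 1 - \mathbb{E}_{X}\left[ \max_{j} \, P(Y = j \mid X) \right]
```

and, as the abstract notes, a classification rule estimated from a finite sample can only attain a higher rate in expectation.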
Peer reviewed
Neumark, David – Economics of Education Review, 1999
Recent within-twin estimates of schooling returns are considerably higher than existing estimates. This paper shows that small ability differences among twins can yield more upward omitted-ability bias (and more upward bias overall) in the instrumental variables estimate correcting for measurement error than in the standard within-twin estimate.…
Descriptors: Bias, Econometrics, Education Work Relationship, Elementary Secondary Education
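The argument runs through the textbook omitted-variable-bias formula. For a returns-to-schooling regression with schooling S and omitted ability A entering with coefficient gamma (generic notation, not the paper's):

```latex
\operatorname{plim} \hat{\beta}_{\mathrm{OLS}} = \beta + \gamma \, \frac{\operatorname{Cov}(S, A)}{\operatorname{Var}(S)}
```

Within-twin differencing shrinks the variance of the schooling regressor, so even small twin ability differences can leave a large bias ratio, and per the abstract the instrumental variables correction for measurement error can amplify this further.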
Bernstein, Lawrence; Burstein, Nancy – 1994
The inherent methodological problem in conducting research at multiple sites is how best to derive an overall estimate of program impact across sites, "best" being the estimate that minimizes the mean square error, that is, the expected squared difference between the estimated and true values. An empirical example illustrates the use of the…
Descriptors: Bias, Comprehensive Programs, Data Analysis, Data Collection
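In this criterion, for an estimator of the true impact (the standard definition, not specific to the paper):

```latex
\operatorname{MSE}(\hat{\theta}) = \mathbb{E}\left[ (\hat{\theta} - \theta)^2 \right]
```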
Gardner, Eric F. – NCME Measurement in Education, 1978
It is suggested that bias, when associated with a predictor, a test, or a statistical estimator, is not always bad, in spite of the immediate negative response evoked by the word "bias." Four settings are described to illustrate situations in which a procedure should not be summarily rejected because of bias: (1) educational researchers rejected the…
Descriptors: Achievement Tests, Bias, Competitive Selection, Emotional Response
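Gardner's point about estimators rests on the standard bias-variance decomposition (a textbook identity, not quoted from the paper):

```latex
\operatorname{MSE}(\hat{\theta}) = \operatorname{Var}(\hat{\theta}) + \operatorname{Bias}(\hat{\theta})^2
```

so a biased estimator with small enough variance can have lower mean square error than an unbiased one; for normally distributed data, for example, dividing the sum of squared deviations by n + 1 rather than n - 1 yields a biased variance estimator with strictly smaller MSE.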