Showing 1 to 15 of 22 results
Peer reviewed
Direct link
Angela Johnson; Elizabeth Barker; Marcos Viveros Cespedes – Educational Measurement: Issues and Practice, 2024
Educators and researchers strive to build policies and practices on data and evidence, especially on academic achievement scores. When assessment scores are inaccurate for specific student populations or when scores are inappropriately used, even data-driven decisions will be misinformed. To maximize the impact of the research-practice-policy…
Descriptors: Equal Education, Inclusion, Evaluation Methods, Error of Measurement
Peer reviewed
PDF on ERIC: Download full text
What Works Clearinghouse, 2020
This supplement concerns Appendix E of the "What Works Clearinghouse (WWC) Procedures Handbook, Version 4.1." The supplement extends the range of designs and analyses that can generate effect size and standard error estimates for the WWC. This supplement presents several new standard error formulas for cluster-level assignment studies,…
Descriptors: Educational Research, Evaluation Methods, Effect Size, Research Design
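(The abstract stops before the formulas themselves. For background only: the WWC's cluster corrections build on the Hedges (2007) adjustment, sketched below; this is the baseline correction, not the supplement's new standard-error formulas.

    g_adj = g * sqrt( 1 - 2(n - 1) * rho / (N - 2) )

where g is the unadjusted standardized effect size, N the total sample size across clusters, n the average cluster size, and rho the intraclass correlation.)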
Peer reviewed
Direct link
Porter, Kristin E.; Reardon, Sean F.; Unlu, Fatih; Bloom, Howard S.; Cimpian, Joseph R. – Journal of Research on Educational Effectiveness, 2017
A valuable extension of the single-rating regression discontinuity design (RDD) is a multiple-rating RDD (MRRDD). To date, four main methods have been used to estimate average treatment effects at the multiple treatment frontiers of an MRRDD: the "surface" method, the "frontier" method, the "binding-score" method, and…
Descriptors: Regression (Statistics), Intervention, Quasiexperimental Design, Simulation
Westlund, Erik; Stuart, Elizabeth A. – American Journal of Evaluation, 2017
This article discusses the nonuse, misuse, and proper use of pilot studies in experimental evaluation research. The authors first show that there is little theoretical, practical, or empirical guidance available to researchers who seek to incorporate pilot studies into experimental evaluation research designs. The authors then discuss how pilot…
Descriptors: Use Studies, Pilot Projects, Evaluation Research, Experiments
Peer reviewed
PDF on ERIC: Download full text
Citkowicz, Martyna; Hedges, Larry V. – Society for Research on Educational Effectiveness, 2013
In some instances, intentionally or not, study designs are such that there is clustering in one group but not in the other. This paper describes methods for computing effect size estimates and their variances when there is clustering in only one group and the analysis has not taken that clustering into account. The authors provide the effect size…
Descriptors: Multivariate Analysis, Effect Size, Sampling, Sample Size
Peer reviewed
Direct link
Phillips, Gary W. – Applied Measurement in Education, 2015
This article proposes that sampling design effects have potentially huge unrecognized impacts on the results reported by large-scale district and state assessments in the United States. When design effects are unrecognized and unaccounted for, they lead to underestimates of the sampling error in item and test statistics. Underestimating the sampling…
Descriptors: State Programs, Sampling, Research Design, Error of Measurement
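(As background for the abstract above; this is standard survey-sampling material, not a formula quoted from the article. Kish's design effect for a clustered sample is

    DEFF = 1 + (m - 1) * rho

where m is the average cluster size and rho the intraclass correlation. The effective sample size is n_eff = n / DEFF, so ignoring clustering understates standard errors by roughly a factor of sqrt(DEFF).)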
Zajonc, Tristan – ProQuest LLC, 2012
Effective policymaking requires understanding the causal effects of competing proposals. Relevant causal quantities include proposals' expected effect on different groups of recipients, the impact of policies over time, the potential trade-offs between competing objectives, and, ultimately, the optimal policy. This dissertation studies causal…
Descriptors: Public Policy, Policy Formation, Bayesian Statistics, Economic Development
Peer reviewed
Direct link
Gugiu, P. Cristian – Journal of MultiDisciplinary Evaluation, 2007
The constraints of conducting evaluations in real-world settings often necessitate the implementation of less than ideal designs. Unfortunately, the standard method for estimating the precision of a result (i.e., confidence intervals [CI]) cannot be used for evaluative conclusions that are derived from multiple indicators, measures, and data…
Descriptors: Measurement, Evaluation Methods, Evaluation Problems, Error of Measurement
Peer reviewed
Goldstein, Harvey A.; Cruze, Alvin M. – Monthly Labor Review, 1987
The article summarizes the results of an evaluation of the accuracy of statewide industry and occupational employment projections for 20 states. The authors provide some recommendations, based on evaluation results, to improve subsequent rounds of statewide projections. (CH)
Descriptors: Employment Patterns, Employment Projections, Error of Measurement, Evaluation Methods
Peer reviewed
Allison, David B.; And Others – Journal of Experimental Education, 1992
Effects of response guided experimentation in applied behavior analysis on Type I error rates are explored. Data from T. A. Matyas and K. M. Greenwood (1990) suggest that, when visual inspection is combined with response guided experimentation, Type I error rates can be as high as 25%. (SLD)
Descriptors: Behavioral Science Research, Error of Measurement, Evaluation Methods, Experiments
Peer reviewed
Loo, Robert – Perceptual and Motor Skills, 1983
In examining considerations in determining sample sizes for factor analyses, attention was given to the effects of outliers; the standard error of correlations, and their effect on factor structure; sample heterogeneity; and the misuse of rules of thumb for sample sizes. (Author)
Descriptors: Correlation, Error of Measurement, Evaluation Methods, Factor Analysis
Conley, David T. – Educational Policy Improvement Center (NJ1), 2007
The AP Course Audit utilizes a criterion-based professional judgment method of analysis within a nested multi-step review process. The overall goal of the methodology is to yield a final judgment on each syllabus that is ultimately valid. While reviewer consistency is an important consideration, the most important goal is to reach a final judgment…
Descriptors: Academic Achievement, Compliance (Legal), Course Descriptions, Course Content
Olejnik, Stephen F.; Algina, James – 1986
Sampling distributions for ten tests for comparing population variances in a two group design were generated for several combinations of equal and unequal sample sizes, population means, and group variances when distributional forms differed. The ten procedures included: (1) O'Brien's (OB); (2) O'Brien's with adjusted degrees of freedom; (3)…
Descriptors: Error of Measurement, Evaluation Methods, Measurement Techniques, Nonparametric Statistics
Peer reviewed
Corder-Bolz, Charles R. – Educational and Psychological Measurement, 1978
Six models for evaluating change are examined via a Monte Carlo study. All six models show a lack of power. A modified analysis of variance procedure is suggested as an alternative. (JKS)
Descriptors: Analysis of Covariance, Analysis of Variance, Educational Change, Error of Measurement
Peer reviewed
Maher, W. A.; And Others – Environmental Monitoring and Assessment, 1994
Presents a general framework for designing sampling programs that ensure cost effectiveness and keep errors within known and acceptable limits. (LZ)
Descriptors: Cost Effectiveness, Environmental Education, Environmental Research, Error of Measurement