Showing all 10 results
Peer reviewed
Full text available on ERIC
Keller, Bryan S. B.; Kim, Jee-Seon; Steiner, Peter M. – Society for Research on Educational Effectiveness, 2013
Propensity score analysis (PSA) is a methodological technique that can correct for selection bias in a quasi-experiment by modeling the selection process using observed covariates. Because logistic regression is well understood by researchers in a variety of fields and easy to implement in a number of popular software packages, it has…
Descriptors: Probability, Scores, Statistical Analysis, Statistical Bias
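The abstract above names the standard recipe: fit a logistic regression of treatment on observed covariates to get propensity scores, then reweight. A minimal sketch of that idea on synthetic data (the data-generating process, coefficients, and true effect of 2.0 are all illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 2))                       # observed covariates
logit = 0.8 * x[:, 0] - 0.5 * x[:, 1]
treat = rng.random(n) < 1 / (1 + np.exp(-logit))  # selection depends on x
y = 2.0 * treat + x[:, 0] + rng.normal(size=n)    # true effect = 2.0

# Fit logistic regression by Newton's method to estimate propensity scores.
X = np.column_stack([np.ones(n), x])
beta = np.zeros(X.shape[1])
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (treat - p)
    hess = (X * (p * (1 - p))[:, None]).T @ X
    beta += np.linalg.solve(hess, grad)
ps = 1 / (1 + np.exp(-X @ beta))

# Naive difference in means is biased by selection; inverse-propensity
# weighting recovers an estimate close to the true effect.
naive = y[treat].mean() - y[~treat].mean()
ate = np.mean(treat * y / ps) - np.mean((1 - treat) * y / (1 - ps))
print(round(naive, 2), round(ate, 2))
```

With selection driven by the same covariate that raises the outcome, the naive estimate overstates the effect while the weighted estimate lands near 2.0.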
Cason, Gerald J.; Cason, Carolyn L. – 1989
The use of three remedies for errors in the measurement of ability that arise from differences in rater stringency is discussed. Models contrasted are: (1) Conventional; (2) Handicap; and (3) deterministic Rater Response Theory (RRT). General model requirements, power, bias of measures, computing cost, and complexity are contrasted. Contrasts are…
Descriptors: Ability, Achievement Rating, Error of Measurement, Evaluation Methods
Peer reviewed
Willms, J. Douglas; Raudenbush, Stephen W. – Journal of Educational Measurement, 1989
A general longitudinal model is presented for estimating school effects and their stability. The model, capable of separating true changes from sampling and measurement error, controls statistically for effects of factors exogenous to the school system. The model is illustrated with data from large cohorts of students in Scotland. (SLD)
Descriptors: Elementary Secondary Education, Equations (Mathematics), Error of Measurement, Estimation (Mathematics)
Peer reviewed
Corder-Bolz, Charles R. – Educational and Psychological Measurement, 1978
Six models for evaluating change are examined via a Monte Carlo study. All six models show a lack of power. A modified analysis of variance procedure is suggested as an alternative. (JKS)
Descriptors: Analysis of Covariance, Analysis of Variance, Educational Change, Error of Measurement
Hill, Richard – 1997
In the Spring 1996 issue of "CRESST Line," E. Baker and R. Linn commented that, in efforts to measure the progress of schools, "the fluctuations due to differences in the students themselves could conceal differences in instructional effects." This is particularly true in the context of the evaluation of adequate yearly…
Descriptors: Academic Achievement, Compensatory Education, Disadvantaged Youth, Educational Improvement
Peer reviewed
Raymond, Mark R.; Viswesvaran, Chockalingam – Journal of Educational Measurement, 1993
Three variations of a least squares regression model are presented that are suitable for determining and correcting for rating error in designs in which examinees are evaluated by a subset of possible raters. Models are applied to ratings from 4 administrations of a medical certification examination in which 40 raters and approximately 115…
Descriptors: Error of Measurement, Evaluation Methods, Higher Education, Interrater Reliability
Cason, Gerald J.; And Others – 1983
Prior research in a single clinical training setting has shown Cason and Cason's (1981) simplified model of their performance rating theory can improve rating reliability and validity through statistical control of rater stringency error. Here, the model was applied to clinical performance ratings of 14 cohorts (about 250 students and 200 raters)…
Descriptors: Clinical Experience, Error of Measurement, Evaluation Methods, Higher Education
Raffeld, Paul; And Others – 1979
The RMC Model A (norm-referenced) for evaluation of Title I programs is based upon the equipercentile assumption--that students maintain their percentile rank over a one-year period, provided that no special instructional intervention is introduced. The control group, essentially the sample used to standardize the achievement test, represents the…
Descriptors: Achievement Gains, Critical Path Method, Elementary Education, Error of Measurement
Echternacht, Gary – 1979
The role that measurement error plays in the regression effect is discussed with particular reference to the RMC models for evaluating the effects of Elementary Secondary Education Act Title I programs. The norm referenced evaluation model assumes a theory of growth where the relative ranking of students remains the same from pretest to posttest…
Descriptors: Achievement Gains, Comparative Analysis, Educational Assessment, Elementary Secondary Education
Peer reviewed
Young, John W. – Journal of Research in Education, 1992
Uses the general linear model to develop an adjusted cumulative grade point average (GPA) that systematically models grading effects among courses. A validation study using 778 courses of 1,564 Stanford (California) University students shows an increase in predictability of the adjusted least-squares GPA over the unadjusted GPA. (SLD)
Descriptors: Academic Achievement, Admission (School), Course Selection (Students), Error of Measurement
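The Young entry describes using the general linear model to adjust GPA for grading differences among courses. A minimal sketch of that kind of adjustment, assuming a simple two-way model (grade = student effect + course effect) and toy data constructed from known effects so recovery can be checked; this is not the paper's exact specification:

```python
import numpy as np

# Toy data: (student, course, grade) triples built as ability + course
# leniency, abilities [3.6, 3.0, 2.6, 3.2], leniencies [0.2, -0.3, 0.1].
pairs = [
    (0, 0, 3.8), (0, 1, 3.3), (1, 0, 3.2), (1, 2, 3.1),
    (2, 1, 2.3), (2, 2, 2.7), (3, 0, 3.4), (3, 2, 3.3),
]
n_students, n_courses = 4, 3

rows, grades = [], []
for s, c, g in pairs:
    row = np.zeros(n_students + n_courses)
    row[s] = 1.0               # student dummy
    row[n_students + c] = 1.0  # course dummy
    rows.append(row)
    grades.append(g)
X, y = np.array(rows), np.array(grades)

# The dummy design is rank-deficient (effects identified only up to a
# constant), so solve by least squares and center the student effects.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
adjusted = coef[:n_students] - coef[:n_students].mean()
print(np.round(adjusted, 2))
```

The centered student effects recover each student's standing net of course grading severity: a student who took only leniently graded courses is adjusted down, one who took harshly graded courses is adjusted up.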