Showing 1 to 15 of 19 results
Peer reviewed
Direct link
Ole J. Kemi – Advances in Physiology Education, 2025
Students are assessed by coursework and/or exams, all of which are marked by assessors (markers). Student and marker performance is then reviewed and analyzed by an end-of-session board of examiners. This occurs annually and is the basis for evaluating not only students but also the wider learning and teaching efficiency of an academic institution…
Descriptors: Undergraduate Students, Evaluation Methods, Evaluation Criteria, Academic Standards
Patrick C. Kyllonen; Amit Sevak; Teresa Ober; Ikkyu Choi; Jesse Sparks; Daniel Fishtein – ETS Research Institute, 2024
Assessment refers to a broad array of approaches for measuring or evaluating a person's (or group of persons') skills, behaviors, dispositions, or other attributes. Assessments range from standardized tests used in admissions, employee selection, licensure examinations, and domestic and international large-scale assessments of cognitive and…
Descriptors: Performance Based Assessment, Evaluation Criteria, Evaluation Methods, Test Bias
Peer reviewed
Direct link
Tassé, Marc J.; Luckasson, Ruth; Schalock, Robert L. – Intellectual and Developmental Disabilities, 2016
Intellectual disability originates during the developmental period and is characterized by significant limitations both in intellectual functioning and in adaptive behavior as expressed in conceptual, social, and practical adaptive skills. In this article, we present a brief history of the diagnostic criteria of intellectual disability for both…
Descriptors: Intellectual Disability, Adjustment (to Environment), Intellectual Development, Educational Diagnosis
Peer reviewed
Direct link
Gottlieb, Derek; Moroye, Christy M. – Journal of Curriculum and Pedagogy, 2016
We examine the reliance on rubrics for educational evaluation and explore whether such tools fulfill their promise. Following Wittgensteinian critical strategies, we explore what "the application of the [rubric] picture looks like" and then evaluate (a) whether those benefits are attributable to rubric use at all, and (b) whether any of…
Descriptors: Scoring Rubrics, Educational Assessment, Student Evaluation, Educational Benefits
Peer reviewed
Direct link
Rhoads, Christopher – Journal of Research on Educational Effectiveness, 2016
Experimental evaluations that involve the educational system usually involve a hierarchical structure (students are nested within classrooms that are nested within schools, etc.). Concerns about contamination, where research subjects receive certain features of an intervention intended for subjects in a different experimental group, have often led…
Descriptors: Educational Experiments, Error of Measurement, Research Design, Statistical Analysis
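Rhoads's clustering concern can be made concrete with a short calculation. The sketch below uses invented numbers (not from the article) to compute the textbook design effect, 1 + (m − 1)ρ, which shows how intraclass correlation inflates the variance of estimates when intact classrooms are assigned together to avoid contamination.

```python
# Hypothetical illustration of the design effect in a clustered experiment.
# Numbers are invented for the example; nothing here comes from Rhoads (2016).

def design_effect(cluster_size: float, icc: float) -> float:
    """Variance inflation from assigning intact clusters: 1 + (m - 1) * rho."""
    return 1.0 + (cluster_size - 1.0) * icc

m = 25      # students per classroom (assumed)
rho = 0.15  # intraclass correlation (assumed)

deff = design_effect(m, rho)
print(f"Design effect: {deff:.2f}")          # 4.60
print(f"Effective sample size per 100 students: {100 / deff:.1f}")
```

With ρ = 0.15 and 25 students per classroom, 100 clustered students carry roughly the information of 22 independently randomized ones, which is why contamination-avoiding cluster designs need far larger samples.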
Goldhaber, Dan; Loeb, Susanna – Carnegie Foundation for the Advancement of Teaching, 2013
Better teacher evaluation should lead to better instruction and improved outcomes for students, but more accurate classification of teachers requires better information than is now available. Because existing measures of performance are incomplete and imperfect, measured performance does not always reflect true performance. Teachers who are truly…
Descriptors: Personnel Management, Personnel Policy, Teacher Evaluation, Teacher Effectiveness
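The misclassification problem Goldhaber and Loeb describe is easy to demonstrate by simulation. The sketch below is hypothetical (the reliability and counts are assumed, not taken from the brief): it draws true teacher effectiveness, adds measurement error, and counts how often a teacher lands on the wrong side of the median.

```python
# Hypothetical simulation of teacher misclassification under measurement error.
# Parameters are assumed for illustration; not from Goldhaber & Loeb (2013).
import numpy as np

rng = np.random.default_rng(0)
n_teachers = 10_000
reliability = 0.5  # share of observed variance that is true signal (assumed)

true_effect = rng.normal(size=n_teachers)
noise = rng.normal(scale=np.sqrt((1 - reliability) / reliability), size=n_teachers)
observed = true_effect + noise

truly_low = true_effect < np.median(true_effect)
rated_low = observed < np.median(observed)

misclassified = np.mean(truly_low != rated_low)
print(f"Share of teachers on the wrong side of the median: {misclassified:.1%}")
```

With reliability 0.5, roughly a quarter of teachers are rated on the wrong side of the median even though the measure is unbiased on average.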
Peer reviewed
Download full text (PDF on ERIC)
Wang, Binhong – English Language Teaching, 2010
This paper first analyzes two studies on rater factors and rating criteria to raise the problem of rater agreement. The author then reveals the causes of discrepancies in rating administration by discussing rater variability and rater bias. The author argues that rater bias cannot be eliminated completely; we can only reduce the error to a…
Descriptors: Interrater Reliability, Examiners, Training, Bias
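Rater agreement of the kind Wang analyzes is usually quantified with a chance-corrected index. The sketch below computes Cohen's kappa for two raters from scratch; the ratings are invented for illustration and do not come from the paper.

```python
# Cohen's kappa for two raters, computed from scratch.
# The ratings below are invented for illustration (not from Wang, 2010).
from collections import Counter

rater_a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
rater_b = ["pass", "fail", "fail", "pass", "fail", "pass", "fail", "fail"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected agreement by chance, from each rater's marginal category rates.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2

kappa = (observed - expected) / (1 - expected)
print(f"Observed agreement: {observed:.2f}, kappa: {kappa:.2f}")
```

Here 75% raw agreement corrects down to a kappa of about 0.53, moderate at best, which is why rater training and bias-reduction procedures matter.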
Sturgis, Chris – International Association for K-12 Online Learning, 2014
This paper is part of a series investigating the implementation of competency education. Its purpose is to explore how districts and schools can redesign grading systems to best help students excel in academics and gain the skills needed to be successful in college, the community, and the workplace. In order to make the…
Descriptors: Grading, Competency Based Education, Evaluation Methods, Evaluation Research
Isenberg, Eric; Hock, Heinrich – Mathematica Policy Research, Inc., 2011
This report presents the value-added models that will be used to measure school and teacher effectiveness in the District of Columbia Public Schools (DCPS) in the 2010-2011 school year. It updates the earlier technical report, "Measuring Value Added for IMPACT and TEAM in DC Public Schools." The earlier report described the methods used…
Descriptors: Public Schools, Teacher Effectiveness, School Effectiveness, Models
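At their core, value-added models of the kind the report documents are covariate-adjusted regressions. The sketch below is a deliberately minimal stand-in, using simulated data, a single prior-score covariate, and teacher-mean residuals; it is not the IMPACT/DCPS specification.

```python
# Minimal value-added sketch: regress current score on prior score, then
# average residuals by teacher. Simulated data; NOT the IMPACT/DCPS model.
import numpy as np

rng = np.random.default_rng(1)
n_students, n_teachers = 2_000, 50

teacher = rng.integers(n_teachers, size=n_students)
teacher_effect = rng.normal(scale=0.3, size=n_teachers)
prior = rng.normal(size=n_students)
current = 0.7 * prior + teacher_effect[teacher] + rng.normal(scale=0.5, size=n_students)

# OLS of current on prior (with intercept) via least squares.
X = np.column_stack([np.ones(n_students), prior])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
residual = current - X @ beta

# A teacher's "value added" is the mean residual of their students.
value_added = np.array([residual[teacher == t].mean() for t in range(n_teachers)])
print(f"Correlation with true effects: {np.corrcoef(value_added, teacher_effect)[0, 1]:.2f}")
```

Production models add further covariates, shrinkage toward the mean, and corrections for test measurement error; the report documents those refinements for DCPS.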
Harris, Douglas N. – Policy Analysis for California Education, PACE (NJ3), 2010
In this policy brief, the author examines the problems with attainment measures for evaluating performance at the school level and explores the best uses of value-added measures. These value-added measures, the author writes, are useful for sorting out-of-school influences from school influences or from teacher performance, giving…
Descriptors: Principals, Observation, Teacher Evaluation, Measurement Techniques
Froman, Terry – Research Services, Miami-Dade County Public Schools, 2007
Because 3rd Grade Florida Comprehensive Assessment Test (FCAT) scores have a direct impact on promotion, the results for that grade level are released early by the State. When the FCAT results for 3rd Grade were released in May 2007, many people were troubled. Over 80% of the elementary schools in the Miami-Dade School District showed a decrease…
Descriptors: Reading Achievement, Scoring, Grade 3, Academic Achievement
Karkee, Thakur B.; Wright, Karen R. – Online Submission, 2004
Different item response theory (IRT) models may be employed for item calibration. Change of testing vendors, for example, may result in the adoption of a different model than that previously used with a testing program. To provide scale continuity and preserve cut score integrity, item parameter estimates from the new model must be linked to the…
Descriptors: Measures (Individuals), Evaluation Criteria, Testing, Integrity
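The linking step Karkee and Wright study places new item parameter estimates onto the old scale. One common approach, shown below with invented estimates, is the mean/sigma transformation for 2PL parameters; the paper compares several methods, and this sketch does not reproduce its specific procedure.

```python
# Mean/sigma linking of 2PL item parameters onto an old scale.
# Common-item parameter estimates below are invented for illustration.
import numpy as np

b_old = np.array([-1.2, -0.4, 0.1, 0.8, 1.5])   # difficulties, old calibration
b_new = np.array([-1.0, -0.2, 0.4, 1.1, 1.9])   # same items, new calibration
a_new = np.array([1.1, 0.9, 1.3, 0.8, 1.0])     # discriminations, new calibration

# Linear transform theta_old = A * theta_new + B, chosen so the common items'
# difficulty mean and SD match across the two calibrations.
A = b_old.std(ddof=1) / b_new.std(ddof=1)
B = b_old.mean() - A * b_new.mean()

b_linked = A * b_new + B     # difficulties move onto the old scale
a_linked = a_new / A         # discriminations scale inversely
print(f"A = {A:.3f}, B = {B:.3f}")
```

Characteristic-curve methods such as Stocking-Lord are often preferred in practice because they match whole item response functions rather than just two moments of the difficulty distribution.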
Peer reviewed
Berk, Ronald A. – Review of Educational Research, 1986
Thirty-eight methods are presented for either setting standards or adjusting them based on an analysis of classification error rates. A trilevel classification scheme is used to categorize the methods, and 10 criteria of technical adequacy and practicability are proposed to evaluate them. (Author/LMO)
Descriptors: Criterion Referenced Tests, Cutting Scores, Elementary Secondary Education, Error of Measurement
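One family of methods Berk reviews adjusts a cut score in light of classification error rates. The sketch below scans candidate cut scores on simulated data (not any specific method from the review) and tabulates the false-positive and false-negative rates each one produces.

```python
# Scan candidate cut scores and tabulate classification error rates.
# Simulated scores for illustration; not a specific method from Berk (1986).
import numpy as np

rng = np.random.default_rng(2)
true_score = rng.normal(70, 10, size=5_000)
observed = true_score + rng.normal(0, 5, size=5_000)  # measurement error

true_cut = 65.0
masters = true_score >= true_cut

for cut in (60, 62.5, 65, 67.5, 70):
    passed = observed >= cut
    false_pos = np.mean(passed & ~masters)   # non-masters who pass
    false_neg = np.mean(~passed & masters)   # masters who fail
    print(f"cut={cut:5.1f}  FP={false_pos:.3f}  FN={false_neg:.3f}")
```

Raising the cut trades false positives for false negatives; Berk's technical-adequacy criteria help decide which error is costlier for a given use of the test.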
Brandt, David A. – 1982
This report describes and evaluates the major computer software packages capable of computing standard errors for statistics estimated from complex samples. It first describes the problem and the proposed solutions. The two major programs presently available, SUPER CARP and OSIRIS, are described in general terms. The kinds of statistics available…
Descriptors: Analysis of Variance, Cluster Analysis, Computer Software Reviews, Correlation
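The core computation such packages perform can be sketched directly: in a cluster sample, the variance of a mean is driven by between-cluster variation, not the naive i.i.d. formula. Below is a hedged sketch of the standard "ultimate cluster" estimator for an unweighted mean on simulated data; it is not code from SUPER CARP or OSIRIS.

```python
# Cluster-sample standard error of a mean via the "ultimate cluster" method:
# variance comes from between-cluster variation in residual totals.
# A textbook sketch on simulated data; not code from SUPER CARP or OSIRIS.
import numpy as np

rng = np.random.default_rng(3)
k, m = 40, 20                       # clusters, units per cluster (assumed)
cluster_mean = rng.normal(0, 1, size=k)
y = (cluster_mean[:, None] + rng.normal(0, 1, size=(k, m))).ravel()
cluster = np.repeat(np.arange(k), m)

n = y.size
ybar = y.mean()

# Per-cluster totals of residuals from the overall mean.
z = np.array([np.sum(y[cluster == c] - ybar) for c in range(k)])
var_clustered = k / (k - 1) * np.sum(z**2) / n**2

naive_se = y.std(ddof=1) / np.sqrt(n)
print(f"naive SE: {naive_se:.4f}, cluster SE: {np.sqrt(var_clustered):.4f}")
```

On these simulated data the clustered standard error comes out several times larger than the naive one, which is exactly the understatement those packages were built to prevent.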
Lance, Charles E.; Moomaw, Michael E. – 1983
Direct assessments of the accuracy with which raters can use a rating instrument are presented. This study demonstrated how surplus behavioral incidents scaled during the development of Behaviorally Anchored Rating Scales (BARS) can be used effectively in the evaluation of the newly developed scales. Construction of scenarios of hypothetical…
Descriptors: Behavior Rating Scales, Comparative Analysis, Error of Measurement, Evaluation Criteria