Fazlul, Ishtiaque; Koedel, Cory; Parsons, Eric; Qian, Cheng – National Center for Analysis of Longitudinal Data in Education Research (CALDER), 2021
We evaluate the feasibility of estimating test-score growth with a gap year in testing data, informing the scenario in which state testing resumes after the 2020 COVID-19-induced test stoppage. Our research design is to simulate a gap year in testing using pre-COVID-19 data--when a true test gap did not occur--which facilitates comparisons of…
Descriptors: Scores, Achievement Gains, Computation, Growth Models
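The gap-year design summarized in the entry above can be illustrated with a short sketch. The code below is a hypothetical example under assumed data, not the authors' CALDER analysis: it simulates three years of scores, drops the middle year to mimic a missing test administration, and compares growth estimated across the two-year gap with growth from adjacent years. All column names and parameters are assumptions for illustration.

```python
# Minimal sketch of a simulated testing gap year (illustrative only).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000

# Three consecutive years of vertically scaled scores: persistent student
# ability plus an annual trend and year-specific noise.
ability = rng.normal(0, 1, n)
scores = {year: ability + 0.5 * (year - 2017) + rng.normal(0, 0.5, n)
          for year in (2017, 2018, 2019)}
panel = pd.concat(
    [pd.DataFrame({"student_id": np.arange(n), "year": y, "score": s})
     for y, s in scores.items()]
)
wide = panel.pivot(index="student_id", columns="year", values="score")

# Adjacent-year growth uses 2018 -> 2019; the gap estimate skips 2018 and
# spreads the 2017 -> 2019 change over two years.
adjacent_growth = wide[2019] - wide[2018]
gap_growth = (wide[2019] - wide[2017]) / 2

print("correlation of gap-based and adjacent-year growth:",
      round(np.corrcoef(adjacent_growth, gap_growth)[0, 1], 3))
```

In this toy setup the two growth measures are positively but imperfectly correlated, which is the kind of comparison a gap-year feasibility study would examine with real panel data.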
Pivovarova, Margarita; Amrein-Beardsley, Audrey – Educational Assessment, 2018
While states are no longer required to set up teacher evaluation systems based in significant part on student test scores, quite a few continue to use value-added models (VAMs) or student growth percentile (SGP) models for that purpose. In this study, we analyzed three years of teacher data to illustrate the performance of teachers' median growth…
Descriptors: Growth Models, Teacher Evaluation, Value Added Models, Reliability
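As a rough illustration of the median growth percentile measure examined in this entry, the sketch below computes a simplified SGP by ranking each student's current score within deciles of prior achievement and then summarizing each teacher by the median percentile. This is an assumption-laden stand-in, not the study's model: operational SGP systems use conditional quantile regression rather than decile binning, and all names here are illustrative.

```python
# Simplified student growth percentiles and teacher medians (illustrative).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_students, n_teachers = 2000, 50

df = pd.DataFrame({
    "teacher_id": rng.integers(0, n_teachers, n_students),
    "prior_score": rng.normal(0, 1, n_students),
})
# Current score depends on prior achievement, a small teacher effect, and noise.
teacher_effect = rng.normal(0, 0.1, n_teachers)
df["current_score"] = (0.8 * df["prior_score"]
                       + teacher_effect[df["teacher_id"]]
                       + rng.normal(0, 0.6, n_students))

# SGP approximation: percentile rank of the current score among students with
# similar prior achievement (deciles stand in for conditional quantiles).
df["prior_decile"] = pd.qcut(df["prior_score"], 10, labels=False)
df["sgp"] = df.groupby("prior_decile")["current_score"].rank(pct=True) * 100

median_sgp = df.groupby("teacher_id")["sgp"].median()
print(median_sgp.describe())
```

Repeating this computation on successive years of data and correlating each teacher's median SGP across years is one simple way to gauge the year-to-year reliability questions the study raises.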
Close, Kevin; Amrein-Beardsley, Audrey; Collins, Clarin – Phi Delta Kappan, 2019
In 2015, the federal government adopted the Every Student Succeeds Act, which retracted prior federal control over states' teacher evaluation systems, permitting more local control. Kevin Close, Audrey Amrein-Beardsley, and Clarin Collins collected information from states to determine the degree to which states were…
Descriptors: Teacher Evaluation, Elementary Secondary Education, Educational Legislation, Federal Legislation
Ehlert, Mark; Koedel, Cory; Parsons, Eric; Podgursky, Michael – Educational Policy, 2016
The specifics of how growth models should be constructed and used for educational evaluation are a topic of lively policy debate in states and school districts nationwide. In this article, we take up the question of model choice--framed within a policy context--and examine three competing approaches. The first approach, reflected in the popular…
Descriptors: Growth Models, Value Added Models, Educational Assessment, Elementary Secondary Education
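To make the model-choice contrast concrete, the sketch below estimates school effects two ways on simulated data: a one-step regression that includes prior achievement and school indicators jointly, and a two-step approach that first residualizes scores on prior achievement and then averages residuals by school. This is a simplified illustration under assumed data, not the article's specifications, and the numbers are arbitrary.

```python
# One-step vs. two-step school-effect estimates (illustrative assumptions).
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n_students, n_schools = 5000, 100

school = rng.integers(0, n_schools, n_students)
prior = rng.normal(0, 1, n_students)
school_effect = rng.normal(0, 0.15, n_schools)
current = 0.7 * prior + school_effect[school] + rng.normal(0, 0.6, n_students)

# One-step: regress current score on prior score and school dummies jointly.
dummies = np.eye(n_schools)[school]
X = np.column_stack([prior, dummies])
coef, *_ = np.linalg.lstsq(X, current, rcond=None)
one_step = coef[1:]  # school coefficients (intercept absorbed into dummies)

# Two-step: residualize on prior score first, then average residuals by school.
slope, intercept = np.polyfit(prior, current, 1)
residuals = current - (slope * prior + intercept)
two_step = pd.Series(residuals).groupby(school).mean().to_numpy()

print("correlation of one-step and two-step school estimates:",
      round(np.corrcoef(one_step, two_step)[0, 1], 3))
```

In practice the policy debate turns less on this mechanical difference than on which student and school characteristics each approach conditions on, which is the framing the article takes up.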
Patrick, Drew – AASA Journal of Scholarship & Practice, 2016
New York State has used the Growth Model for Educator Evaluation ratings since the 2011-2012 school year. Since that time, student growth percentiles have been used as the basis for teacher and principal ratings. While a great deal has been written about the use of student test scores to measure educator effectiveness, less attention has been…
Descriptors: Teacher Evaluation, Teacher Effectiveness, Value Added Models, Multivariate Analysis
Schulte, Ann C.; Stevens, Joseph J.; Nese, Joseph F. T.; Yel, Nedim; Tindal, Gerald; Elliott, Stephen N. – National Center on Assessment and Accountability for Special Education, 2018
This technical report is one of a series of four technical reports that describe the results of a study comparing eight alternative models for estimating school academic achievement using data from the Arizona, North Carolina, Oregon, and Pennsylvania accountability systems. The purpose of these reports was to evaluate a broad range of models…
Descriptors: School Effectiveness, Models, Computation, Comparative Analysis
Nese, Joseph F. T.; Stevens, Joseph J.; Schulte, Ann C.; Tindal, Gerald; Yel, Nedim; Anderson, Daniel; Matta, Tyler; Elliott, Stephen N. – National Center on Assessment and Accountability for Special Education, 2018
This technical report is one of a series of four technical reports that describe the results of a study comparing eight alternative models for estimating school academic achievement using data from the Arizona, North Carolina, Oregon, and Pennsylvania accountability systems. The purpose of these reports was to evaluate a broad range of models…
Descriptors: School Effectiveness, Models, Computation, Comparative Analysis
Stevens, Joseph J.; Nese, Joseph F. T.; Schulte, Ann C.; Tindal, Gerald; Yel, Nedim; Anderson, Daniel; Matta, Tyler; Elliott, Stephen N. – National Center on Assessment and Accountability for Special Education, 2017
This technical report is one of a series of four technical reports that describe the results of a study comparing eight alternative models for estimating school academic achievement using data from the Arizona, North Carolina, Oregon, and Pennsylvania accountability systems. The purpose of these reports was to evaluate a broad range of models…
Descriptors: School Effectiveness, Models, Computation, Comparative Analysis
Schulte, Ann C.; Nese, Joseph F. T.; Stevens, Joseph J.; Yel, Nedim; Tindal, Gerald; Anderson, Daniel; Elliott, Stephen N. – National Center on Assessment and Accountability for Special Education, 2017
This technical report is one of a series of four technical reports that describe the results of a study comparing eight alternative models for estimating school academic achievement using data from the Arizona, North Carolina, Oregon, and Pennsylvania accountability systems. The purpose of these reports was to evaluate a broad range of models…
Descriptors: School Effectiveness, Models, Computation, Comparative Analysis
Braun, Henry; Qu, Yanxuan – ETS Research Report Series, 2008
This paper reports on a study conducted to investigate the consistency of the results between 2 approaches to estimating school effectiveness through value-added modeling. Estimates of school effects from the layered model employing item response theory (IRT) scaled data are compared to estimates derived from a discrete growth model based on the…
Descriptors: Value Added Models, School Effectiveness, Robustness (Statistics), Computation
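The kind of consistency check this report describes can be sketched as follows. The code below uses hypothetical school-effect estimates, not the ETS data or models: it takes two sets of estimates (stand-ins for the layered-model and discrete growth-model results), and reports their rank-order agreement and how often schools land in the same quintile under both.

```python
# Consistency of two sets of school-effect estimates (hypothetical data).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_schools = 200

# Shared signal plus model-specific noise stands in for two estimation approaches.
signal = rng.normal(0, 1, n_schools)
est_a = signal + rng.normal(0, 0.4, n_schools)
est_b = signal + rng.normal(0, 0.4, n_schools)

rho, _ = spearmanr(est_a, est_b)
quintile_a = np.digitize(est_a, np.quantile(est_a, [0.2, 0.4, 0.6, 0.8]))
quintile_b = np.digitize(est_b, np.quantile(est_b, [0.2, 0.4, 0.6, 0.8]))

print(f"Spearman rank correlation: {rho:.3f}")
print(f"share of schools in the same quintile: {(quintile_a == quintile_b).mean():.2%}")
```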