Showing 1 to 15 of 24 results
Peer reviewed
Maxwell, Bronwen; Stevens, Anna; Demack, Sean; Coldwell, Mike; Wolstenholme, Claire; Reaney-Wood, Sarah; Stiell, Bernadette; Lortie-Forgues, Hugues – Education Endowment Foundation, 2021
The Education Endowment Foundation (EEF)'s mission is to break the link between family income and educational achievement. This is achieved through summarising the best available evidence in plain language, generating new evidence of 'what works' to improve teaching and learning, and supporting teachers and school leaders to use research evidence…
Descriptors: Foreign Countries, Disadvantaged Youth, Elementary School Students, Secondary School Students
Peer reviewed
McChesney, Katrina; Aldridge, Jill M. – Teacher Development, 2018
Schools and education systems are being challenged to improve the evaluation of teacher professional development, yet there is a lack of practical tools for doing so. This article describes the development and validation of a new instrument to assess teachers' perceptions of the impact of professional development. This instrument, designed to be…
Descriptors: Faculty Development, Program Evaluation, Professional Training, Questionnaires
Peer reviewed
Stohlman, Trey – Journal of the Scholarship of Teaching and Learning, 2015
A good assessment plan combines many direct and indirect measures to validate the collected data. One often controversial assessment measure comes in the form of retention exams. Although assessment retention exams may come with faults, others advocate for their inclusion in program assessment. Objective-based tests may offer insight to…
Descriptors: Alternative Assessment, Retention (Psychology), Program Evaluation, Program Effectiveness
Peer reviewed
Boeije, Hennie; Slagt, Meike; van Wesel, Floryt – Journal of Mixed Methods Research, 2013
In mixed methods research (MMR), integrating the quantitative and the qualitative components of a study is assumed to result in additional knowledge (or "yield"). This narrative review examines the extent to which MMR is used in the field of childhood trauma and provides directions for improving mixed methods studies in this field. A…
Descriptors: Mixed Methods Research, Research Methodology, Trauma, Literature Reviews
Peer reviewed
Durden, Tonia R.; Mincemoyer, Claudia C.; Gerdes, Jennifer; Lodl, Kathleen – Journal of Extension, 2013
In recent years much attention has focused on the role of enhancing a teacher's professional knowledge and skills in helping to improve the quality of early care experiences for young children from birth to age 5. In the study reported here, an environmental scan of the early childhood professional development programs offered within the Extension system…
Descriptors: Extension Education, Faculty Development, Early Childhood Education, Educational Quality
Chavez, Oscar; Papick, Ira; Ross, Dan J.; Grouws, Douglas A. – Online Submission, 2010
The purpose of this paper was to describe the process of development of assessment instruments for the Comparing Options in Secondary Mathematics: Investigating Curriculum (COSMIC) project. The COSMIC project was a three-year longitudinal comparative study focusing on evaluating high school students' mathematics learning from two distinct…
Descriptors: Mathematics Education, Mathematics Achievement, Interrater Reliability, Scoring Rubrics
Fitz-Gibbon, Carol Taylor; Morris, Lynn Lyons – 1987
The "CSE Program Evaluation Kit" is a series of nine books intended to assist people conducting program evaluations. This volume, the eighth in the kit, is divided into three sections, each dealing with an important function that quantitative analysis serves in evaluation: summarizing scores through measures of central tendency and…
Descriptors: Data Analysis, Evaluation Methods, Evaluation Problems, Evaluation Utilization
Dean, Linda M. – 1997
This paper proposes a model to establish which criteria are considered by stakeholders as valid for evaluating a program. The model is developed with the aim of increasing the credibility and use of evaluations. Stakeholders are involved in the identification of potential evaluation criteria, and ratings of validity and priority are used as the…
Descriptors: Criteria, Data Analysis, Data Collection, Evaluation Methods
Peer reviewed
Brickell, Henry M.; Wong, Susan – Journal of Research and Development in Education, 1974
The Ohio Career Development Program was evaluated and students were tested in order to carry out that evaluation. (RK)
Descriptors: Career Education, Data Analysis, Educational Development, Educational Objectives
Johanson, George A.; Doston, Glenn – 1994
Analyses of questionnaire data from a program evaluation indicate that the two dichotomous items "Would you recommend this to a friend?" and "Would you choose to do this again?" are not as interchangeable as might be expected from the survey literature. As part of the evaluation of a university program, a survey of graduates…
Descriptors: College Graduates, Data Analysis, Graduate Surveys, Higher Education
Ellner, Carolyn L. – 1972
A training program carried out by the Center for Early Education (CEE) to prepare, or upgrade the performance of, 20 day-care administrators in Los Angeles County is discussed in terms of the program, its evaluation, and the findings. The program, consisting of two three-week workshops and six interim seminars, was designed to achieve 12 goals relating to child…
Descriptors: Administrators, Child Care, Data Analysis, Day Care
Schafer, William D. – 2000
The Department of Measurement, Statistics, and Evaluation (EDMS) at the University of Maryland is working to develop Master's degree programs that are oriented around developing assessment professionals for work in applied settings. Two fundamentally different sets of experiences are being developed: (1) assessment development, administration, and…
Descriptors: Data Analysis, Educational Assessment, Educational Testing, Evaluation Methods
Burstein, Leigh; Miller, M. David – 1981
The instructional and program relevance of measures of student performance, and the use of standardized achievement tests as measures of the quality of a student's educational experiences, are discussed. This focus is consonant with current emphasis on linking testing and instruction and on systemic efforts at program and instructional improvement…
Descriptors: Content Analysis, Data Analysis, Elementary Education, Grouping (Instructional Purposes)
Peer reviewed
Killion, Joellen – Journal of Staff Development, 2003
Extensive practice and research in program evaluation have led to this eight-step process that a professional developer can use to determine a program's effectiveness and strengthen the program as it evolves. An unbiased, systematic process of evaluation can help justify putting resources into professional learning and make it easier to determine…
Descriptors: Program Evaluation, Faculty Development, Program Effectiveness, Program Improvement
Secolsky, Charles, Ed.; Denison, D. Brian, Ed. – Routledge, Taylor & Francis Group, 2011
Increased demands for colleges and universities to engage in outcomes assessment for accountability purposes have accelerated the need to bridge the gap between higher education practice and the fields of measurement, assessment, and evaluation. The "Handbook on Measurement, Assessment, and Evaluation in Higher Education" provides higher…
Descriptors: Generalizability Theory, Higher Education, Institutional Advancement, Teacher Effectiveness
Pages: 1  |  2