Hipólito-Nájera, A. Ricardo; Moya-Hernandez, M. Rosario; Gomez-Balderas, Rodolfo; Rojas-Hernandez, Alberto; Romero-Romo, Mario – Journal of Chemical Education, 2017
Validation of analytical methods is a fundamental subject for chemical analysts working in chemical industries. These methods are also relevant for pharmaceutical enterprises, biotechnology firms, analytical service laboratories, government departments, and regulatory agencies. Therefore, for undergraduate students enrolled in majors in the field…
Descriptors: Program Validation, Evaluation Methods, Chemistry, Laboratory Experiments
What Works Clearinghouse, 2017
The What Works Clearinghouse (WWC) evaluates research studies that look at the effectiveness of education programs, products, practices, and policies, which the WWC calls "interventions." Many studies of education interventions make claims about impacts on students' outcomes. Some studies have designs that enable readers to make causal…
Descriptors: Program Design, Program Development, Program Effectiveness, Program Evaluation
Richer, Amanda; Charmaraman, Linda; Ceder, Ineke – Afterschool Matters, 2018
Like instruments used in afterschool programs to assess children's social and emotional growth or to evaluate staff members' performance, instruments used to evaluate program quality should be free from bias. Practitioners and researchers alike want to know that assessment instruments, whatever their type or intent, treat all people fairly and do…
Descriptors: Cultural Differences, Social Bias, Interrater Reliability, Program Evaluation
Gargani, John; Strong, Michael – Journal of Teacher Education, 2015
In Gargani and Strong (2014), we describe The Rapid Assessment of Teacher Effectiveness (RATE), a new teacher evaluation instrument. Our account of the validation research associated with RATE inspired a review by Good and Lavigne (2015). Here, we reply to the main points of their review. We elaborate on the validity, reliability, theoretical…
Descriptors: Evidence, Teacher Effectiveness, Teacher Evaluation, Evaluation Methods
Landry, Jeffrey P.; Pardue, J. Harold; Daigle, Roy; Longenecker, Herbert E., Jr. – Information Systems Education Journal, 2013
This paper describes an instrument designed for assessing learning outcomes in data management. In addition to assessment of student learning and ABET outcomes, we have also found the instrument to be effective for determining database placement of incoming information systems (IS) graduate students. Each of these three uses is discussed in this…
Descriptors: Database Management Systems, Evaluation Methods, Graduate Students, Information Systems
Gormally, Cara; Brickman, Peggy; Lutz, Mary – CBE - Life Sciences Education, 2012
Life sciences faculty agree that developing scientific literacy is an integral part of undergraduate education and report that they teach these skills. However, few measures of scientific literacy are available to assess students' proficiency in using scientific literacy skills to solve scenarios in and beyond the undergraduate biology classroom.…
Descriptors: Testing, Biology, Undergraduate Study, Educational Change
Dunlap, Laurie A. – MathAMATYC Educator, 2012
This article describes how to design program assessment for mathematics departments, in two-year and four-year colleges across the Midwest, based on a set of components that was generated from a Delphi survey. An example is provided to illustrate how this was done at a small four-year college. There is an alignment between these components and a…
Descriptors: Mathematics Instruction, Program Evaluation, Program Design, Research Design
Coryn, Chris L. S. – Journal of MultiDisciplinary Evaluation, 2007
The author discusses validation hierarchies grounded in the tradition of quantitative research that generally consists of the criteria of validity, reliability and objectivity and compares this with similar criteria developed by the qualitative tradition, described as trustworthiness, dependability and confirmability. Although these quantitative…
Descriptors: Research Methodology, Statistical Analysis, Value Judgment, Qualitative Research
Febey, Karen; Coyne, Molly – American Journal of Evaluation, 2007
The field of program evaluation lacks interactive teaching tools. To address this pedagogical issue, the authors developed a collaborative learning technique called Program Evaluation: The Board Game. The authors present the game and its development in this practitioner-oriented article. The evaluation board game is an adaptable teaching tool…
Descriptors: Teaching Methods, Program Evaluation, Evaluation Methods, Cooperative Learning
Li, Jianghong; D'Angiulli, Amedeo; Kendall, Garth E. – Early Years: An International Journal of Research and Development, 2007
The Early Development Index (EDI) is a teacher-completed checklist, intended to be a population-level tool to measure children's readiness for school and to alert communities to potential developmental problems in children. In response to the increasing popularity of the EDI, this paper provides a critical and timely evaluation and identifies the…
Descriptors: Check Lists, Developmental Disabilities, Psychometrics, School Readiness
Southwest Educational Development Lab., Austin, TX. – 1979
This report documents the proceedings of a conference on the validation of exemplary educational programs, products, and practices presented by the Regional Exchange of the Southwest Educational Development Laboratory (SEDL/RX). It is different from other conference syntheses, as it includes the process for creating the conference as well as the…
Descriptors: Conferences, Evaluation Methods, Guides, Program Validation
Pass, Kenneth; Green, Nancy S.; Lorey, Fred; Sherwin, John; Comeau, Anne Marie – Mental Retardation and Developmental Disabilities Research Reviews, 2006
The term "pilot study" has been used over the years to describe the evaluation of the many elements involved in deciding whether a proposed condition should be added to a newborn screening (NBS) panel, and until recently, was uniformly used to describe the evaluation of the assay to be used before the condition was officially adopted by a state…
Descriptors: Pilot Projects, Neonates, Screening Tests, Infant Care
Timmins, Paul; Bham, Mohammed; McFadyen, Jane; Ward, Joanna – Educational Psychology in Practice, 2006
In this article we describe how the RADIO process enabled EPiTs to negotiate research with an EPS around its desire to evaluate and develop its consultation work with schools. Findings of the evaluation and their implications for the Service are described and the potential of RADIO as a tool for providing external research support from HEIs for…
Descriptors: Research and Development, Evaluation Methods, Educational Psychology, Action Research
Miller, Allen H. – Studies in Higher Education, 1984
Possible reasons for conducting evaluations of college and university courses are outlined, and the evaluation system developed at the Australian National University is described. Sources of information for teachers to use in making course improvements are suggested. (MSE)
Descriptors: Course Evaluation, Curriculum Development, Data Collection, Evaluation Methods
Stapleton, Paul; Helms-Park, Rena – English for Specific Purposes, 2006
This paper introduces the Website Acceptability Tiered Checklist (WATCH), a preliminary version of a multi-trait scale that could be used by instructors and students to assess the quality of websites chosen as source materials in students' research papers in a Humanities program. The scale includes bands for assessing: (i) the authority and…
Descriptors: Information Sources, Web Sites, English for Academic Purposes, Check Lists