Showing 1 to 15 of 16 results
Peer reviewed
Hagans, Kristi S.; Powers, Kristin – Action in Teacher Education, 2015
The Council for the Accreditation of Educator Preparation (CAEP) requires faculty from educator preparation programs to provide evidence of credential candidates' impact on K-12 student learning. However, there is a paucity of information on preparation programs' use of direct assessments of student learning to gauge credential candidate…
Descriptors: Credentials, Academic Achievement, Teacher Effectiveness, Teacher Certification
Peer reviewed
Stufflebeam, Daniel L. – Journal of MultiDisciplinary Evaluation, 2011
Good evaluation requires that evaluation efforts themselves be evaluated. Many things can and often do go wrong in evaluation work. Accordingly, it is necessary to check evaluations for problems such as bias, technical error, administrative difficulties, and misuse. Such checks are needed both to improve ongoing evaluation activities and to assess…
Descriptors: Program Evaluation, Evaluation Criteria, Evaluation Methods, Definitions
Peer reviewed
Stockard, Jean – Current Issues in Education, 2010
A large body of literature documents the central importance of fidelity of program implementation in creating an internally valid research design and considering such fidelity in judgments of research quality. The What Works Clearinghouse (WWC) provides web-based summary ratings of educational innovations and is the only rating group that is…
Descriptors: Research Design, Educational Innovation, Program Implementation, Program Effectiveness
Peer reviewed
Harmon, Oskar R.; Lambrinos, James; Buffolino, Judy – Online Journal of Distance Learning Administration, 2010
Many consider online courses to be an inferior alternative to traditional face-to-face (f2f) courses because exam cheating is thought to occur more often in online courses. This study examines how the assessment design in online courses contributes to this perception. Following a literature review, the assessment design in a sample of online…
Descriptors: Electronic Learning, Student Attitudes, Cheating, Online Courses
Peer reviewed
Sridharan, Sanjeev – American Journal of Evaluation, 2008
This article describes the design and evaluation approaches to address the complexity posed by systems change initiatives. The role of evaluations in addressing the following issues is briefly reviewed: moving from strategic planning to implementation, impacts on system-level coordination, anticipated timeline of impact, and individual level…
Descriptors: Strategic Planning, Case Studies, Reader Response, Evaluation Methods
Peer reviewed
House, Ernest R. – American Journal of Evaluation, 2008
Drug studies are often cited as the best exemplars of evaluation design. However, many of these studies are seriously biased in favor of positive findings for the drugs evaluated, even to the point where dangerous effects are hidden. In spite of using randomized designs and double blinding, drug companies have found ways of producing the results…
Descriptors: Integrity, Evaluation Methods, Program Evaluation, Experimenter Characteristics
Peer reviewed
Swain, Jon; Brown, Margaret; Coben, Diana; Rhodes, Valerie; Ananiadou, Katerina; Brown, Peter – Research in Post-Compulsory Education, 2008
This paper describes the process of designing and administering a sufficiently valid and reliable assessment instrument to measure the progress in attainment of adult learners studying numeracy, and discusses some of the inherent difficulties that were involved. The fieldwork took place during 2003-2005 and involved a sample of 34 teachers and 412…
Descriptors: Numeracy, Adult Basic Education, Adult Learning, Psychometrics
Xu, Zeyu; Nichols, Austin – National Center for Analysis of Longitudinal Data in Education Research, 2010
The gold standard in making causal inference on program effects is a randomized trial. Most randomization designs in education randomize classrooms or schools rather than individual students. Such "clustered randomization" designs have one principal drawback: They tend to have limited statistical power or precision. This study aims to…
Descriptors: Test Format, Reading Tests, Norm Referenced Tests, Research Design
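
The "limited statistical power" drawback noted in the Xu and Nichols abstract follows from the design effect of clustered randomization. The sketch below is not drawn from their report; it uses the standard design-effect formula with purely illustrative numbers (cluster size, intraclass correlation) to show how randomizing schools rather than students shrinks the effective sample size.

```python
# Minimal illustration of why clustered randomization limits precision:
# the design effect inflates the variance of the estimated treatment
# effect relative to randomizing individual students.

def design_effect(cluster_size: float, icc: float) -> float:
    """Design effect DEFF = 1 + (m - 1) * ICC for equal-sized clusters."""
    return 1.0 + (cluster_size - 1.0) * icc

def effective_sample_size(n_students: int, cluster_size: float, icc: float) -> float:
    """Number of independently randomized students giving the same precision."""
    return n_students / design_effect(cluster_size, icc)

# Illustrative numbers only: 40 schools of 50 students each, ICC = 0.15.
n = 40 * 50
deff = design_effect(cluster_size=50, icc=0.15)
print(f"Design effect: {deff:.2f}")                                   # 8.35
print(f"Effective sample size: {effective_sample_size(n, 50, 0.15):.0f} of {n}")  # ~240 of 2000
```

Under these assumed values, 2,000 students randomized in 40 school-level clusters yield roughly the precision of only about 240 independently randomized students, which is why such designs often need many clusters or covariate adjustment to detect modest effects.
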
Peer reviewed
Shi, Yan; Tsang, Mun C. – Educational Research Review, 2008
This is a critical review of methodological issues in the evaluation of adult literacy education programs in the United States. It addresses the key research question: What are the appropriate methods for evaluating these programs under given circumstances? It identifies 15 evaluation studies that are representative of a range of adult literacy…
Descriptors: Program Effectiveness, Adult Literacy, Adult Education, Educational Research
Peer reviewed
Bamberger, Michael; White, Howard – Journal of MultiDisciplinary Evaluation, 2007
The purpose of this article is to extend the discussion of issues currently being debated on the need for more rigorous program evaluation in educational and other sectors of research, to the field of international development evaluation, reviewing the different approaches which can be adopted to rigorous evaluation methodology and their…
Descriptors: Program Evaluation, Evaluation Methods, Evaluation Research, Convergent Thinking
Peer reviewed
Kogan, Jennifer R.; Shea, Judy A. – Teaching and Teacher Education: An International Journal of Research and Studies, 2007
Course evaluation is integral to medical education. We discuss (1) distinctive features of medical education that impact on course evaluation, (2) a framework for course evaluations, (3) details that shape the evaluation process, (4) key measurement issues important to data gathering and interpretation, and (5) opportunities for expanding the…
Descriptors: Course Evaluation, Medical Education, Fundamental Concepts, Research Design
Peer reviewed
Robottom, Ian – Journal of Research in Science Teaching, 1989
Discussed is the issue of the appropriateness of applied science approaches to evaluation in environmental education. The relationships between characteristics of applied science approaches to evaluation and the special characteristics of environmental education are explored. (Author/YP)
Descriptors: Environmental Education, Evaluation, Evaluation Criteria, Evaluation Problems
Glazerman, Steven; Myers, David – Mathematica Policy Research, Inc., 2004
In October 2002, the Institute of Education Sciences (IES) contracted with Mathematica Policy Research, Inc. (MPR) to help identify issues pertinent to the evaluation of Title I and to propose feasible evaluation design strategies. This design effort took its lead from two sources: (1) the Title I Independent Review Panel (IRP); and (2) a more…
Descriptors: Research Design, Educational Change, Evaluation Methods, Intervention
Department of Education, Washington, DC. – 1997
This volume brings together the reports and discussion connected with a roundtable convened by the Departments of Education and Labor and the National School-to-Work Office to discuss issues surrounding the conduct of a net impact evaluation of school-to-work. After an introduction that summarizes the approach and intentions of the School-to-Work…
Descriptors: Education Work Relationship, Educational Research, Evaluation Problems, Evaluation Research
Peer reviewed
Full text PDF available on ERIC
Strachota, Elaine M.; Conceicao, Simone C. O.; Schmidt, Steven W. – New Horizons in Adult Education & Human Resource Development, 2006
This article describes the use of a schematic model for developing and distributing online surveys. Two empirical studies that developed and implemented online surveys to collect data to measure satisfaction in various aspects of human resource development and adult education exemplify the use of the model to conduct online survey research. The…
Descriptors: Adult Education, Research Methodology, Human Resources, Online Surveys