Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 2 |
Since 2006 (last 20 years) | 7 |
Descriptor
Credibility | 16 |
Evaluation Methods | 16 |
Validity | 16 |
Reliability | 8 |
Research Methodology | 6 |
Elementary Secondary Education | 5 |
Models | 5 |
Program Evaluation | 5 |
Educational Assessment | 4 |
Educational Research | 4 |
Data Analysis | 3 |
Author
Scriven, Michael | 2 |
Ahn, Soyeon | 1 |
Ahn, Unhai R. | 1 |
Ames, Allison J. | 1 |
Coryn, Chris L. S. | 1 |
Denton, William T. | 1 |
Eastmond, Nick | 1 |
Guba, Egon G. | 1 |
Hartmann, David J. | 1 |
Hattie, John A. | 1 |
Hipps, Jerome A. | 1 |
Publication Type
Journal Articles | 7 |
Reports - Evaluative | 7 |
Reports - Research | 6 |
Opinion Papers | 2 |
Speeches/Meeting Papers | 2 |
Dissertations/Theses -… | 1 |
Information Analyses | 1 |
Reports - Descriptive | 1 |
Tests/Questionnaires | 1 |
Education Level
Adult Education | 1 |
Elementary Education | 1 |
Elementary Secondary Education | 1 |
Grade 4 | 1 |
Higher Education | 1 |
Postsecondary Education | 1 |
Secondary Education | 1 |
Audience
Researchers | 1 |
Assessments and Surveys
Program for International… | 1 |
Progress in International… | 1 |
Trends in International… | 1 |
Marc T. Braverman – Journal of Human Sciences & Extension, 2019
This article examines the concept of credible evidence in Extension evaluations with specific attention to the measures and measurement strategies used to collect and create data. Credibility depends on multiple factors, including data quality and methodological rigor, characteristics of the stakeholder audience, stakeholder beliefs about the…
Descriptors: Extension Education, Program Evaluation, Evaluation Methods, Planning
Hitchcock, John H.; Johanson, George A. – Research in the Schools, 2015
Understanding the reason(s) for Differential Item Functioning (DIF) in the context of measurement is difficult. Although identifying potential DIF items is typically a statistical endeavor, understanding the reasons for DIF (and item repair or replacement) might require investigations that can be informed by qualitative work. Such work is…
Descriptors: Mixed Methods Research, Test Items, Item Analysis, Measurement
Rutkowski, David – Assessment in Education: Principles, Policy & Practice, 2018
In this article I advocate for a new discussion in the field of international large-scale assessments, one that calls for a reexamination of international large-scale assessments (ILSAs) and their use. Expanding on the high-quality work in this special issue, I focus on three inherent limitations to international large-scale assessments noted by…
Descriptors: Grade 4, Foreign Countries, Achievement Tests, Reading Achievement
Orem, Chris D. – ProQuest LLC, 2012
Meta-assessment, or the assessment of assessment, can provide meaningful information about the trustworthiness of an academic program's assessment results (Bresciani, Gardner, & Hickmott, 2009; Palomba & Banta, 1999; Suskie, 2009). Many institutions conduct meta-assessments for their academic programs (Fulcher, Swain, & Orem, 2012),…
Descriptors: Validity, Evidence, Evaluation Methods, Meta Analysis
Sandoval, William A.; Sodian, Beate; Koerber, Susanne; Wong, Jacqueline – Educational Psychologist, 2014
Science educators have long been concerned with how formal schooling contributes to learners' capacities to engage with science after school. This article frames productive engagement as fundamentally about the coordination of claims with evidence, but such coordination requires a number of reasoning capabilities to evaluate the strength of…
Descriptors: Science Teachers, Science Instruction, Science Process Skills, Competence
Ahn, Soyeon; Ames, Allison J.; Myers, Nicholas D. – Review of Educational Research, 2012
The current review addresses the validity of published meta-analyses in education, which determines the credibility and generalizability of study findings, using a total of 56 meta-analyses published in education in the 2000s. Our objectives were to evaluate the current meta-analytic practices in education, identify methodological strengths and…
Descriptors: Inferences, Meta Analysis, Educational Practices, Research Methodology
Coryn, Chris L. S.; Hattie, John A.; Scriven, Michael; Hartmann, David J. – American Journal of Evaluation, 2007
This research describes, classifies, and comparatively evaluates national models and mechanisms used to evaluate research and allocate research funding in 16 countries. Although these models and mechanisms vary widely in terms of how research is evaluated and financed, nearly all share the common characteristic of relating funding to some measure…
Descriptors: Ethics, Evaluation Methods, Comparative Analysis, Resource Allocation
McNamara, James F.; McNamara, Maryanne – International Journal of Educational Reform, 1999
Stresses two essential characteristics that principals must keep in mind when constructing measures that yield accurate and relevant evaluation data. When constructing quantitative measures, validity and reliability are the most important considerations; when designing qualitative measures, credibility and dependability are the most important. (14…
Descriptors: Credibility, Elementary Secondary Education, Evaluation Methods, Measurement Techniques
Johnston, J. Howard – 1983
The relevance of ethnographic research to evaluation of educational programs is discussed. The author focuses on evaluation questions that deal with finding out what is happening in a program and if what is happening is desirable. Ethnography suits the purposes of evaluators who answer such questions because they want to know how something works…
Descriptors: Credibility, Elementary Secondary Education, Ethnography, Evaluation Methods
Scriven, Michael – 1975
Selected aspects of the problem of obtaining unbiased program or product evaluation are discussed. An evaluator who is a member of the project staff will have difficulty producing an evaluation which is credible and valid. Project monitors will also have a problem since they are often required to assume the conflicting roles of external evaluator…
Descriptors: Administrative Problems, Bias, Credibility, Evaluation Methods
Guba, Egon G. – 1978
Evaluation is viewed as essential to decision making and social policy development, but conventional methods have often proved disappointing or inadequate. Naturalistic inquiry (N/I) differs from conventional science in minimizing constraints on antecedent conditions (controls) and on output (dependent variables). N/I is phenomenological rather than…
Descriptors: Credibility, Educational Assessment, Evaluation Criteria, Evaluation Methods
Denton, William T.; Murray, Wayne R. – 1975
This paper describes six existing evaluator-auditor working formats and the conditions which foster credibility of evaluation findings. Evaluators were classified as: (1) member of project developmental team, accountable to project director; (2) independent internal evaluator, accountable to system in general but not to project directors, and (3)…
Descriptors: Accountability, Credibility, Data Analysis, Educational Research
Hipps, Jerome A. – 1993
New methods are needed to judge the quality of alternative student assessment, methods which complement the philosophy underlying authentic assessments. This paper examines assumptions underlying validity, reliability, and objectivity, and why they are not matched to authentic assessment, concentrating on the constructivist paradigm of E. Guba and…
Descriptors: Alternative Assessment, Constructivism (Learning), Credibility, Educational Assessment
Lincoln, Yvonna S. – 1986
This paper presents criteria for establishing the trustworthiness of naturalistic inquiries, and specific techniques to facilitate their achievement or determine the degree of their achievement. The following criteria are briefly described: fairness; and ontological, educative, catalytic and tactical authenticity. Explored in greater detail,…
Descriptors: Credibility, Data Collection, Educational Research, Epistemology
Eastmond, Nick; Wood, R. Kent – 1993
Definitions and short explanations of key concepts in the field of instructional evaluation are presented. Evaluation is defined as the process of determining the value of programs, projects, materials, and personnel. It is distinguished from research in that the aims of research are less time- and situation-specific, attempting to uncover…
Descriptors: Credibility, Curriculum Evaluation, Definitions, Educational Assessment