Publication Date
In 2025 | 0 |
Since 2024 | 1 |
Since 2021 (last 5 years) | 1 |
Since 2016 (last 10 years) | 4 |
Since 2006 (last 20 years) | 9 |
Descriptor
Credibility | 16 |
Evaluation Methods | 16 |
Evaluation Criteria | 5 |
Data Collection | 4 |
Program Evaluation | 4 |
Evidence | 3 |
Research Methodology | 3 |
Academic Achievement | 2 |
College Instruction | 2 |
Critical Thinking | 2 |
Decision Making | 2 |
Author
Teeter, Allison M. | 1 |
Argyrous, George | 1 |
Bowman, Ruth A. | 1 |
Bryson, John M. | 1 |
Duke, Nell K. | 1 |
Gwynn, Eugenia P. | 1 |
Ewbank, Ann Dutton | 1 |
Felix, Joseph L. | 1 |
Goldhaber, Dan | 1 |
Gray, Jonathan M. | 1 |
Harris, Douglas N. | 1 |
Publication Type
Reports - Descriptive | 16 |
Journal Articles | 15 |
Opinion Papers | 1 |
Education Level
Higher Education | 2 |
Postsecondary Education | 2 |
Elementary Secondary Education | 1 |
Secondary Education | 1 |
Audience
Teachers | 1 |
Location
Australia | 1 |
Ohio (Cincinnati) | 1 |
Laws, Policies, & Programs
Elementary and Secondary… | 1 |
Stephen Gorard – Review of Education, 2024
This paper describes, and lays out an argument for, the use of a procedure to help groups of reviewers to judge the quality of prior research reports. It argues why such a procedure is needed, and how other existing approaches are only relevant to some kinds of research, meaning that a review or synthesis cannot successfully combine quality…
Descriptors: Credibility, Research Reports, Evaluation Methods, Research Design
Marc T. Braverman – Journal of Human Sciences & Extension, 2019
This article examines the concept of credible evidence in Extension evaluations with specific attention to the measures and measurement strategies used to collect and create data. Credibility depends on multiple factors, including data quality and methodological rigor, characteristics of the stakeholder audience, stakeholder beliefs about the…
Descriptors: Extension Education, Program Evaluation, Evaluation Methods, Planning
Kenneth R. Jones; Eugenia P. Gwynn; Allison M. Teeter – Journal of Human Sciences & Extension, 2019
This article provides insight into how an adequate approach to selecting methods can establish credible and actionable evidence. The authors offer strategies to effectively support Extension professionals, including program developers and evaluators, in being more deliberate when selecting appropriate qualitative and quantitative methods. In…
Descriptors: Evaluation Methods, Credibility, Evidence, Evaluation Criteria
Johnson, Spencer T.; Ewbank, Ann Dutton – Knowledge Quest, 2018
One of the main responsibilities of school librarians is to teach students to evaluate the credibility of information. There is little evidence to suggest that students are being explicitly taught how to evaluate news obtained through social media. As avenues for giving and getting information evolve, so must ways of teaching students so that they…
Descriptors: Heuristics, Evaluation Methods, News Media, Social Media
Argyrous, George – Evidence & Policy: A Journal of Research, Debate and Practice, 2015
This paper illustrates the use of a quality assessment tool for regression analysis. It is designed for non-specialist "consumers" of evidence, such as policy makers. The tool provides a series of questions such consumers of evidence can ask to interrogate regression analysis, and is illustrated with reference to a recent study published…
Descriptors: Evaluation Methods, Regression (Statistics), Evidence, Critical Thinking
Goldhaber, Dan; Harris, Douglas N.; Loeb, Susanna; McCaffrey, Daniel F.; Raudenbush, Stephen W. – Carnegie Foundation for the Advancement of Teaching, 2015
It is common knowledge that teacher quality is a key in-school factor affecting student achievement. While the quality of teaching clearly matters for how much students learn, this quality is challenging to measure. Evaluating teacher quality based on the level of their students' end-of-year test scores has been one method of assessing…
Descriptors: Teacher Effectiveness, Teacher Evaluation, Evaluation Methods, Measurement Techniques
Bryson, John M.; Patton, Michael Quinn; Bowman, Ruth A. – Evaluation and Program Planning, 2011
In the broad field of evaluation, the importance of stakeholders is often acknowledged and different categories of stakeholders are identified. Far less frequent is careful attention to analysis of stakeholders' interests, needs, concerns, power, priorities, and perspectives and subsequent application of that knowledge to the design of…
Descriptors: Evaluation Methods, Evaluation, Stakeholders, Interests
Zhang, Shenglan; Duke, Nell K.; Jimenez, Laura M. – Reading Teacher, 2011
This article introduces a framework designed to improve students' awareness of the need to critically evaluate websites as sources of information and to improve their skill at doing so. The framework, called the WWWDOT framework, encourages students to think about at least six dimensions when evaluating a website: (1) Who wrote this and what…
Descriptors: Credentials, Control Groups, Grade 5, Internet
Morgan, Philip – Journal of University Teaching and Learning Practice, 2008
The use of evaluation to examine and improve the quality of teaching and courses is now a component of practice at most universities. However, despite the various methods and opportunities for evaluation, a lack of understanding of the processes, measures, and value is among the major impediments to effective evaluation. Evaluation requires an understanding…
Descriptors: Teacher Evaluation, Instructional Effectiveness, Flow Charts, Program Improvement

Williams, David D. – Educational Evaluation and Policy Analysis, 1986
Through a description and comparison of standards for evaluation and criteria for judging naturalistic inquiries, some potential conflicts in using naturalistic methods are identified. Analysis of these problems suggests that compromises in the use of evaluation standards and criteria for naturalistic procedures are usually necessary. (Author/JAZ)
Descriptors: Conflict of Interest, Credibility, Ethics, Evaluation Criteria

Kling, Rob; McKim, Geoffrey – Journal of the American Society for Information Science, 1999
Discussion of electronic publishing and scholarly communication provides an analytical approach for evaluating disciplinary conventions and for proposing policies about scholarly electronic publishing. Considers Internet posting as prior publication; examines publicity, access, and trustworthiness; and considers the value of peer reviewing.
Descriptors: Access to Information, Credibility, Electronic Publishing, Evaluation Methods
Leighton, Jacqueline P. – Educational Measurement: Issues and Practice, 2004
The collection of verbal reports is one way in which cognitive and developmental psychologists gather data to formulate and corroborate models of problem solving. The current use of verbal reports to design and validate educational assessments reflects the growing trend to fuse cognitive psychological research and educational measurement. However,…
Descriptors: Psychologists, Misconceptions, Measurement Techniques, Developmental Psychology
Tom, Alan R. – Texas Tech Journal of Education, 1983
Procedures of the National Council for Accreditation of Teacher Education are criticized, based on the review of an innovative program run jointly by Washington University and Maryville College in Missouri. Teams evaluating the same program at the two schools approved Washington University's and disapproved Maryville's. (PP)
Descriptors: Accreditation (Institutions), Accrediting Agencies, Agency Role, Cooperative Programs

Felix, Joseph L. – Educational Evaluation and Policy Analysis, 1979
Evaluation procedures and programs in the Cincinnati, Ohio school system, the role of local school evaluators, and the models for school evaluation--based on high, moderate, or low trust--are described. Evaluators serve local schools in formative and summative evaluation projects, in assessing needs, and in meeting them. (MH)
Descriptors: Credibility, Evaluation Methods, Evaluation Needs, Evaluators

Lowry, Robert C.; Silver, Brian D. – PS: Political Science and Politics, 1996
Asserts that variance between a university's reputation as an institution and its commitment to research has a greater impact on political science department rankings than any internal factors within the department. Includes several tables showing statistical variables of department and university rankings. (MJP)
Descriptors: Academic Education, Achievement Rating, Analysis of Variance, Credibility