Showing 1 to 15 of 52 results
Peer reviewed
Kinarsky, Alana R.; Christie, Christina A. – American Journal of Evaluation, 2022
Since 2007, two taxonomies have been proposed to identify the components of evaluation practice that may be specified in an evaluation policy. Little is known, however, about how these taxonomies align with evaluation policies developed by philanthropic foundations. Through thematic analysis, this article first compares 12 foundation evaluation…
Descriptors: Taxonomy, Evaluation Methods, Philanthropic Foundations, Educational Policy
Rashtchi, Mojgan; Ghazi Mir Saeed, SeyyedeFateme – Sage Research Methods Cases, 2023
The reason for conducting the present case study was the problems the researchers encountered during data collection for another research project (Primary Study) entitled "The effects of virtual versus traditional flipped classes on EFL learners' grammar knowledge, self-regulation, and autonomy." Two online questionnaires were…
Descriptors: Data Collection, Questionnaires, Barriers, Research Methodology
Peer reviewed
Bennett, Cary – Learning and Teaching: The International Journal of Higher Education in the Social Sciences, 2016
Assessment rubrics are being promoted and introduced into tertiary teaching practices on the grounds that they are an efficient and reliable tool to evaluate student performance effectively and promote student learning. However, there has been little discussion on the value of using assessment rubrics in higher education. Rather, they are being…
Descriptors: Scoring Rubrics, Evaluation Methods, Student Evaluation, Higher Education
Peer reviewed
Levin, Henry M.; Belfield, Clive – Journal of Research on Educational Effectiveness, 2015
Cost-effectiveness analysis is rarely used in education. When it is used, it often fails to meet methodological standards, especially with regard to cost measurement. Although there are occasional criticisms of these failings, we believe that it is useful to provide a listing of the more common concerns and how they might be addressed. Based upon…
Descriptors: Cost Effectiveness, Comparative Analysis, Validity, Educational Policy
Peer reviewed
Praetorius, Anna-Katharina; Lenske, Gerlinde; Helmke, Andreas – Learning and Instruction, 2012
Despite considerable interest in the topic of instructional quality in research as well as practice, little is known about the quality of its assessment. Using generalizability analysis as well as content analysis, the present study investigates how reliably and validly instructional quality is measured by observer ratings. Twelve trained raters…
Descriptors: Student Teachers, Interrater Reliability, Content Analysis, Observation
Peer reviewed
Panaretos, John; Malesios, Chrisovaladis C. – Measurement: Interdisciplinary Research and Perspectives, 2012
In their article Ruscio et al. (Ruscio, Seaman, D'Oriano, Stremlo, & Mahalchik, this issue) present a comparative study of some of the different variants of the "h" index. The study evaluates a total of 22 metrics, including the "h" index and "h"-type indices, as well as other conventional measures. The novelty of their work is to a large extent…
Descriptors: Comparative Analysis, Usability, Statistical Analysis, Productivity
Peer reviewed
Cacioppo, John T.; Cacioppo, Stephanie – Measurement: Interdisciplinary Research and Perspectives, 2012
Ruscio and colleagues (Ruscio, Seaman, D'Oriano, Stremlo, & Mahalchik, this issue) provide a thoughtful empirical analysis of 22 different measures of individual scholarly impact. The simplest metric is number of publications, which Simonton (1997) found to be a reasonable predictor of career trajectories. Although the assessment of the scholarly…
Descriptors: Measurement, Outcome Measures, Scholarship, Bibliometrics
Peer reviewed
Porter, Theodore M. – Measurement: Interdisciplinary Research and Perspectives, 2012
Ruscio et al. (Ruscio, Seaman, D'Oriano, Stremlo, & Mahalchik, this issue) write of a thing with which scientists and scholars are all too familiar, the assessment of published research and of its authors. The author was startled to discover how little the agenda of the paper seems to engage with factors one relies on for salary and promotion…
Descriptors: Evaluation Criteria, Data Analysis, Evaluative Thinking, Bias
Peer reviewed
Killeen, Peter R. – Psychological Methods, 2010
Lecoutre, Lecoutre, and Poitevineau (2010) have provided sophisticated grounding for "p[subscript rep]." Computing it precisely appears, fortunately, no more difficult than doing so approximately. Their analysis will help move predictive inference into the mainstream. Iverson, Wagenmakers, and Lee (2010) have also validated…
Descriptors: Replication (Evaluation), Measurement Techniques, Research Design, Research Methodology
Peer reviewed
Bornmann, Lutz – Measurement: Interdisciplinary Research and Perspectives, 2012
Ruscio, Seaman, D'Oriano, Stremlo, and Mahalchik (this issue) evaluate 22 bibliometric indicators, including conventional measures, like the number of publications, the "h" index, and many "h" index variants. To assess the quality of the indicators, their well-justified criteria encompass conceptual, empirical, and practical…
Descriptors: Foreign Countries, Citation Analysis, Correlation, Meta Analysis
Peer reviewed
Lecoutre, Bruno; Lecoutre, Marie-Paule; Poitevineau, Jacques – Psychological Methods, 2010
P. R. Killeen's (2005a) probability of replication ("p[subscript rep]") of an experimental result is the fiducial Bayesian predictive probability of finding a same-sign effect in a replication of an experiment. "p[subscript rep]" is now routinely reported in "Psychological Science" and has also begun to appear in…
Descriptors: Research Methodology, Guidelines, Probability, Computation
Peer reviewed
Serlin, Ronald C. – Psychological Methods, 2010
The sense that replicability is an important aspect of empirical science led Killeen (2005a) to define "p[subscript rep]," the probability that a replication will result in an outcome in the same direction as that found in a current experiment. Since then, several authors have praised and criticized "p[subscript rep]," culminating…
Descriptors: Epistemology, Effect Size, Replication (Evaluation), Measurement Techniques
Peer reviewed
Cumming, Geoff – Psychological Methods, 2010
This comment offers three descriptions of "p[subscript rep]" that start with a frequentist account of confidence intervals, draw on R. A. Fisher's fiducial argument, and do not make Bayesian assumptions. Links are described among "p[subscript rep]," "p" values, and the probability a confidence interval will capture…
Descriptors: Replication (Evaluation), Measurement Techniques, Research Methodology, Validity
Peer reviewed
Sinharay, Sandip; Haberman, Shelby J. – Measurement: Interdisciplinary Research and Perspectives, 2009
In this commentary, the authors discuss some of the issues regarding the use of diagnostic classification models that practitioners should keep in mind. In the authors' experience, these issues are not as well known as they should be. The authors then provide recommendations on diagnostic scoring.
Descriptors: Scoring, Reliability, Validity, Classification
Peer reviewed
Tummons, Jonathan – Assessment & Evaluation in Higher Education, 2010
This paper forms part of an exploration of assessment on one part-time higher education (HE) course: an in-service, professional qualification for teachers and trainers in the learning and skills sector which is delivered on a franchise basis across a network of further education colleges in the north of England. This paper proposes that the…
Descriptors: Foreign Countries, Portfolios (Background Materials), Portfolio Assessment, Validity