Showing 1 to 15 of 20 results
Peer reviewed
Full text available on ERIC (PDF)
What Works Clearinghouse, 2017
The What Works Clearinghouse (WWC) evaluates research studies that look at the effectiveness of education programs, products, practices, and policies, which the WWC calls "interventions." Many studies of education interventions make claims about impacts on students' outcomes. Some studies have designs that enable readers to make causal…
Descriptors: Program Design, Program Development, Program Effectiveness, Program Evaluation
Peer reviewed
Full text available on ERIC (PDF)
Richer, Amanda; Charmaraman, Linda; Ceder, Ineke – Afterschool Matters, 2018
Like instruments used in afterschool programs to assess children's social and emotional growth or to evaluate staff members' performance, instruments used to evaluate program quality should be free from bias. Practitioners and researchers alike want to know that assessment instruments, whatever their type or intent, treat all people fairly and do…
Descriptors: Cultural Differences, Social Bias, Interrater Reliability, Program Evaluation
Peer reviewed
Full text available on ERIC (PDF)
Farid, Alem – Electronic Journal of e-Learning, 2014
Although there are tools to assess students' readiness in an "online learning context," little is known about the "psychometric" properties of these tools. A systematic review of 5107 published and unpublished papers identified in a literature search on student online readiness assessment tools between 1990 and…
Descriptors: Online Courses, Electronic Learning, Learning Readiness, Psychometrics
Peer reviewed
Direct link
Elbeck, Matt; Bacon, Don – Journal of Education for Business, 2015
The absence of universally accepted definitions for direct and indirect assessment motivates the purpose of this article: to offer definitions that are literature-based and theoretically driven, meeting K. Lewin's (1945) dictum that "there is nothing so practical as a good theory" (p. 129). The authors synthesize the literature to…
Descriptors: Definitions, Evaluation Methods, Global Approach, Evidence
Peer reviewed
Direct link
Ho, Andrew D. – Teachers College Record, 2014
Background/Context: The target of assessment validation is not an assessment but the use of an assessment for a purpose. Although the validation literature often provides examples of assessment purposes, comprehensive reviews of these purposes are rare. Additionally, assessment purposes posed for validation are generally described as discrete and…
Descriptors: Elementary Secondary Education, Standardized Tests, Measurement Objectives, Educational Change
Peer reviewed
Direct link
Royal, Kenneth D.; Gilliland, Kurt O.; Kernick, Edward T. – Anatomical Sciences Education, 2014
Any examination with moderate- to high-stakes implications for examinees should be psychometrically sound and legally defensible. Currently, two broad, competing families of test theories are used to score examination data. The majority of instructors outside the high-stakes testing arena rely on classical test theory…
Descriptors: Item Response Theory, Scoring, Evaluation Methods, Anatomy
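For context, the two families of test theories contrasted in this abstract are conventionally summarized as follows; these are standard textbook formulations, not equations taken from the article itself:

\[
X = T + E \qquad \text{(classical test theory: observed score = true score + random error)}
\]
\[
P(X_{pi} = 1 \mid \theta_p, b_i) = \frac{e^{\theta_p - b_i}}{1 + e^{\theta_p - b_i}} \qquad \text{(Rasch IRT model: person ability } \theta_p \text{, item difficulty } b_i \text{)}
\]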
Peer reviewed
Full text available on ERIC (PDF)
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Automated scoring models for the "e-rater"® scoring engine were built and evaluated for the "GRE"® argument and issue-writing tasks. Prompt-specific, generic, and generic-with-prompt-specific-intercept scoring models were built, and evaluation statistics such as weighted kappas, Pearson correlations, standardized difference in…
Descriptors: Scoring, Test Scoring Machines, Automation, Models
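The evaluation statistics named in this abstract (weighted kappa, Pearson correlation, standardized difference) are straightforward to compute. The sketch below is a minimal illustration assuming an integer score scale; the function names and sample scores are hypothetical, and this is not ETS's implementation:

import numpy as np

def quadratic_weighted_kappa(human, machine, min_score, max_score):
    # Cohen's kappa with quadratic disagreement weights over the score scale.
    k = max_score - min_score + 1
    observed = np.zeros((k, k))
    for h, m in zip(human, machine):
        observed[h - min_score, m - min_score] += 1
    observed /= observed.sum()                      # joint proportions
    expected = np.outer(observed.sum(axis=1),       # chance agreement from
                        observed.sum(axis=0))       # the two marginals
    i, j = np.indices((k, k))
    weights = (i - j) ** 2 / (k - 1) ** 2           # quadratic weights
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

def standardized_difference(human, machine):
    # Standardized mean difference (one common pooling convention).
    human, machine = np.asarray(human, float), np.asarray(machine, float)
    pooled_sd = np.sqrt((human.var(ddof=1) + machine.var(ddof=1)) / 2)
    return (machine.mean() - human.mean()) / pooled_sd

human = [3, 4, 4, 5, 2, 3, 4]                       # hypothetical ratings
machine = [3, 4, 5, 5, 2, 2, 4]
print(quadratic_weighted_kappa(human, machine, 1, 6))
print(np.corrcoef(human, machine)[0, 1])            # Pearson correlation
print(standardized_difference(human, machine))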
Peer reviewed
Direct link
Baartman, Liesbeth K. J.; Prins, Frans J.; Kirschner, Paul A.; van der Vleuten, Cees P. M. – Evaluation and Program Planning, 2011
The goal of this article is to contribute to the validation of a self-evaluation method, which can be used by schools to evaluate the quality of their Competence Assessment Program (CAP). The outcomes of the self-evaluations of two schools are systematically compared: a novice school with little experience in competence-based education and…
Descriptors: Educational Innovation, Competency Based Education, Self Evaluation (Groups), Program Validation
Porter, Jennifer Marie – ProQuest LLC, 2010
This research evaluated the inter-rater reliability of the Performance Assessment for California Teachers (PACT). Methods for estimating overall rater consistency included percent agreement and Cohen's kappa (1960), which yielded discrepant estimates of rater agreement on whether candidates passed or failed particular PACT rubrics.…
Descriptors: Interrater Reliability, Program Effectiveness, Scoring Rubrics, Item Analysis
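For readers unfamiliar with the two consistency estimates mentioned here, a minimal sketch of percent agreement and unweighted Cohen's kappa follows (hypothetical pass/fail data, not code or data from the dissertation):

from collections import Counter

def percent_agreement(r1, r2):
    # Proportion of cases on which the two raters give the same rating.
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    # Chance-corrected agreement: (p_o - p_e) / (1 - p_e).
    n = len(r1)
    p_o = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum((c1[label] / n) * (c2[label] / n) for label in set(r1) | set(r2))
    return (p_o - p_e) / (1 - p_e)

rater1 = ["pass", "pass", "fail", "pass", "fail", "pass"]
rater2 = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(percent_agreement(rater1, rater2))   # raw agreement
print(cohens_kappa(rater1, rater2))        # chance-corrected agreement

Because kappa corrects for agreement expected by chance, the two statistics can diverge, which is consistent with the discrepancies the abstract reports.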
Peer reviewed
Direct link
Gilbreath, Brad; Rose, Gail L.; Dietrich, Kim E. – Mentoring & Tutoring: Partnership in Learning, 2008
The purpose of this article is to inform readers about the types of instruments available for assessing and improving mentoring in organizations. Extensive review of the psychological, business and medical literature was conducted to identify commercially published, practitioner-oriented instruments. All of the instruments that were…
Descriptors: Mentors, Psychometrics, Literature Reviews, Evaluation Methods
Heh, Peter – ProQuest LLC, 2009
The current study examined the validation and alignment of the PASA-Science by determining whether the alternate science assessment anchors link to the regular education science anchors; whether the PASA-Science assessment items assess science content; whether the items link to the alternate science eligible content; and what…
Descriptors: Program Effectiveness, Special Education, Science Education, Science Tests
Peer reviewed
Direct link
van der Knaap, Leontien M.; Leeuw, Frans L.; Bogaerts, Stefan; Nijssen, Laura T. J. – American Journal of Evaluation, 2008
This article presents an approach to systematic reviews that combines the Campbell Collaboration Crime and Justice standards and the realist notion of contexts-mechanisms-outcomes (CMO) configurations. Both approaches have their advantages and drawbacks, and the authors will make a case for combining both approaches to profit from their advantages…
Descriptors: Research Methodology, Evaluation Methods, Criminology, Evaluation Research
Peer reviewed
Direct link
Coryn, Chris L. S. – Journal of MultiDisciplinary Evaluation, 2007
The author discusses validation hierarchies grounded in the tradition of quantitative research that generally consists of the criteria of validity, reliability and objectivity and compares this with similar criteria developed by the qualitative tradition, described as trustworthiness, dependability and confirmability. Although these quantitative…
Descriptors: Research Methodology, Statistical Analysis, Value Judgment, Qualitative Research
Peer reviewed
Direct link
Keating, Daniel P. – Early Education and Development, 2007
This article is a commentary for the special issue on the Early Development Instrument (EDI), a community tool to assess children's school readiness and developmental outcomes at a group level. The EDI is administered by kindergarten teachers, who assess their kindergarten students on 5 developmental domains: physical health and well-being, social…
Descriptors: School Readiness, Formative Evaluation, Kindergarten, Cognitive Development
Peer reviewed
Direct link
Febey, Karen; Coyne, Molly – American Journal of Evaluation, 2007
The field of program evaluation lacks interactive teaching tools. To address this pedagogical issue, the authors developed a collaborative learning technique called Program Evaluation: The Board Game. The authors present the game and its development in this practitioner-oriented article. The evaluation board game is an adaptable teaching tool…
Descriptors: Teaching Methods, Program Evaluation, Evaluation Methods, Cooperative Learning