Showing 1 to 15 of 632 results
Peer reviewed
Direct link
Nathan A. Call; Alec M. Bernstein; Matthew J. O'Brien; Kelly M. Schieltz; Loukia Tsami; Dorothea C. Lerman; Wendy K. Berg; Scott D. Lindgren; Mark A. Connelly; David P. Wacker – Journal of Applied Behavior Analysis, 2024
Clinicians report primarily using functional behavioral assessment (FBA) methods that do not include functional analyses. However, studies examining the correspondence between functional analyses and other types of FBAs have produced inconsistent results. In addition, although functional analyses are considered the gold standard, their…
Descriptors: Functional Behavioral Assessment, Evaluation Methods, Young Children, Autism Spectrum Disorders
Peer reviewed
Direct link
Bartholomew, Scott R.; Mentzer, Nathan; Jones, Matthew; Sherman, Derek; Baniya, Sweta – International Journal of Technology and Design Education, 2022
Traditional efforts around improving assessment often center on the teacher as the evaluator of work rather than the students. These assessment efforts typically focus on measuring learning rather than stimulating, promoting, or producing learning in students. This paper summarizes a study of a large sample of undergraduate students (n = 550) in…
Descriptors: Undergraduate Students, Evaluation Methods, Learning, Comparative Analysis
Natalie D. Jones – ProQuest LLC, 2024
The use of Mixed Methods (MM) in evaluation has increasingly become a valuable asset in evaluators' methodological toolbox. Despite prolific research on various aspects of mixed methods, from typologies to ontological debates, there remains a dearth of empirically derived evidence on effectively communicating MM findings. This lack of empirical…
Descriptors: Evaluation Methods, Communication Strategies, Visual Aids, Evaluation Research
Peer reviewed
PDF on ERIC Download full text
Susan T. Guynn; James H. Blake; Nathan Nemire; Joe Bible – Journal of Extension, 2024
The South Carolina Master Naturalist Program, offered at six host sites across the state, provides nature-based education to citizen volunteers who will promote environmental stewardship. We conducted a mixed-methods evaluation (the integration of qualitative and quantitative data) of the South Carolina Master Naturalist Program. Overall, the…
Descriptors: Program Evaluation, Evaluation Methods, Natural Sciences, Science Education
Peer reviewed
Direct link
Badham, Louise; Furlong, Antony – International Journal of Testing, 2023
Multilingual summative assessments face significant challenges due to tensions that exist between multiple language provision and comparability. Yet, conventional approaches for investigating comparability in multilingual assessments fail to accommodate assessments that comprise extended responses that target complex constructs. This article…
Descriptors: Summative Evaluation, Multilingualism, Comparative Analysis, Literature
Peer reviewed
Direct link
Moulton, Shawn R.; Peck, Laura R.; Greeney, Adam – American Journal of Evaluation, 2018
In experimental evaluations of health and social programs, the role of dosage is rarely explored because researchers cannot usually randomize individuals to experience varying dosage levels. Instead, such evaluations reveal the average effects of exposure to an intervention, although program exposure may vary widely. This article compares three…
Descriptors: Marriage, Intervention, Prediction, Program Effectiveness
Peer reviewed
PDF on ERIC Download full text
Romeo, Marina; Yepes-Baldó, Montserrat; González, Vicenta; Burset, Silvia; Martín, Carolina; Bosch, Emma – International Journal of Instruction, 2022
The assessment process in higher education considers four aspects: assessment agents, procedure, content, and scoring. In this study, we delve into the "who." We analyze the role of transversal competence assessment agents in the framework of professional internships in university master's degree programs, comparing the suitability of their…
Descriptors: Internship Programs, Higher Education, Evaluators, Masters Programs
Peer reviewed
Direct link
Marine Simon; Alexandra Budke – Journal of Geography in Higher Education, 2024
Comparison is an important geographic method and a common task in geography education. Comparison is a complex competency to master, and written comparisons are challenging tasks both for students and assessors. As yet, however, there is no established test for evaluating comparison competency, nor a tool for enhancing it. Moreover, little is known about…
Descriptors: Geography Instruction, Student Evaluation, Comparative Analysis, Reliability
Peer reviewed
Direct link
John Bowser; Amy Bellmore; Jim Larson – International Journal of Bullying Prevention, 2020
The Wisconsin School Violence and Bullying Prevention Study, funded by the National Institutes of Justice (NIJ), was a two-year case-control study in 24 Wisconsin middle schools (11 experimental; 13 control) seeking to understand the impact of a comprehensive bullying prevention program on bullying victimization rates. Participating schools'…
Descriptors: Prevention, Bullying, Middle School Students, Program Evaluation
Peer reviewed
Direct link
Tucker, Susan; Stevahn, Laurie; King, Jean A. – American Journal of Evaluation, 2023
This article compares the purposes and content of the four foundational documents of the American Evaluation Association (AEA): the Program Evaluation Standards, the AEA Public Statement on Cultural Competence in Evaluation, the AEA Evaluator Competencies, and the AEA Guiding Principles. This reflection on alignment is an early effort in the third…
Descriptors: Professionalism, Comparative Analysis, Professional Associations, Program Evaluation
Wong, Vivian C.; Steiner, Peter M.; Anglin, Kylie L. – Grantee Submission, 2018
Given the widespread use of non-experimental (NE) methods for assessing program impacts, there is a strong need to know whether NE approaches yield causally valid results in field settings. In within-study comparison (WSC) designs, the researcher compares treatment effects from an NE with those obtained from a randomized experiment that shares the…
Descriptors: Evaluation Methods, Program Evaluation, Program Effectiveness, Comparative Analysis
Peer reviewed
Direct link
Lioutas, Evagelos D.; Charatsari, Chrysanthi; Cernic Istenic, Majda; La Rocca, Giuseppe; De Rosa, Marcello – Journal of Agricultural Education and Extension, 2019
Purpose: Considering current debates on ecosystem services' effectiveness and the AIS/AKIS functioning, in this study we suggest a new, systemic way to evaluate extension systems (ESs). Using this model, we compared the effectiveness of ESs in three countries with essential differences but also characteristic similarities in their agricultural…
Descriptors: Cross Cultural Studies, Rural Extension, Foreign Countries, Agricultural Occupations
Peer reviewed
Direct link
Paz-Ybarnegaray, Rodrigo; Douthwaite, Boru – American Journal of Evaluation, 2017
This article describes the development and use of a rapid evaluation approach to meet program accountability and learning requirements in a research for development program operating in five developing countries. The method identifies clusters of outcomes, both expected and unexpected, happening within areas of change. In a workshop, change agents…
Descriptors: Evaluation Methods, Program Evaluation, Accountability, Developing Nations
Sophie Litschwartz; Dan Cullinan; Colin Hill – Center for the Analysis of Postsecondary Readiness, 2024
Historically, colleges have used standardized testing to determine whether a student is ready for college-level work or requires developmental courses first, but this method has been criticized as inaccurate. To obtain more accurate placements, nearly three-quarters of colleges now use multiple measures assessment (MMA) systems. These systems…
Descriptors: Student Placement, Evaluation Methods, Alternative Assessment, Standardized Tests
Peer reviewed
PDF on ERIC Download full text
Hamutoglu, Nazire Burçin; Güngören, Özlem Canan; Duman, Ibrahim; Horzum, Mehmet Baris; Kiyici, Mubin; Akgün, Özcan Erkan – International Online Journal of Education and Teaching, 2019
The purpose of this study is to develop three scales to assess some features of a web-based monitoring and support system for internship processes of trainees which was named Monitoring and Support System (SIDES). These scales are: "SIDES Satisfaction Scale", "SIDES Acceptance Scale", and "SIDES Usability Scale". We…
Descriptors: Usability, Internship Programs, Information Systems, Specialists