Showing all 14 results
Peer reviewed
Feinstein, Osvaldo – American Journal of Evaluation, 2023
"Integrative evaluation" is an approach with two main phases: identification of plausible rival hypotheses and integration of rival hypotheses. The first phase may correspond to traditional adversary evaluation, whereas the second phase, that is not included in adversary evaluation, requires integrative thinking which can be applied when…
Descriptors: Evaluation, Integrated Activities, Intervention, Evaluators
Peer reviewed
PDF on ERIC
Bosch, Nigel – Journal of Educational Data Mining, 2021
Automatic machine learning (AutoML) methods automate the time-consuming feature-engineering process so that researchers can produce accurate student models more quickly and easily. In this paper, we compare two AutoML feature engineering methods in the context of the National Assessment of Educational Progress (NAEP) data mining competition. The…
Descriptors: Accuracy, Learning Analytics, Models, National Competency Tests
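The abstract's notion of automated feature engineering can be made concrete with a generic sketch. This is illustrative only; it is not one of the AutoML methods compared in the paper, and the data and labels are invented. The idea it shows is that interaction features are generated mechanically rather than hand-crafted.

```python
# Minimal sketch of automated feature generation (illustrative only; not the
# AutoML methods compared in the paper). PolynomialFeatures mechanically
# creates interaction terms that would otherwise be engineered by hand.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))             # stand-in for raw student features
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # label driven by an interaction

model = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False),  # auto-generated features
    StandardScaler(),
    LogisticRegression(max_iter=1000),
)
print(cross_val_score(model, X, y, cv=5).mean())
```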
Peer reviewed
Desombre, Caroline; Anegmar, Souad; Delelis, Gérald – European Journal of Psychology of Education, 2018
This study investigated the hypothesis that cognitive performance of students with physical disabilities may be influenced by the evaluators' identity. Students with or without a physical disability completed a logic test and were informed that they would be evaluated by students from their own group (ingroup condition) or from another group…
Descriptors: Stereotypes, Hypothesis Testing, Cognitive Ability, Physical Disabilities
Hicks, Tyler; Rodríguez-Campos, Liliana; Choi, Jeong Hoon – American Journal of Evaluation, 2018
To begin statistical analysis, Bayesians quantify their confidence in modeling hypotheses with priors. A prior describes the probability of a certain modeling hypothesis apart from the data. Bayesians should be able to defend their choice of prior to a skeptical audience. Collaboration between evaluators and stakeholders could make their choices…
Descriptors: Bayesian Statistics, Evaluation Methods, Statistical Analysis, Hypothesis Testing
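The role of the prior described in this abstract can be stated with Bayes' theorem; this is the standard formula, not material taken from the article itself.

```latex
% Posterior belief in a modeling hypothesis H after seeing data D:
% the prior P(H) expresses confidence in H apart from the data.
P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)}
```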
Peer reviewed
Krawczyk, Michal – Assessment & Evaluation in Higher Education, 2018
In this study, data on grades awarded for bachelor's and master's theses at a large Polish university were used to identify possible discrimination based on gender or physical attractiveness. The focus is on the gap between the grades awarded by the advisor (who knows the student personally) and the referee (who typically does not, so that gender is less…
Descriptors: Gender Differences, Grades (Scholastic), Correlation, College Students
Peer reviewed
PDF on ERIC
Boulay, Beth; Martin, Carlos; Zief, Susan; Granger, Robert – Society for Research on Educational Effectiveness, 2013
Contradictory findings from "well-implemented" rigorous evaluations invite researchers to identify the differences that might explain the contradictions, helping to generate testable hypotheses for new research. This panel will examine efforts to ensure that the large number of local evaluations being conducted as part of four…
Descriptors: Program Evaluation, Evaluation Methods, Research, Evaluators
Peer reviewed
Lee, Heather A. – Journal of Research on Christian Education, 2015
If Christian schools desire students to achieve higher-level thinking, then the textbooks that teachers use should reflect such thinking. Using Risner's (1987) methodology, raters classified questions from two Christian publishers' fifth grade reading textbooks based on the revised Bloom's taxonomy (Anderson et al., 2001). The questions in the A…
Descriptors: Religious Education, Christianity, Textbooks, Thinking Skills
Peer reviewed
Rusticus, Shayna A.; Lovato, Chris Y. – Practical Assessment, Research & Evaluation, 2011
Assessing the comparability of different groups is an issue facing many researchers and evaluators in a variety of settings. Commonly, null hypothesis significance testing (NHST) is incorrectly used to demonstrate comparability when a non-significant result is found. This is problematic because a failure to find a difference between groups is not…
Descriptors: Medical Education, Evaluators, Intervals, Testing
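As a hedged illustration of the interval-based alternative the descriptors point toward (rather than inferring comparability from a non-significant NHST result), the sketch below checks whether a 90% confidence interval for a mean difference lies inside a pre-chosen equivalence margin. The data, margin, and degrees-of-freedom shortcut are invented for illustration and are not the authors' analysis.

```python
# Sketch of a confidence-interval equivalence check (TOST logic): two groups
# are treated as comparable only if the 90% CI for the mean difference falls
# entirely within a pre-specified equivalence margin. Illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(70, 10, 60)   # hypothetical scores, group A
b = rng.normal(71, 10, 55)   # hypothetical scores, group B
margin = 5.0                 # equivalence margin chosen in advance

diff = a.mean() - b.mean()
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
df = len(a) + len(b) - 2     # simple pooled-df approximation
lo, hi = diff + np.array([-1, 1]) * stats.t.ppf(0.95, df) * se

equivalent = (lo > -margin) and (hi < margin)
print(f"90% CI for difference: ({lo:.2f}, {hi:.2f}); equivalent: {equivalent}")
```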
Peer reviewed
Geluso, Joe – Computer Assisted Language Learning, 2013
Usage-based theories of language learning suggest that native speakers of a language are acutely aware of formulaic language due in large part to frequency effects. Corpora and data-driven learning can offer useful insights into frequent patterns of naturally occurring language to second/foreign language learners who, unlike native speakers, are…
Descriptors: Native Speakers, English (Second Language), Search Engines, Second Language Learning
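As a toy illustration of how frequency information is typically pulled from a corpus (not the study's procedure; the miniature corpus is invented), the snippet below counts the most frequent two-word sequences in a handful of sentences.

```python
# Toy illustration of extracting frequent word sequences (bigrams) from a
# small corpus with plain Python; not the study's procedure.
from collections import Counter

corpus = [
    "thank you very much for your help",
    "thank you so much for your time",
    "very much appreciated thank you",
]

bigrams = Counter()
for line in corpus:
    tokens = line.split()
    bigrams.update(zip(tokens, tokens[1:]))

for (w1, w2), n in bigrams.most_common(3):
    print(f"{w1} {w2}: {n}")
```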
Peer reviewed
Broughton, Mary; Stevens, Catherine – Psychology of Music, 2009
The experiment reported in this article investigated the assumption that visual movement plays a role in musician-to-audience communication in marimba performance. Body movement is of particular relevance here as the expressive capabilities of the marimba are relatively restricted, and the movements required to play it are visible. Twenty-four…
Descriptors: Audience Awareness, Nonverbal Communication, Audiences, Musicians
Peer reviewed
Riedesel, Paul L.; Blocker, Jean T. – Sociology and Social Research, 1978
Hypothesizing that prejudiced feelings about minorities are due not only to racial differences but also to the fact that minorities often have lower socioeconomic rank, this study examines data from a household survey in Tulsa, Oklahoma, and documents the greater salience of socioeconomic cues and the lesser salience of racial cues for social…
Descriptors: Differences, Evaluators, Hypothesis Testing, Minority Groups
Peer reviewed
Perloff, Richard M.; And Others – New Directions for Program Evaluation, 1980
Causes of evaluator bias include: overemphasizing concrete, salient, and retrievable information; reporting only evidence that confirms the hypothesis; focusing on stable personality factors rather than on situation and environment; developing positive perceptions of a program as both an evaluator and a highly involved participant; statistical naivete;…
Descriptors: Bias, Cognitive Processes, Evaluative Thinking, Evaluators
Sharp, D. E. Ann; And Others – 1985
Hypothesis generation and testing is outlined as an additional domain for program evaluators. Program evaluation involves a thorough analysis of the processes that contribute to change (or a lack of change) among program recipients. This process of change is analyzed in two ways: (1) treating programs as naturally occurring field studies; and (2)…
Descriptors: Adults, Data Analysis, Data Interpretation, Evaluators
Peer reviewed
Campbell, Donald T. – Evaluation Practice, 1994
In spite of many inherent problems, impact evaluation should remain an integral part of program evaluation, both because impact evaluation brings its budget justification with it, and because its focus on causal hypotheses is an essential part of evaluation. Methodology based on the author's body of work is reviewed. (SLD)
Descriptors: Budgeting, Causal Models, Evaluation Methods, Evaluation Problems