Showing 76 to 90 of 3,206 results
Peer reviewed
Direct link
Carniel, Jessica; Hickey, Andrew; Southey, Kim; Brömdal, Annette; Crowley-Cyr, Lynda; Eacersall, Douglas; Farmer, Will; Gehrmann, Richard; Machin, Tanya; Pillay, Yosheen – Research Ethics, 2023
Ethics review processes are frequently perceived as extending from codes and protocols rooted in biomedical disciplines. As a result, many researchers in the humanities and social sciences (HASS) find these processes to be misaligned, if not outright obstructive to their research. This leads some scholars to advocate against HASS participation…
Descriptors: Ethics, Humanities, Social Sciences, Research
Peer reviewed
Direct link
Darvishi, Ali; Khosravi, Hassan; Rahimi, Afshin; Sadiq, Shazia; Gasevic, Dragan – IEEE Transactions on Learning Technologies, 2023
Engaging students in creating learning resources has demonstrated pedagogical benefits. However, to effectively utilize a repository of student-generated content (SGC), a selection process is needed to separate high- from low-quality resources as some of the resources created by students can be ineffective, inappropriate, or incorrect. A common…
Descriptors: Student Developed Materials, Educational Assessment, Peer Evaluation, Evaluation Methods
Peer reviewed
Direct link
Montrosse-Moorhead, Bianca; Gambino, Anthony J.; Yahn, Laura M.; Fan, Mindy; Vo, Anne T. – American Journal of Evaluation, 2022
A budding area of research is devoted to studying evaluator curriculum, yet to date, it has focused exclusively on describing the content and emphasis of topics or competencies in university-based programs. This study aims to expand the foci of research efforts and investigates the extent to which evaluators agree on what competencies should guide…
Descriptors: Masters Programs, Doctoral Programs, Competence, Competency Based Education
Peer reviewed
PDF on ERIC Download full text
Fisne, Fatima Nur; Sata, Mehmet; Karakaya, Ismail – International Journal of Assessment Tools in Education, 2022
Performance standards have important consequences for all the stakeholders in the assessment of L2 academic writing. These standards not only describe the level of writing performance but also provide a basis for making evaluative decisions on the academic writing. Such a high-stakes role of the performance standards requires the enhancement of…
Descriptors: Standard Setting, Writing Evaluation, Academic Language, English (Second Language)
Peer reviewed
PDF on ERIC Download full text
Prendergast, Caroline O.; Baker, Gianina; Henning, Gavin; Kahn, Susan; McConnell, Kate; Stitt-Bergh, Monica; Townsend, Linda – Research & Practice in Assessment, 2022
Assessment practitioners in higher education follow a variety of paths to their roles. Diverse preparation supports creative problem-solving in a changing educational landscape. However, it can also lead to inconsistency in language, preparation, and background knowledge. Further, the chasms between assessment practitioners' paths can lead to…
Descriptors: Intellectual Disciplines, Professional Identity, Higher Education, Role
Lambert, Richard G.; Holcomb, T. Scott; Bottoms, Bryndle – Center for Educational Measurement and Evaluation, 2022
The validity of the Kappa coefficient of chance-corrected agreement has been questioned when the prevalence of specific rating scale categories is low and agreement between raters is high. The researchers proposed the Lambda Coefficient of Rater-Mediated Agreement as an alternative to Kappa to address these concerns. Lambda corrects for chance…
Descriptors: Interrater Reliability, Evaluators, Rating Scales, Teacher Evaluation
Peer reviewed
Direct link
Ciji A. Heiser; Julene L. Jones; Glenn Allen Phillips – Assessment Update, 2024
Institutions of higher education are asked to consider how their work can advance equity in institutional outcomes. Assessment, too, has been asked to consider the ways in which traditional student assessment "privileges and validates certain types of learning and evidence of learning over others" (Montenegro and Jankowski 2017, p. 5).…
Descriptors: Higher Education, Equal Education, Educational Assessment, Professional Development
Peer reviewed
Direct link
Laura Schildt; Bart Deygers; Albert Weideman – Language Testing, 2024
In the context of policy-driven language testing for citizenship, a growing body of research examines the political justifications and ethical implications of language requirements and test use. However, virtually no studies have looked at the role that language testers play in the evolution of language requirements. Critical gaps remain in our…
Descriptors: Language Tests, Citizenship, Educational Policy, Assessment Literacy
Peer reviewed
Direct link
Kelly Edwards; James Soland – Educational Assessment, 2024
Classroom observational protocols, in which raters observe and score the quality of teachers' instructional practices, are often used to evaluate teachers for consequential purposes despite evidence that scores from such protocols are frequently driven by factors, such as rater and temporal effects, that have little to do with teacher quality. In…
Descriptors: Classroom Observation Techniques, Teacher Evaluation, Accuracy, Scores
Peer reviewed
PDF on ERIC Download full text
Svihla, Vanessa; Gallup, Amber – Practical Assessment, Research & Evaluation, 2021
In making validity arguments, a central consideration is whether the instrument fairly and adequately covers intended content, and this is often evaluated by experts. While common procedures exist for quantitatively assessing this, the effect of loss aversion--a cognitive bias that would predict a tendency to retain items--on these procedures has…
Descriptors: Content Validity, Anxiety, Bias, Test Items
Peer reviewed
Direct link
Humphrey-Murto, Susan; Shaw, Tammy; Touchie, Claire; Pugh, Debra; Cowley, Lindsay; Wood, Timothy J. – Advances in Health Sciences Education, 2021
Understanding which factors can impact rater judgments in assessments is important to ensure quality ratings. One such factor is whether prior performance information (PPI) about learners influences subsequent decision making. The information can be acquired directly, when the rater sees the same learner, or different learners over multiple…
Descriptors: Influences, Evaluators, Value Judgment, Bias
Peer reviewed
Direct link
Harris, Kevin; Oatley, Chad; Mumford, Steven; Pham, Phung K.; Nunns, Heather – American Journal of Evaluation, 2021
This method note presents Q methodology as a useful tool for evaluators to add to their practice toolbox. Q methodology, which involves both quantitative and qualitative techniques, can help researchers and evaluators systematically understand subjectivity and the communicability of opinions and perspectives. We first provide an overview of Q…
Descriptors: Q Methodology, Program Evaluation, Foreign Countries, Stakeholders
Peer reviewed
Direct link
Westine, Carl; Li, Zhi – AERA Online Paper Repository, 2021
Intentional synthesis of research findings is necessary to inform practice, particularly within the research on evaluation (RoE) literature. This study expands upon the work of Coryn et al. (2017) to synthesize the RoE studies pertaining to the domain of evaluation context. Findings from this study demonstrate that organization and program size,…
Descriptors: Evaluation Research, Institutional Characteristics, Evaluation, Evaluators
Peer reviewed
Direct link
Elizabeth L. Wetzler; Kenneth S. Cassidy; Margaret J. Jones; Chelsea R. Frazier; Nickalous A. Korbut; Chelsea M. Sims; Shari S. Bowen; Michael Wood – Teaching of Psychology, 2025
Background: Generative artificial intelligence (AI) represents a potentially powerful, time-saving tool for grading student essays. However, little is known about how AI-generated essay scores compare to human instructor scores. Objective: The purpose of this study was to compare the essay grading scores produced by AI with those of human…
Descriptors: Essays, Writing Evaluation, Scores, Evaluators
Peer reviewed
Direct link
Wang, Jue; Engelhard, George; Combs, Trenton – Journal of Experimental Education, 2023
Unfolding models are frequently used to develop scales for measuring attitudes. Recently, unfolding models have been applied to examine rater severity and accuracy within the context of rater-mediated assessments. One of the problems in applying unfolding models to rater-mediated assessments is that the substantive interpretations of the latent…
Descriptors: Writing Evaluation, Scoring, Accuracy, Computational Linguistics