Showing all 15 results
Peer reviewed
James, David; Schraw, Gregory; Kuch, Fred – Assessment & Evaluation in Higher Education, 2019
We proposed an extended form of the Govindarajulu and Barnett margin of error (MOE) equation and used it with an analysis of variance experimental design to examine the effects of aggregating student evaluations of teaching (SET) ratings on the MOE statistic. The interpretative validity of SET ratings can be questioned when the number of students…
Descriptors: Student Evaluation of Teacher Performance, Statistical Analysis, Validity, Computation
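The Govindarajulu and Barnett MOE equation itself is not reproduced in the abstract. A minimal sketch, assuming the textbook finite-population margin of error for a class-mean rating (the form that work builds on, not the authors' exact extended equation), shows why aggregating ratings across sections shrinks the MOE; the numbers are illustrative:

```python
import math

def set_margin_of_error(s, n, N, z=1.96):
    """Margin of error for a mean SET rating when n of N enrolled
    students respond, with the finite population correction (FPC).
    Assumed textbook form, not the authors' extended equation."""
    fpc = math.sqrt((N - n) / (N - 1))
    return z * (s / math.sqrt(n)) * fpc

# One section: 20 of 30 students respond, rating SD = 0.9
print(round(set_margin_of_error(s=0.9, n=20, N=30), 2))   # ~0.23
# Aggregated over four similar sections: 80 of 120 respond
print(round(set_margin_of_error(s=0.9, n=80, N=120), 2))  # ~0.11
```

With the same response rate and spread, pooling sections quadruples n and roughly halves the margin of error, which is the aggregation effect the study examines.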
Peer reviewed
Park, Eunkyoung; Dooris, John – Assessment & Evaluation in Higher Education, 2020
This study uses decision tree analysis to determine the most important variables that predict high overall teaching and course scores on a student evaluation of teaching (SET) instrument at a large public research university in the United States. Decision tree analysis is a more robust and intuitive approach for analysing and interpreting SET…
Descriptors: Predictor Variables, Student Evaluation of Teacher Performance, Decision Making, Statistical Analysis
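A decision-tree analysis of SET data can be sketched with scikit-learn; the item names and responses below are hypothetical stand-ins, not the study's instrument:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 500
items = ["clear_explanations", "feedback_quality", "course_organization"]
X = rng.integers(1, 6, size=(n, 3)).astype(float)   # 1-5 Likert responses
# Overall score loads mostly on the first item, plus noise (toy model)
y = 0.6 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.3, n)

tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
for name, importance in zip(items, tree.feature_importances_):
    print(f"{name}: {importance:.2f}")   # ranking of predictor items
```

Unlike a regression table, the fitted tree reads as a set of plain if-then splits, which is the intuitiveness the abstract refers to.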
Peer reviewed
Clayson, Dennis E. – Assessment & Evaluation in Higher Education, 2018
The student evaluation of teaching process is generally thought to produce reliable results. This consistency is found in class and instructor averages, while considerable inconsistency exists in individual student responses. This paper reviews these issues along with a detailed examination of common measures of reliability that…
Descriptors: Student Evaluation of Teacher Performance, Reliability, Validity, Evaluation Criteria
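The averages-versus-individuals contrast can be made concrete with a toy simulation (illustrative, not the paper's data): single ratings of the same instructor barely agree, yet split-half class means track each other closely:

```python
import numpy as np

rng = np.random.default_rng(1)
true_quality = rng.normal(4.0, 0.3, 50)   # 50 instructors' "true" scores

# Two independent single-student ratings per instructor (noise SD = 1.0)
ind1 = true_quality + rng.normal(0, 1.0, 50)
ind2 = true_quality + rng.normal(0, 1.0, 50)
print(np.corrcoef(ind1, ind2)[0, 1])      # low: single responses disagree

# Split-half class means, 15 raters per half
half1 = np.array([rng.normal(t, 1.0, 15).mean() for t in true_quality])
half2 = np.array([rng.normal(t, 1.0, 15).mean() for t in true_quality])
print(np.corrcoef(half1, half2)[0, 1])    # much higher: averages converge
```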
Peer reviewed
Britton, Emily; Simper, Natalie; Leger, Andrew; Stephenson, Jenn – Assessment & Evaluation in Higher Education, 2017
Effective teamwork skills are essential for success in an increasingly team-based workplace. However, research suggests that there is often confusion concerning how teamwork is measured and assessed, making it difficult to develop these skills in undergraduate curricula. The goal of the present study was to develop a sustainable tool for assessing…
Descriptors: Teamwork, Undergraduate Students, Skills, Student Evaluation
Peer reviewed
Vulperhorst, Jonne; Lutz, Christel; de Kleijn, Renske; van Tartwijk, Jan – Assessment & Evaluation in Higher Education, 2018
To refine selective admission models, we investigate which measure of prior achievement has the best predictive validity for academic success in university. We compare the predictive validity of three core high school subjects to the predictive validity of high school grade point average (GPA) for academic achievement in a liberal arts university…
Descriptors: Predictive Validity, Foreign Countries, Grade Point Average, Selective Admission
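The comparison of predictors can be sketched with synthetic data (illustrative only; the subject names and noise levels are assumptions): correlate each prior-achievement measure with a first-year outcome.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
ability = rng.normal(0, 1, n)                 # latent achievement
math_g  = ability + rng.normal(0, 0.8, n)     # three assumed core subjects
english = ability + rng.normal(0, 0.8, n)
science = ability + rng.normal(0, 0.8, n)
hs_gpa  = (math_g + english + science) / 3    # GPA averages the subjects
uni_gpa = ability + rng.normal(0, 0.7, n)     # university outcome

for name, x in [("math", math_g), ("english", english),
                ("science", science), ("HS GPA", hs_gpa)]:
    print(f"{name}: r = {np.corrcoef(x, uni_gpa)[0, 1]:.2f}")
```

In this toy model GPA predicts best because averaging the subjects cancels grading noise; the study's empirical question is whether real subject grades carry signal that such averaging washes out.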
Peer reviewed
Bergsmann, Evelyn; Klug, Julia; Burger, Christoph; Först, Nora; Spiel, Christiane – Assessment & Evaluation in Higher Education, 2018
There is a lively discussion on how to evaluate competence-based higher education in both evaluation and competence research. The instruments used are often limited to course evaluation or specific competences, taking a rather narrow perspective. Furthermore, the instruments often comprise predetermined competences that cannot be adapted to higher…
Descriptors: Questionnaires, Minimum Competency Testing, Screening Tests, Higher Education
Peer reviewed
Menéndez-Varela, José-Luis; Gregori-Giralt, Eva – Assessment & Evaluation in Higher Education, 2016
Rubrics have attained considerable importance in the authentic and sustainable assessment paradigm; nevertheless, few studies have examined their contribution to validity, especially outside the domain of educational studies. This empirical study used a quantitative approach to analyse the validity of a rubrics-based performance assessment. Raters…
Descriptors: Scoring Rubrics, Validity, Performance Based Assessment, College Freshmen
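One common quantitative check on rater consistency for rubric scores is weighted kappa; the sketch below uses hypothetical data, and the study's actual analyses may differ:

```python
from sklearn.metrics import cohen_kappa_score

rater_a = [3, 2, 4, 4, 1, 3, 2, 4, 3, 2]   # rubric levels 1-4
rater_b = [3, 2, 4, 3, 1, 3, 2, 4, 4, 2]
# Quadratic weighting credits near-misses on an ordinal rubric scale
print(cohen_kappa_score(rater_a, rater_b, weights="quadratic"))
```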
Peer reviewed
van Ooijen-van der Linden, Linda; van der Smagt, Maarten J.; Woertman, Liesbeth; te Pas, Susan F. – Assessment & Evaluation in Higher Education, 2017
Prediction accuracy of academic achievement for admission purposes requires adequate "sensitivity" and "specificity" of admission tools, yet the available information on the validity and predictive power of admission tools is largely based on studies using correlational and regression statistics. The goal of this study was to…
Descriptors: Bias, Perception, Theories, Admission Criteria
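Sensitivity and specificity of an admission cut-off are straightforward to compute from a confusion matrix; a minimal sketch with hypothetical test scores and a pass/fail outcome:

```python
import numpy as np

rng = np.random.default_rng(3)
score = rng.normal(60, 10, 200)                   # admission-tool score
success = (score + rng.normal(0, 12, 200)) > 60   # later achievement

admit = score > 60                                # cut-off decision
tp = np.sum(admit & success)
tn = np.sum(~admit & ~success)
fp = np.sum(admit & ~success)
fn = np.sum(~admit & success)

print("sensitivity:", tp / (tp + fn))  # successful students admitted
print("specificity:", tn / (tn + fp))  # unsuccessful students rejected
```

Correlations summarize the score-outcome relationship overall; sensitivity and specificity instead evaluate the concrete admit/reject decisions a cut-off produces, which is the distinction the abstract draws.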
Peer reviewed
Simpson, Genevieve; Clifton, Julian – Assessment & Evaluation in Higher Education, 2016
Peer review feedback, developed to assist students with increasing the quality of group reports and developing peer review skills, was added to a master's level Climate Change Policy and Planning unit. A pre- and post-survey was conducted to determine whether students found the process a valuable learning opportunity: 87% of students responding to…
Descriptors: Graduate Students, Plagiarism, Peer Evaluation, Feedback (Response)
Peer reviewed
Jeffery, Daniel; Yankulov, Krassimir; Crerar, Alison; Ritchie, Kerry – Assessment & Evaluation in Higher Education, 2016
The psychometric measures of accuracy, reliability and validity of peer assessment are critical qualities for its use as a supplement to instructor grading. In this study, we seek to determine which factors related to peer review are the most influential on these psychometric measures, with a primary focus on the accuracy of peer assessment or how…
Descriptors: Undergraduate Students, Peer Evaluation, Accuracy, Writing Assignments
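Accuracy of peer assessment is often operationalized as agreement between peer and instructor grades for the same work; an illustrative check with synthetic scores (the leniency offset is an assumption, not a finding of this study):

```python
import numpy as np

rng = np.random.default_rng(4)
instructor = rng.uniform(50, 95, 40)            # reference grades
peer_mean = instructor + rng.normal(3, 5, 40)   # assumed slightly lenient peers

r = np.corrcoef(instructor, peer_mean)[0, 1]
bias = np.mean(peer_mean - instructor)
print(f"validity r = {r:.2f}, mean over-marking = {bias:.1f} points")
```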
Peer reviewed
James, David E.; Schraw, Gregory; Kuch, Fred – Assessment & Evaluation in Higher Education, 2015
We present an equation, derived from standard statistical theory, that can be used to estimate sampling margin of error for student evaluations of teaching (SETs). We use the equation to examine the effect of sample size, response rates and sample variability on the estimated sampling margin of error, and present results in four tables that allow…
Descriptors: Student Evaluation of Teacher Performance, Sampling, Validity, Statistical Analysis
Peer reviewed
Kandiko Howson, Camille; Buckley, Alex – Assessment & Evaluation in Higher Education, 2017
Student engagement has become a key feature of UK higher education, but until recently there has been a lack of data to track, benchmark and drive enhancement. In 2015, the first full UK administration of a range of survey items drawn from the US-based National Survey of Student Engagement (NSSE) took place. This is the latest example of international…
Descriptors: Foreign Countries, Test Construction, College Freshmen, College Seniors
Peer reviewed
Yin, Hongbiao; Wang, Wenyan – Assessment & Evaluation in Higher Education, 2016
Viewing student engagement as a multidimensional construct, this study explored the motivation and engagement of undergraduate students in China. A sample of 1131 students from 10 full-time universities in Beijing participated in a survey. The results showed that the Motivation and Engagement Scale for university/college students is a promising…
Descriptors: Undergraduate Students, Student Motivation, Learner Engagement, Foreign Countries
Peer reviewed
Malau-Aduli, Bunmi S.; Zimitat, Craig – Assessment & Evaluation in Higher Education, 2012
The aim of this study was to assess the effect of the introduction of peer review processes on the quality of multiple-choice examinations in the first three years of an Australian medical course. The impact of the peer review process and overall quality assurance (QA) processes were evaluated by comparing the examination data generated in earlier…
Descriptors: Foreign Countries, Peer Evaluation, Multiple Choice Tests, Test Construction
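Two standard item-quality indices a QA or peer-review process might track are item facility and point-biserial discrimination; the response matrix below is synthetic and the IRT-style generating model is an assumption, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(5)
ability = rng.normal(0, 1, 100)            # 100 examinees
item_difficulty = rng.normal(0, 0.5, 20)   # 20 MCQ items
# Toy model: P(correct) rises with ability, falls with difficulty
p = 1 / (1 + np.exp(-(ability[:, None] - item_difficulty[None, :])))
responses = (rng.random((100, 20)) < p).astype(float)

facility = responses.mean(axis=0)          # proportion correct per item
total = responses.sum(axis=1)
# Point-biserial discrimination: item score vs. rest-of-test score
discrimination = [np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
                  for j in range(20)]
print(np.round(facility[:5], 2), np.round(discrimination[:5], 2))
```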
Peer reviewed
Burton, Richard F.; Miller, David J. – Assessment & Evaluation in Higher Education, 1999
Discusses statistical procedures for quantifying test unreliability due to guessing in multiple-choice and true/false tests. Proposes two new measures of test unreliability: one concerned with the resolution of defined levels of knowledge and the other with the probability of examinees being incorrectly ranked. Both models are based on the binomial…
Descriptors: Guessing (Tests), Higher Education, Multiple Choice Tests, Objective Tests
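A worked instance of the binomial guessing model referenced above: an examinee who knows k of n true/false items guesses the rest, so the score is k plus a Binomial(n − k, 0.5) draw. The numbers are illustrative:

```python
import math

def guessing_sd(n, k, c=0.5):
    """Standard deviation of the score contributed by blind guessing,
    sqrt((n - k) * c * (1 - c)) under the binomial model."""
    return math.sqrt((n - k) * c * (1 - c))

n = 50
for k in (20, 30, 40):   # defined levels of knowledge
    mean = k + (n - k) * 0.5
    print(f"knows {k}/{n}: expected score {mean:.0f} "
          f"+/- {guessing_sd(n, k):.1f}")
```

The score distributions for adjacent knowledge levels overlap, so guessing noise can both blur defined levels of knowledge and cause examinees to be incorrectly ranked, the two forms of unreliability the abstract names.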