Showing all 14 results
Peer reviewed
Direct link
Erguvan, Inan Deniz; Aksu Dunya, Beyza – Language Testing in Asia, 2020
This study examined the rater severity of instructors using a multi-trait rubric in a freshman composition course offered at a private university in Kuwait. Use of standardized multi-trait rubrics is a recent development in this course, and student feedback and anchor papers provided by instructors for each essay exam necessitated the assessment of…
Descriptors: Foreign Countries, College Freshmen, Freshman Composition, Writing Evaluation
Michelle Herridge – ProQuest LLC, 2021
Evaluation of student written work during summative assessments is a critical task for instructors at all educational levels. Nevertheless, few research studies provide insight into how different instructors approach this task. Chemistry faculty instructors (FIs) and graduate student instructors (GSIs) regularly engage in the…
Descriptors: Science Instruction, Chemistry, College Faculty, Teaching Assistants
Peer reviewed
PDF on ERIC: Download full text
Finn, Bridgid; Wendler, Cathy; Ricker-Pedley, Kathryn L.; Arslan, Burcu – ETS Research Report Series, 2018
This report investigates whether the time between scoring sessions influences operational and nonoperational scoring accuracy. The study evaluates raters' scoring accuracy on constructed-response essays for the "GRE"® General Test. Binomial linear mixed-effect models are presented that evaluate how the effect of various…
Descriptors: Intervals, Scoring, Accuracy, Essay Tests
Peer reviewed
Direct link
Wu, Siew Mei; Tan, Susan – Higher Education Research and Development, 2016
Rating essays is a complex task where students' grades could be adversely affected by test-irrelevant factors such as rater characteristics and rating scales. Understanding these factors and controlling their effects are crucial for test validity. Rater behaviour has been extensively studied through qualitative methods such as questionnaires and…
Descriptors: Scoring, Item Response Theory, Student Placement, College Students
Peer reviewed
PDF on ERIC: Download full text
Zhang, Mo – ETS Research Report Series, 2013
Many testing programs use automated scoring to grade essays. One issue in automated essay scoring that has not been examined adequately is population invariance and its causes. The primary purpose of this study was to investigate the impact of sampling in model calibration on population invariance of automated scores. This study analyzed scores…
Descriptors: Automation, Scoring, Essay Tests, Sampling
Peer reviewed
Direct link
Attali, Yigal; Lewis, Will; Steier, Michael – Language Testing, 2013
Automated essay scoring can produce reliable scores that are highly correlated with human scores, but is limited in its evaluation of content and other higher-order aspects of writing. The increased use of automated essay scoring in high-stakes testing underscores the need for human scoring that is focused on higher-order aspects of writing. This…
Descriptors: Scoring, Essay Tests, Reliability, High Stakes Tests
Peer reviewed
Direct link
Hale, Chris C. – Language Testing in Asia, 2015
Student self-assessment has been heralded as a way of increasing student ownership of the learning process, enhancing students' metacognitive awareness of their learning progress, and promoting learner autonomy. In a university setting, where a major aim is to promote critical thinking and attentiveness to one's responsibility in an academic…
Descriptors: Self Evaluation (Individuals), Learning Processes, Metacognition, Personal Autonomy
Peer reviewed
PDF on ERIC: Download full text
Kayapinar, Ulas – Eurasian Journal of Educational Research, 2014
Problem Statement: There have been many attempts to research the effective assessment of writing ability, and many proposals for how this might be done. In this sense, rater reliability plays a crucial role in making vital decisions about test takers at different turning points in both educational and professional life. Intra-rater and inter-rater…
Descriptors: Interrater Reliability, Essay Tests, Writing Tests, Grading
Scharf, Davida – ProQuest LLC, 2013
Purpose: The goal of the study was to test an intervention using a brief essay as an instrument for evaluating higher-order information literacy skills in college students, while accounting for prior conditions such as socioeconomic status and prior academic achievement, and to identify other predictors of information literacy through an evaluation…
Descriptors: Information Literacy, Intervention, Student Evaluation, College Students
Peer reviewed
Direct link
Cubilo, Justin; Winke, Paula – Language Assessment Quarterly, 2013
Researchers debate whether listening tasks should be supported by visuals. Most empirical research in this area has been conducted on the effects of visual support on listening comprehension tasks employing multiple-choice questions. The present study seeks to expand this research by investigating the effects of video listening passages (vs.…
Descriptors: Listening Comprehension Tests, Visual Stimuli, Writing Tests, Video Technology
Holifield-Scott, April – ProQuest LLC, 2011
A study was conducted to determine the extent to which high school and college/university Advanced Placement English Language and Composition readers value and implement the curricular requirements of Advanced Placement English Language and Composition. The participants were 158 readers of the 2010 Advanced Placement English Language and…
Descriptors: Advanced Placement, English Instruction, Writing (Composition), English Curriculum
Peer reviewed
Direct link
Mogey, Nora; Paterson, Jessie; Burk, John; Purcell, Michael – ALT-J: Research in Learning Technology, 2010
Students at the University of Edinburgh do almost all their work on computers, but at the end of the semester they are examined by handwritten essays. Intuitively it would be appealing to allow students the choice of handwriting or typing, but this raises a concern that it might not be "fair"--that the choice a student makes,…
Descriptors: Handwriting, Essay Tests, Interrater Reliability, Grading
Peer reviewed
PDF on ERIC: Download full text
Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine – Journal of Technology, Learning, and Assessment, 2006
This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system performance is first compared to that of individual human raters, a Bayesian system…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
Peer reviewed
Direct link
Clariana, Roy B.; Wallace, Patricia – Journal of Educational Computing Research, 2007
This proof-of-concept investigation describes a computer-based approach for deriving the knowledge structure of individuals and of groups from their written essays, and considers the convergent criterion-related validity of the computer-based scores relative to human rater essay scores and multiple-choice test scores. After completing a…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Construct Validity, Cognitive Structures