Showing all 3 results
Peer reviewed
Lim, Gad S. – Language Testing, 2011
Raters are central to writing performance assessment, and rater development--training, experience, and expertise--involves a temporal dimension. However, few studies have examined new and experienced raters' rating performance longitudinally over multiple time points. This study uses operational data from the writing section of the MELAB (n =…
Descriptors: Expertise, Writing Evaluation, Performance Based Assessment, Writing Tests
Peer reviewed
Plough, India C.; Briggs, Sarah L.; Van Bonn, Sarah – Language Testing, 2010
The study reported here examined the evaluation criteria used to assess the proficiency and effectiveness of the language produced in an oral performance test of English conducted in an American university context. Empirical methods were used to qualitatively and quantitatively analyze transcriptions of the Oral English Tests (OET) of 44…
Descriptors: Graduate Students, Listening Comprehension, Evaluators, Performance Tests
Peer reviewed
Johnson, Jeff S.; Lim, Gad S. – Language Testing, 2009
Language performance assessments typically require human raters, introducing possible error. In international examinations of English proficiency, rater language background is an especially salient factor that needs to be considered. The existence of rater language background-related bias in writing performance assessment is the object of this…
Descriptors: Performance Based Assessment, Performance Tests, Native Speakers, English (Second Language)