Showing 1 to 15 of 18 results
Peer reviewed
Bramley, Tom; Vitello, Sylvia – Assessment in Education: Principles, Policy & Practice, 2019
Comparative Judgement (CJ) is an increasingly widely investigated method in assessment for creating a scale, for example of the quality of essays. One area that has attracted attention in CJ studies is the optimisation of the selection of pairs of objects for judgement. One approach is known as adaptive comparative judgement (ACJ). It has been…
Descriptors: Reliability, Evaluation Methods, Comparative Analysis, Essay Tests
Peer reviewed
Condon, William – Assessing Writing, 2013
Automated Essay Scoring (AES) has garnered a great deal of attention from the rhetoric and composition/writing studies community since the Educational Testing Service began using e-rater[R] and the "Criterion"[R] Online Writing Evaluation Service as products in scoring writing tests, and most of the responses have been negative. While the…
Descriptors: Measurement, Psychometrics, Evaluation Methods, Educational Testing
Peer reviewed
Barkaoui, Khaled – Assessment in Education: Principles, Policy & Practice, 2011
This study examined the effects of marking method and rater experience on ESL (English as a Second Language) essay test scores and rater performance. Each of 31 novice and 29 experienced raters rated a sample of ESL essays both holistically and analytically. Essay scores were analysed using a multi-faceted Rasch model to compare test-takers'…
Descriptors: Writing Evaluation, Writing Tests, Essay Tests, Interrater Reliability
Peer reviewed
Mogey, Nora; Paterson, Jessie; Burk, John; Purcell, Michael – ALT-J: Research in Learning Technology, 2010
Students at the University of Edinburgh do almost all their work on computers, but at the end of the semester they are examined by handwritten essays. Intuitively it would be appealing to allow students the choice of handwriting or typing, but this raises a concern that perhaps this might not be "fair"--that the choice a student makes,…
Descriptors: Handwriting, Essay Tests, Interrater Reliability, Grading
Peer reviewed
Coniam, David – ReCALL, 2009
This paper describes a study of the computer essay-scoring program BETSY. While the use of computers in rating written scripts has been criticised in some quarters for lacking transparency or lack of fit with how human raters rate written scripts, a number of essay rating programs are available commercially, many of which claim to offer comparable…
Descriptors: Writing Tests, Scoring, Foreign Countries, Interrater Reliability
Peer reviewed
Koul, Ravinder; Clariana, Roy B.; Salehi, Roya – Journal of Educational Computing Research, 2005
This article reports the results of an investigation of the convergent criterion-related validity of two computer-based tools for scoring concept maps and essays as part of the ongoing formative evaluation of these tools. In pairs, participants researched a science topic online and created a concept map of the topic. Later, participants…
Descriptors: Scoring, Essay Tests, Test Validity, Formative Evaluation
Peer reviewed
Sundberg, Sara Brooks – History Teacher, 2006
This paper explores whether or not the simple addition of essay questions in examinations increased the learning of the sort normally tested by objective questions alone. Thirteen sections of a "United States History to 1877" class comprised the study group. The experimental group, consisting of nine sections, wrote essay questions on…
Descriptors: Experimental Groups, Scores, Control Groups, United States History
Peer reviewed
Feletti, G. I.; Gillies, A. H. B. – Journal of Medical Education, 1982
Two methods of assessing medical students' problem-solving skills were compared in actual use: a modified essay question and a structured oral examination based on it. The reliability and validity of each were found to be similar. (MSE)
Descriptors: Clinical Diagnosis, Comparative Analysis, Essay Tests, Evaluation Methods
Kemerer, Richard; Wahlstrom, Merlin – Performance and Instruction, 1985
Compares the features, learning outcomes tested, reliability, viability, and cost effectiveness of essay tests with those of interpretive tests used in training programs. A case study illustrating how an essay test was converted to an interpretive test and pilot tested is included to illustrate the advantages of interpretive testing. (MBR)
Descriptors: Case Studies, Comparative Analysis, Cost Effectiveness, Essay Tests
Peer reviewed
Bamberg, Betty – College Composition and Communication, 1982
Compares the efficacy of objective and holistic writing evaluation, in the context of the freshman writing program evaluation procedure at a west coast university. Indicates that holistic evaluation of writing samples is a far more accurate measure of writing quality and potential. (HTH)
Descriptors: Comparative Analysis, Essay Tests, Evaluation Methods, Higher Education
Peer reviewed
Culpepper, Marilyn Mayer; Ramsdell, Rae – Research in the Teaching of English, 1982
The test scores of college freshmen given both a multiple choice test and an essay test of writing skills were compared to assess the validity of a multiple choice test compared with an essay test. (HOD)
Descriptors: Comparative Analysis, Essay Tests, Evaluation Methods, Higher Education
Peer reviewed
Orpen, Christopher – Higher Education, 1982
Term papers written by 42 students in two subjects were assessed by their classmates and by lecturers in the relevant subjects. No difference occurred between lecturers and students in average marks, variations in marks, agreement in marks (reliability), or relationship between marks and performance in final examinations (validity). (Author/MSE)
Descriptors: College Faculty, Comparative Analysis, Essay Tests, Evaluation Methods
Peer reviewed
Ackerman, Terry A.; Smith, Philip L. – Applied Psychological Measurement, 1988
The similarity of information provided by direct and indirect methods of writing assessment was investigated using 219 tenth graders. A resulting cognitive model of writing skills indicates that practitioners interested in reliably measuring all aspects of the proposed writing process continuum should use both direct and indirect methods. (TJH)
Descriptors: Comparative Analysis, Essay Tests, Evaluation Methods, Factor Analysis
Peer reviewed
Hogg, Peter; Boyle, Tom; Lawson, Richard – Journal of Educational Multimedia and Hypermedia, 1999
Reports on a comparative assessment of a multimedia learning environment based on a guided discovery approach called CORE (Concept Object Refinement Expression) with two control conditions, lecture and electronic book, in an undergraduate radiography course. Discusses results of qualitative and quantitative measures of effectiveness, pretests and…
Descriptors: Comparative Analysis, Computer Assisted Instruction, Educational Environment, Essay Tests
Tiffany, Gerald E.; And Others – 1991
In 1991, a student learning outcomes assessment was conducted at Wenatchee Valley College, Washington. All English 101 students in the winter and spring quarters of 1990 wrote a 2-hour final exam. Winter quarter students wrote on the same topic while spring quarter students wrote on one of three randomly assigned topics. Five English 101…
Descriptors: Community Colleges, Comparative Analysis, Curriculum Evaluation, Essay Tests