Showing 1 to 15 of 16 results
Peer reviewed
Bateson, Gordon – International Journal of Computer-Assisted Language Learning and Teaching, 2021
As a result of the Japanese Ministry of Education's recent edict that students' written and spoken English should be assessed in university entrance exams, there is an urgent need for tools to help teachers and students prepare for these exams. Although some commercial tools already exist, they are generally expensive and inflexible. To address…
Descriptors: Test Construction, Computer Assisted Testing, Internet, Writing Tests
Peer reviewed
PDF on ERIC
Sumner, Josh – Research-publishing.net, 2021
Comparative Judgement (CJ) has emerged as a technique that typically makes use of holistic judgement to assess difficult-to-specify constructs such as production (speaking and writing) in Modern Foreign Languages (MFL). In traditional approaches, markers assess candidates' work one-by-one in an absolute manner, assigning scores to different…
Descriptors: Holistic Approach, Student Evaluation, Comparative Analysis, Decision Making
Peer reviewed
PDF on ERIC
Walker, Paul – Composition Forum, 2017
This article describes and theorizes a failed writing program assessment study to question the influence of "the rhetoric of agreement," or reliability, on writing assessment practice and its prevalence in validating institutional mandated assessments. Offering the phrase "dwelling in disagreement" as a queer perspective, the…
Descriptors: Rhetoric, Writing Tests, Test Reliability, Program Validation
Peer reviewed
Humphry, Stephen M.; McGrane, Joshua A. – Australian Educational Researcher, 2015
This paper presents a method for equating writing assessments using pairwise comparisons which does not depend upon conventional common-person or common-item equating designs. Pairwise comparisons have been successfully applied in the assessment of open-ended tasks in English and other areas such as visual art and philosophy. In this paper,…
Descriptors: Writing Evaluation, Evaluation Methods, Comparative Analysis, Writing Tests
Haberman, Shelby J. – Educational Testing Service, 2011
Alternative approaches are discussed for the use of e-rater® to score the TOEFL iBT® Writing test. These approaches involve alternative criteria. In the first approach, the predicted variable is the expected rater score of the examinee's 2 essays. In the second approach, the predicted variable is the expected rater score of 2 essay responses by the…
Descriptors: Writing Tests, Scoring, Essays, Language Tests
Peer reviewed
Huot, Brian; O'Neill, Peggy; Moore, Cindy – College English, 2010
Writing program administrators and other composition specialists need to know the history of writing assessment in order to create a rich and responsible culture of it today. In its first fifty years, the field of writing assessment followed educational measurement in general by focusing on issues of reliability, whereas in its next fifty years,…
Descriptors: Writing (Composition), Writing Evaluation, Writing Tests, Validity
Peer reviewed
Schley, Sara; Albertini, John – Journal of Deaf Studies and Deaf Education, 2005
The NTID Writing Test was developed to assess the writing ability of postsecondary deaf students entering the National Technical Institute for the Deaf and to determine their appropriate placement into developmental writing courses. While previous research (Albertini et al., 1986; Albertini et al., 1996; Bochner, Albertini, Samar, & Metz, 1992)…
Descriptors: Deafness, Writing Ability, Writing Tests, College Students
Peer reviewed
Wang, Tianyou; Kolen, Michael J.; Harris, Deborah J. – Journal of Educational Measurement, 2000
Describes procedures for calculating conditional standard error of measurement (CSEM) and reliability of scale scores and classification consistency of performance levels. Applied these procedures to data from the American College Testing Program's Work Keys Writing Assessment with sample sizes of 7,097, 1,035, and 1,793. Results show that the…
Descriptors: Adults, Classification, Error of Measurement, Item Response Theory
Lee, Yong-Won – 2001
An essay test is now an integral part of the computer-based Test of English as a Foreign Language (TOEFL-CBT). This paper provides a brief overview of the current TOEFL-CBT essay test, describes the operational procedures for essay scoring, including the Online Scoring Network (OSN) of the Educational Testing Service (ETS), and discusses major…
Descriptors: Computer Assisted Testing, English (Second Language), Essay Tests, Interrater Reliability
Poteet, James A. – Diagnostique, 1990
The Test of Written Language-2, for use with students ages 7-17, identifies low achievement, determines strengths and weaknesses, and documents progress. The test assesses three language components (conventional, linguistic, and conceptual) using two formats (contrived and spontaneous). This paper describes the test's administration, scoring,…
Descriptors: Achievement Tests, Elementary Secondary Education, Low Achievement, Student Evaluation
Peer reviewed
Slomp, David H.; Fuite, Jim – Assessing Writing, 2004
Specialists in the field of large-scale, high-stakes writing assessment have, over the last forty years, alternately discussed the issue of maximizing either reliability or validity in test design. Factors complicating the debate--such as Messick's (1989) expanded definition of validity and the ethical implications of testing--are explored. An…
Descriptors: Information Theory, Writing Evaluation, Writing Tests, Test Validity
Freedman, Sarah Warshauer – 1991
Writing teachers and educators can add to information from large-scale testing, and teachers can strengthen classroom assessment, by creating a tight fit between large-scale testing and classroom assessment. Across the years, large-scale testing programs have struggled with a difficult problem: how to evaluate student writing reliably and…
Descriptors: Elementary Secondary Education, Foreign Countries, Informal Assessment, Portfolios (Background Materials)
Taylor, Ronald L. – Diagnostique, 1990
The Woodcock-Johnson Psycho-Educational Battery-Revised is a set of tests designed to measure cognitive abilities, scholastic aptitude, and achievement in the areas of reading, mathematics, and written language, in individuals aged 2-95 years. This paper describes the test battery's administration, scoring, standardization, reliability, and…
Descriptors: Academic Achievement, Academic Aptitude, Achievement Tests, Adults
Peer reviewed
PDF on ERIC
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as the computer technology that evaluates and scores the written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests
Peer reviewed
PDF on ERIC
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater® has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation