Showing 1 to 15 of 35 results
Peer reviewed
Direct link
Finn, Bridgid; Arslan, Burcu; Walsh, Matthew – Applied Measurement in Education, 2020
To score an essay response, raters draw on previously trained skills and knowledge of the underlying rubric and scoring criteria. Cognitive processes such as remembering, forgetting, and skill decay likely influence rater performance. To investigate how forgetting influences scoring, we evaluated raters' scoring accuracy on TOEFL and GRE essays.…
Descriptors: Epistemology, Essay Tests, Evaluators, Cognitive Processes
Peer reviewed
Direct link
Nazerian, Samaneh; Abbasian, Gholam-Reza; Mohseni, Ahmad – Cogent Education, 2021
Despite growing interest in studies on the Zone of Proximal Development (ZPD), its implementation in individualized and group-wide forms has remained controversial. To shed some empirical light on the issue, this study was designed to examine the applicability of the two ZPD-based instruction scenarios to the writing accuracy of two levels of…
Descriptors: Sociocultural Patterns, Second Language Learning, Second Language Instruction, English (Second Language)
Peer reviewed
Direct link
Ahmadi Shirazi, Masoumeh – SAGE Open, 2019
Threats to construct validity should be reduced to a minimum. To that end, sources of bias, namely raters, items, and tests, as well as gender, age, race, language background, culture, and socio-economic status, need to be identified and removed. This study investigates raters' experience, language background, and the choice of essay prompt as potential…
Descriptors: Foreign Countries, Language Tests, Test Bias, Essay Tests
Peer reviewed
Direct link
Beigman Klebanov, Beata; Ramineni, Chaitanya; Kaufer, David; Yeoh, Paul; Ishizaki, Suguru – Language Testing, 2019
Essay writing is a common type of constructed-response task used frequently in standardized writing assessments. However, the impromptu, timed nature of such essay tests has drawn increasing criticism for lacking authenticity relative to real-world writing in classroom and workplace settings. The goal of this paper is to contribute evidence to a…
Descriptors: Test Validity, Writing Tests, Writing Skills, Persuasive Discourse
Peer reviewed
Download full text (PDF on ERIC)
Moglen, Daniel – CATESOL Journal, 2015
This article considers the use of TOEFL scores for placement and advising of international graduate students at a northern California research university. As the number of international students rises and funds for the graduate ESL program diminish, the way in which the university is handling the influx of…
Descriptors: Foreign Students, Graduate Students, Student Placement, English (Second Language)
Peer reviewed
Direct link
Sawaki, Yasuyo; Quinlan, Thomas; Lee, Yong-Won – Language Assessment Quarterly, 2013
The present study examined the factor structures across features of 446 examinees' responses to a writing task that integrates reading and listening modalities as well as reading and listening comprehension items of the TOEFL iBT[R] (Internet-based test). Both human and automated scores obtained for the integrated essays were utilized. Based on a…
Descriptors: English (Second Language), Language Tests, Essay Tests, Factor Structure
Attali, Yigal – Educational Testing Service, 2011
The e-rater[R] automated essay scoring system is used operationally in the scoring of TOEFL iBT[R] independent essays. Previous research has found support for a 3-factor structure of the e-rater features. This 3-factor structure has an attractive hierarchical linguistic interpretation with a word choice factor, a grammatical convention within a…
Descriptors: Essay Tests, Language Tests, Test Scoring Machines, Automation
Peer reviewed
Direct link
Bridgeman, Brent; Trapani, Catherine; Attali, Yigal – Applied Measurement in Education, 2012
Essay scores generated by machine and by human raters are generally comparable; that is, they can produce scores with similar means and standard deviations, and machine scores generally correlate as highly with human scores as scores from one human correlate with scores from another human. Although human and machine essay scores are highly related…
Descriptors: Scoring, Essay Tests, College Entrance Examinations, High Stakes Tests
Peer reviewed
Direct link
Barkaoui, Khaled – Language Assessment Quarterly, 2013
This article critiques traditional single-level statistical approaches (e.g., multiple regression analysis) to examining relationships between language test scores and variables in the assessment setting. It highlights the conceptual, methodological, and statistical problems associated with these techniques in dealing with multilevel or nested…
Descriptors: Hierarchical Linear Modeling, Statistical Analysis, Multiple Regression Analysis, Generalizability Theory
Peer reviewed
Download full text (PDF on ERIC)
Burstein, Jill; Flor, Michael; Tetreault, Joel; Madnani, Nitin; Holtzman, Steven – ETS Research Report Series, 2012
This annotation study is designed to deepen our understanding of the paraphrase strategies used by native and nonnative English speakers and how these strategies might affect test takers' essay scores. Toward that end, this study examines and analyzes paraphrasing and the types of linguistic modifications used in paraphrase in…
Descriptors: Essay Tests, Scores, Native Speakers, English (Second Language)
Peer reviewed
Direct link
Weigle, Sara Cushing – Assessing Writing, 2013
This article presents considerations for using automated scoring systems to evaluate second language writing. A distinction is made between English language learners in English-medium educational systems and those studying English in their own countries for a variety of purposes, and between learning-to-write and writing-to-learn in a second…
Descriptors: Scoring, Second Language Learning, Second Languages, English Language Learners
Peer reviewed
Download full text (PDF on ERIC)
Blanchard, Daniel; Tetreault, Joel; Higgins, Derrick; Cahill, Aoife; Chodorow, Martin – ETS Research Report Series, 2013
This report presents work on the development of a new corpus of non-native English writing. It will be useful for the task of native language identification, as well as grammatical error detection and correction, and automatic essay scoring. In this report, the corpus is described in detail.
Descriptors: Language Tests, Second Language Learning, English (Second Language), Writing Tests
Attali, Yigal – Educational Testing Service, 2011
This paper proposes an alternative content measure for essay scoring, based on the "difference" in the relative frequency of a word in high-scored versus low-scored essays. The "differential word use" (DWU) measure is the average of these differences across all words in the essay. A positive value indicates the essay is using…
Descriptors: Scoring, Essay Tests, Word Frequency, Content Analysis
Peer reviewed
Direct link
Cubilo, Justin; Winke, Paula – Language Assessment Quarterly, 2013
Researchers debate whether listening tasks should be supported by visuals. Most empirical research in this area has been conducted on the effects of visual support on listening comprehension tasks employing multiple-choice questions. The present study seeks to expand this research by investigating the effects of video listening passages (vs.…
Descriptors: Listening Comprehension Tests, Visual Stimuli, Writing Tests, Video Technology
Quinlan, Thomas; Higgins, Derrick; Wolff, Susanne – Educational Testing Service, 2009
This report evaluates the construct coverage of the e-rater[R] scoring engine. The matter of construct coverage depends on whether one defines writing skill in terms of process or product. Originally, the e-rater engine consisted of a large set of components with a proven ability to predict human holistic scores. By organizing these capabilities…
Descriptors: Guides, Writing Skills, Factor Analysis, Writing Tests