Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 1
Since 2016 (last 10 years) | 6
Since 2006 (last 20 years) | 14
Descriptor
Writing Evaluation | 16
Writing Tests | 16
English (Second Language) | 15
Language Tests | 14
Second Language Learning | 14
Essays | 9
Computer Assisted Testing | 8
Correlation | 8
Scoring | 7
Scores | 5
Accuracy | 4
Author
Attali, Yigal | 2
Bridgeman, Brent | 2
Kantor, Robert | 2
Lee, Yong-Won | 2
Plakans, Lia | 2
Abdi Tabari, Mahmoud | 1
Assefi, Farzaneh | 1
Baba, Kyoko | 1
Barkaoui, Khaled | 1
Bilki, Zeynep | 1
Breland, Hunter | 1
Publication Type
Journal Articles | 14
Reports - Research | 11
Reports - Evaluative | 4
Dissertations/Theses -… | 1
Numerical/Quantitative Data | 1
Education Level
Higher Education | 8
Postsecondary Education | 6
Assessments and Surveys
Test of English as a Foreign… | 16
Graduate Record Examinations | 2
International English… | 1
Praxis Series | 1
Tuc C. Chau – ProQuest LLC, 2023
The purpose of the current dissertation is to map the relationships between first language (L1), writing quality, and syntactic complexity, accuracy, lexical complexity, and fluency (CALF) in second language (L2) writing. CALF are characteristics of language production that have been of significant interest in L2 writing research for the past few…
Descriptors: Correlation, Native Language, Second Language Learning, Second Language Instruction
Monteiro, Kátia R.; Crossley, Scott A.; Kyle, Kristopher – Applied Linguistics, 2020
Lexical items that are encountered more frequently and in varying contexts have important effects on second language (L2) development because frequent and contextually diverse words are learned faster and become more entrenched in a learner's lexicon (Ellis 2002a, b). Despite evidence that L2 learners are generally exposed to non-native input,…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Benchmarking
Tywoniw, Rurik; Crossley, Scott – Language Education & Assessment, 2019
Cohesion features were calculated for a corpus of 960 essays by 480 test-takers from the Test of English as a Foreign Language (TOEFL) in order to examine differences in the use of cohesion devices between integrated (source-based) writing and independent writing samples. Cohesion indices were measured using an automated textual analysis tool, the…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Connected Discourse
Yao, Lili; Haberman, Shelby J.; Zhang, Mo – ETS Research Report Series, 2019
Many assessments of writing proficiency that aid in making high-stakes decisions consist of several essay tasks evaluated by a combination of human holistic scores and computer-generated scores for essay features such as the rate of grammatical errors per word. Under typical conditions, a summary writing score is provided by a linear combination…
Descriptors: Prediction, True Scores, Computer Assisted Testing, Scoring
Abdi Tabari, Mahmoud – Reading & Writing Quarterly, 2020
This study investigated the effects of strategic planning and task structure (personal, narrative, and decision-making tasks) on L2 writing outcomes. One hundred and twenty intermediate English as a foreign language learners were randomly divided into strategic-planning and no-planning-time groups. The strategic-planning group performed the three…
Descriptors: Decision Making, Second Language Learning, Second Language Instruction, Writing Instruction
Plakans, Lia; Gebril, Atta; Bilki, Zeynep – Language Testing, 2019
The present study investigates integrated writing assessment performances with regard to the linguistic features of complexity, accuracy, and fluency (CAF). Given the increasing presence of integrated tasks in large-scale and classroom assessments, validity evidence is needed for the claim that their scores reflect targeted language abilities.…
Descriptors: Accuracy, Language Tests, Scores, Writing Evaluation
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of "TOEFL iBT"® independent and integrated tasks. In this study we explored the psychometric added value of reporting four trait scores for each of these two tasks, beyond the total e-rater score.The four trait scores are word choice, grammatical…
Descriptors: Writing Tests, Scores, Language Tests, English (Second Language)
Kalali, Nazanin Naderi; Pishkar, Kian – Advances in Language and Literary Studies, 2015
The main thrust of this study was to determine whether genre-based instruction improves the writing proficiency of Iranian EFL learners. To this end, 30 homogeneous Iranian BA learners studying English at Islamic Azad University, Bandar Abbas Branch were selected as the participants of the study through a version of the TOEFL test as the proficiency…
Descriptors: Foreign Countries, Undergraduate Students, Second Language Learning, English (Second Language)
Barkaoui, Khaled – Language Testing, 2014
A major concern with computer-based (CB) tests of second-language (L2) writing is that performance on such tests may be influenced by test-taker keyboarding skills. Poor keyboarding skills may force test-takers to focus their attention and cognitive resources on motor activities (i.e., keyboarding) and, consequently, other processes and aspects of…
Descriptors: Language Tests, Computer Assisted Testing, English (Second Language), Second Language Learning
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Scoring models for the "e-rater"® system were built and evaluated for the "TOEFL"® exam's independent and integrated writing prompts. Prompt-specific and generic scoring models were built, and evaluation statistics, such as weighted kappas, Pearson correlations, standardized differences in mean scores, and correlations with…
Descriptors: Scoring, Prompting, Evaluators, Computer Software
Tabatabaei, Omid; Assefi, Farzaneh – English Language Teaching, 2012
Writing has received a great deal of attention, not only because it plays a significant role in transforming knowledge and learning but also because it fosters creativity; when acquiring a particular language skill is seen as important, its assessment becomes important as well, and writing is no exception. This study intended to investigate…
Descriptors: English (Second Language), Second Language Learning, Portfolios (Background Materials), Writing Evaluation
Yang, Hui-Chun; Plakans, Lia – TESOL Quarterly: A Journal for Teachers of English to Speakers of Other Languages and of Standard English as a Second Dialect, 2012
Integrated writing tasks that involve different language modalities such as reading and listening have increasingly been used as means to assess academic writing. Thus, there is a need for understanding how test-takers coordinate different skills to complete these tasks. This study explored second language writers' strategy use and its…
Descriptors: Writing Evaluation, Writing Strategies, Structural Equation Models, Second Language Learning
Lee, Yong-Won; Gentile, Claudia; Kantor, Robert – Applied Linguistics, 2010
The main purpose of the study was to investigate the distinctness and reliability of analytic (or multi-trait) rating dimensions and their relationships to holistic scores and "e-rater"® essay feature variables in the context of the TOEFL® computer-based test (TOEFL CBT) writing assessment. Data analyzed in the study were holistic…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
Breland, Hunter; Lee, Yong-Won; Najarian, Michelle; Muraki, Eiji – Educational Testing Service, 2004
This investigation of the comparability of writing assessment prompts was conducted in two phases. In an exploratory Phase I, 47 writing prompts administered in the computer-based Test of English as a Foreign Language™ (TOEFL® CBT) from July through December 1998 were examined. Logistic regression procedures were used to estimate prompt…
Descriptors: Writing Evaluation, Quality Control, Gender Differences, Writing Tests