Ali Akbar Ariamanesh; Hossein Barati; Manijeh Youhanaee – International Journal of Language Testing, 2023
The present study investigates the efficacy of preparation time in four speaking tasks of TOEFL iBT. As the current pre-task planning time offered by ETS is very short, 15 to 30 seconds, we intended to explore how the test-takers' speaking quality would change if the preparation time was added to the response time, giving the respondents a…
Descriptors: English (Second Language), Second Language Learning, Language Tests, Test Preparation
Omid S. Kalantar – International Journal of Language Testing, 2024
This study sought to identify the challenges and needs of TOEFL iBT candidates in achieving C1 level scores in the speaking and writing sections of the exam. To this end, the researcher employed a mixed-method approach to collect data from a population of 46 students, both male and female, between the ages of 22 and 30. The participants were…
Descriptors: Language Tests, Scores, Native Language, Grammar
Davoodifard, Mahshad – Studies in Applied Linguistics & TESOL, 2022
Over the past 40 years, second language educators and assessors have come to the realization that investigating the process of writing can shed light on language teaching, learning and assessment practices (Odendahl & Deane, 2018). What L2 writers do and think while writing can provide links between the task, the related construct and the…
Descriptors: Writing Processes, Accuracy, Teaching Methods, Writing Instruction
Jones, Daniel Marc; Cheng, Liying; Tweedie, M. Gregory – Canadian Journal of Learning and Technology, 2022
This article reviews recent literature (2011-present) on the automated scoring (AS) of writing and speaking. Its purpose is to first survey the current research on automated scoring of language, then highlight how automated scoring impacts the present and future of assessment, teaching, and learning. The article begins by outlining the general…
Descriptors: Automation, Computer Assisted Testing, Scoring, Writing (Composition)
Ariamanesh, Ali A.; Barati, Hossein; Youhanaee, Manijeh – International TESOL Journal, 2022
The present study investigated the speaking module of TOEFL iBT with an emphasis on the dichotomy of independent and integrated tasks. The potential differences between the two speaking conditions were intended to be explored based on the oral performance elicited from a group of Iranian test takers. To collect the required data, a simulated…
Descriptors: Second Language Learning, English (Second Language), Language Tests, Computer Assisted Testing
Nazerian, Samaneh; Abbasian, Gholam-Reza; Mohseni, Ahmad – Cogent Education, 2021
Despite growing interest in the studies on Zone of Proximal Development (ZPD), its operation in the forms of individualized and group-wide has been controversial. To cast some empirical light on the issue, this study was designed to study the applicability of the two scenarios of ZPD-based instructions to the writing accuracy of two levels of…
Descriptors: Sociocultural Patterns, Second Language Learning, Second Language Instruction, English (Second Language)
Yao, Lili; Haberman, Shelby J.; Zhang, Mo – ETS Research Report Series, 2019
Many assessments of writing proficiency that aid in making high-stakes decisions consist of several essay tasks evaluated by a combination of human holistic scores and computer-generated scores for essay features such as the rate of grammatical errors per word. Under typical conditions, a summary writing score is provided by a linear combination…
Descriptors: Prediction, True Scores, Computer Assisted Testing, Scoring
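The scoring scheme described in this abstract, a summary writing score formed as a linear combination of human holistic scores and computer-generated scores, can be sketched as follows. This is a minimal illustration only; the weights, scales, and function names here are hypothetical, not ETS's operational values or e-rater's actual output.

```python
# Illustrative sketch of a summary writing score computed as a linear
# combination of human holistic essay scores and a machine-generated score.
# Weights (w_human, w_machine) are made-up for demonstration.

def summary_writing_score(human_scores, machine_score,
                          w_human=0.75, w_machine=0.25):
    """Linearly combine the mean human holistic score with a machine score."""
    mean_human = sum(human_scores) / len(human_scores)
    return w_human * mean_human + w_machine * machine_score

# Two essays rated holistically by human raters, plus one machine score:
score = summary_writing_score([4.0, 3.5], 3.8)
print(round(score, 4))  # 0.75 * 3.75 + 0.25 * 3.8 = 3.7625
```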
Frost, Kellie; Clothier, Josh; Huisman, Annemiek; Wigglesworth, Gillian – Language Testing, 2020
Integrated speaking tasks requiring test takers to read and/or listen to stimulus texts and to incorporate their content into oral performances are now used in large-scale, high-stakes tests, including the TOEFL iBT. These tasks require test takers to identify, select, and combine relevant source text information to recognize key relationships…
Descriptors: Discourse Analysis, Scoring Rubrics, Speech Communication, English (Second Language)
Hannah, L.; Kim, H.; Jang, E. E. – Language Assessment Quarterly, 2022
As a branch of artificial intelligence, automated speech recognition (ASR) technology is increasingly used to detect speech, process it to text, and derive the meaning of natural language for various learning and assessment purposes. ASR inaccuracy may pose serious threats to valid score interpretations and fair score use for all when it is…
Descriptors: Task Analysis, Artificial Intelligence, Speech Communication, Audio Equipment
Plakans, Lia; Gebril, Atta; Bilki, Zeynep – Language Testing, 2019
The present study investigates integrated writing assessment performances with regard to the linguistic features of complexity, accuracy, and fluency (CAF). Given the increasing presence of integrated tasks in large-scale and classroom assessments, validity evidence is needed for the claim that their scores reflect targeted language abilities.…
Descriptors: Accuracy, Language Tests, Scores, Writing Evaluation
Ling, Guangming; Mollaun, Pamela; Xi, Xiaoming – Language Testing, 2014
The scoring of constructed responses may introduce construct-irrelevant factors to a test score and affect its validity and fairness. Fatigue is one of the factors that could negatively affect human performance in general, yet little is known about its effects on a human rater's scoring quality on constructed responses. In this study, we compared…
Descriptors: Evaluators, Fatigue (Biology), Scoring, Performance
Ashwell, Tim; Elam, Jesse R. – JALT CALL Journal, 2017
The ultimate aim of our research project was to use the Google Web Speech API to automate scoring of elicited imitation (EI) tests. However, in order to achieve this goal, we had to take a number of preparatory steps. We needed to assess how accurate this speech recognition tool is in recognizing native speakers' production of the test items; we…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Language Tests
Brooks, Lindsay; Swain, Merrill – Language Assessment Quarterly, 2014
In this study we compare test takers' performance on the Speaking section of the TOEFL iBT™ and their performances during their real-life academic studies. Thirty international graduate students from mixed language backgrounds in two different disciplines (Sciences and Social Sciences) responded to two independent and four integrated speaking tasks…
Descriptors: Comparative Analysis, English (Second Language), Second Language Learning, Language Tests
Haberman, Shelby J. – Educational Testing Service, 2011
Alternative approaches are discussed for use of e-rater® to score the TOEFL iBT® Writing test. These approaches involve alternate criteria. In the 1st approach, the predicted variable is the expected rater score of the examinee's 2 essays. In the 2nd approach, the predicted variable is the expected rater score of 2 essay responses by the…
Descriptors: Writing Tests, Scoring, Essays, Language Tests
Davis, Lawrence Edward – ProQuest LLC, 2012
Speaking performance tests typically employ raters to produce scores; accordingly, variability in raters' scoring decisions has important consequences for test reliability and validity. One such source of variability is the rater's level of expertise in scoring. Therefore, it is important to understand how raters' performance is influenced by…
Descriptors: Evaluators, Expertise, Scores, Second Language Learning