Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 1
Since 2016 (last 10 years) | 3
Since 2006 (last 20 years) | 11
Descriptor
Comparative Analysis | 16
Scoring | 16
English (Second Language) | 14
Language Tests | 14
Second Language Learning | 11
Foreign Countries | 6
Computer Assisted Testing | 5
Evaluators | 5
Scores | 5
Correlation | 4
Essay Tests | 4
Source
ETS Research Report Series | 3
JALT CALL Journal | 2
Language Assessment Quarterly | 2
Advances in Language and Literary Studies | 1
Applied Measurement in Education | 1
Grantee Submission | 1
Language Learning | 1
Language Testing | 1
TESL Canada Journal | 1
Author
Attali, Yigal | 4
Xi, Xiaoming | 2
Allen, Laura K. | 1
Ashwell, Tim | 1
Bridgeman, Brent | 1
Burstein, Jill | 1
Crossley, Scott A. | 1
Des Brisay, Margaret | 1
Elam, Jesse R. | 1
Golub-Smith, Marna | 1
Guo, Liang | 1
Publication Type
Journal Articles | 13
Reports - Research | 11
Reports - Evaluative | 4
Tests/Questionnaires | 3
Speeches/Meeting Papers | 1
Education Level
Higher Education | 4
Postsecondary Education | 4
Elementary Education | 2
Grade 6 | 1
Grade 7 | 1
Grade 8 | 1
Grade 9 | 1
Grade 10 | 1
Grade 11 | 1
Grade 12 | 1
High Schools | 1
Assessments and Surveys
Test of English as a Foreign Language | 16
Graduate Management Admission Test | 1
Graduate Record Examinations | 1
Test of Written English | 1
Hannah, L.; Kim, H.; Jang, E. E. – Language Assessment Quarterly, 2022
As a branch of artificial intelligence, automated speech recognition (ASR) technology is increasingly used to detect speech, process it to text, and derive the meaning of natural language for various learning and assessment purposes. ASR inaccuracy may pose serious threats to valid score interpretations and fair score use for all when it is…
Descriptors: Task Analysis, Artificial Intelligence, Speech Communication, Audio Equipment
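The validity concern in this entry hinges on how ASR inaccuracy is quantified. As a purely illustrative aside (not drawn from the study itself), word error rate is the metric most commonly used for this; a minimal Python sketch:

```python
# Minimal word error rate (WER) sketch: Levenshtein distance over word tokens.
# Illustrative only; not the metric implementation used in the cited study.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Dynamic-programming table for word-level edit distance.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / max(len(ref), 1)

# One substitution plus one deletion against a six-word reference: WER = 2/6.
print(round(wer("the cat sat on the mat", "the cat sit on mat"), 2))  # 0.33
```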
Heidari, Jamshid; Khodabandeh, Farzaneh; Soleimani, Hassan – JALT CALL Journal, 2018
The emergence of computer technology in English language teaching has paved the way for teachers' application of Mobile Assisted Language Learning (MALL) and its advantages in teaching. This study aimed to compare the effectiveness of face-to-face instruction with Telegram mobile instruction. Based on a TOEFL test, 60 English foreign language…
Descriptors: Comparative Analysis, Conventional Instruction, Teaching Methods, Computer Assisted Instruction
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of "TOEFL iBT"® independent and integrated tasks. In this study we explored the psychometric added value of reporting four trait scores for each of these two tasks, beyond the total e-rater score.The four trait scores are word choice, grammatical…
Descriptors: Writing Tests, Scores, Language Tests, English (Second Language)
Ashwell, Tim; Elam, Jesse R. – JALT CALL Journal, 2017
The ultimate aim of our research project was to use the Google Web Speech API to automate scoring of elicited imitation (EI) tests. However, in order to achieve this goal, we had to take a number of preparatory steps. We needed to assess how accurate this speech recognition tool is in recognizing native speakers' production of the test items; we…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Language Tests
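Ashwell and Elam's goal, automating elicited imitation (EI) scoring from a recognizer transcript, can be pictured with a toy scoring rule. The sketch below assumes the Google Web Speech API transcript is already in hand (the API call is omitted), and the proportion-of-target-words rule is a hypothetical stand-in, not the authors' rubric:

```python
# Hypothetical elicited-imitation (EI) scoring sketch.
# Assumes an ASR transcript is already available; the scoring rule is
# illustrative and is not the scheme used in the cited study.
from difflib import SequenceMatcher

def ei_score(target: str, transcript: str) -> float:
    """Share of target words the transcript reproduces in order."""
    target_words = target.lower().split()
    heard_words = transcript.lower().split()
    matcher = SequenceMatcher(a=target_words, b=heard_words)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / len(target_words)

# Example item with a (made-up) recognizer output that drops one inflection.
print(ei_score("She has lived in Tokyo for ten years",
               "she has live in tokyo for ten years"))  # 0.875
```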
Wei, Jing; Llosa, Lorena – Language Assessment Quarterly, 2015
This article reports on an investigation of the role raters' language background plays in raters' assessment of test takers' speaking ability. Specifically, this article examines differences between American and Indian raters in their scores and scoring processes when rating Indian test takers' responses to the Test of English as a Foreign…
Descriptors: North Americans, Indians, Evaluators, English (Second Language)
Crossley, Scott A.; Kyle, Kristopher; Allen, Laura K.; Guo, Liang; McNamara, Danielle S. – Grantee Submission, 2014
This study investigates the potential for linguistic microfeatures related to length, complexity, cohesion, relevance, topic, and rhetorical style to predict L2 writing proficiency. Computational indices were calculated by two automated text analysis tools (Coh-Metrix and the Writing Assessment Tool) and used to predict human essay ratings in a…
Descriptors: Computational Linguistics, Essays, Scoring, Writing Evaluation
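The general pipeline described in this entry (compute text indices, then predict human ratings from them) can be sketched generically. The two features below are crude stand-ins for Coh-Metrix / Writing Assessment Tool indices, the essays and ratings are invented, and the least-squares fit is not the study's model:

```python
# Generic sketch: predict essay ratings from text-derived indices.
# Features, essays, and ratings are all hypothetical placeholders.
import numpy as np

def features(essay: str) -> list[float]:
    words = essay.split()
    return [float(len(words)),                                # length index
            sum(len(w) for w in words) / max(len(words), 1)]  # mean word length as a crude sophistication proxy

essays = ["short simple essay about my family",
          "a considerably longer essay discussing multifaceted sociolinguistic phenomena in some detail",
          "another brief text"]
human_ratings = np.array([2.0, 5.0, 1.5])        # hypothetical rater scores

X = np.column_stack([np.ones(len(essays)), [features(e) for e in essays]])
coef, *_ = np.linalg.lstsq(X, human_ratings, rcond=None)
print("predicted ratings:", np.round(X @ coef, 2))
```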
Bridgeman, Brent; Trapani, Catherine; Attali, Yigal – Applied Measurement in Education, 2012
Essay scores generated by machine and by human raters are generally comparable; that is, they can produce scores with similar means and standard deviations, and machine scores generally correlate as highly with human scores as scores from one human correlate with scores from another human. Although human and machine essay scores are highly related…
Descriptors: Scoring, Essay Tests, College Entrance Examinations, High Stakes Tests
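The comparability claims in this abstract (similar means and standard deviations, human-machine correlations on par with human-human ones) are easy to illustrate with descriptive statistics. The scores below are invented; this shows the kind of summary being described, not the study's data:

```python
# Descriptive comparison of human and machine essay scores on made-up data.
import statistics

human_1 = [3, 4, 5, 2, 4, 3, 5, 4]   # first human rater (hypothetical)
human_2 = [3, 4, 4, 2, 5, 3, 5, 4]   # second human rater (hypothetical)
machine = [3, 4, 5, 2, 4, 4, 5, 4]   # automated score (hypothetical)

for name, scores in [("human 1", human_1), ("human 2", human_2), ("machine", machine)]:
    print(f"{name}: mean={statistics.mean(scores):.2f}, sd={statistics.stdev(scores):.2f}")

# Human-human vs. human-machine agreement (Pearson r; Python 3.10+).
print("r(human 1, human 2) =", round(statistics.correlation(human_1, human_2), 2))
print("r(human 1, machine) =", round(statistics.correlation(human_1, machine), 2))
```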
Kalali, Nazanin Naderi; Pishkar, Kian – Advances in Language and Literary Studies, 2015
The main thrust of this study was to determine whether genre-based instruction improves the writing proficiency of Iranian EFL learners. To this end, 30 homogeneous Iranian BA learners studying English at Islamic Azad University, Bandar Abbas Branch were selected as the participants of the study through a version of the TOEFL test as the proficiency…
Descriptors: Foreign Countries, Undergraduate Students, Second Language Learning, English (Second Language)
Xi, Xiaoming; Higgins, Derrick; Zechner, Klaus; Williamson, David – Language Testing, 2012
This paper compares two alternative scoring methods--multiple regression and classification trees--for an automated speech scoring system used in a practice environment. The two methods were evaluated on two criteria: construct representation and empirical performance in predicting human scores. The empirical performance of the two scoring models…
Descriptors: Scoring, Classification, Weighted Scores, Comparative Analysis
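The methodological contrast in this entry, multiple regression versus tree-based scoring, can be mocked up on synthetic data. A scikit-learn regression tree stands in below for the paper's classification-tree approach, and the features and scores are invented, so this is a sketch of the comparison rather than the operational system:

```python
# Sketch of the two scoring approaches on synthetic data: a multiple-regression
# model vs. a decision tree, each predicting human speaking scores from
# hypothetical delivery/language features.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                     # e.g., fluency, pronunciation, vocabulary measures
human = 2.5 + X @ np.array([0.8, 0.5, 0.3]) + rng.normal(scale=0.4, size=200)

models = {"multiple regression": LinearRegression().fit(X, human),
          "decision tree": DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, human)}

# Empirical performance criterion from the abstract: agreement with human scores.
for name, model in models.items():
    r = np.corrcoef(model.predict(X), human)[0, 1]
    print(f"{name}: r with human scores = {r:.2f}")
```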
Xi, Xiaoming; Mollaun, Pam – Language Learning, 2011
We investigated the scoring of the Speaking section of the Test of English as a Foreign Language™ Internet-based (TOEFL iBT®) test by speakers of English and one or more Indian languages. We explored the extent to which raters from India, after being trained and certified, were able to score the TOEFL examinees with mixed first languages…
Descriptors: Speech Communication, Scoring, Foreign Countries, English (Second Language)
Golub-Smith, Marna; And Others – 1993
The Test of Written English (TWE), administered with certain designated examinations of the Test of English as a Foreign Language (TOEFL), consists of a single essay prompt to which examinees have 30 minutes to respond. Questions have been raised about the comparability of different TWE prompts. This study was designed to elicit essays for prompts…
Descriptors: Charts, Comparative Analysis, English (Second Language), Essay Tests
Spolsky, Bernard – 1990
A discussion of the differences between the Test of English as a Foreign Language (TOEFL), an American test battery, and the Cambridge English Examinations (Cambridge), a British battery, focuses on the different approaches to language test development embodied in the tests as the source of difficulty in translating between them for individual…
Descriptors: Comparative Analysis, Cultural Differences, English (Second Language), Foreign Countries
Attali, Yigal – ETS Research Report Series, 2007
This study examined the construct validity of the "e-rater"® automated essay scoring engine as an alternative to human scoring in the context of TOEFL® essay writing. Analyses were based on a sample of students who repeated the TOEFL within a short time period. Two "e-rater" scores were investigated in this study, the first…
Descriptors: Construct Validity, Computer Assisted Testing, Scoring, English (Second Language)
Henning, Grant; And Others – 1995
A prototype revised form of the Test of Spoken English (TSE) was compared with the current version of the same test on interrater reliability, frequency of rater discrepancy at all score levels, component task adequacy, scoring efficacy, and other concurrent and construct validity evidence, including the oral proficiency interview…
Descriptors: Adults, College Students, Comparative Analysis, English (Second Language)
Attali, Yigal; Burstein, Jill – ETS Research Report Series, 2005
The e-rater® system has been used by ETS for automated essay scoring since 1999. This paper describes a new version of e-rater (v.2.0) that differs from the previous one (v.1.3) with regard to the feature set and model building approach. The paper describes the new version, compares the new and previous versions in terms of performance, and…
Descriptors: Essay Tests, Automation, Scoring, Comparative Analysis