Showing all 10 results
Peer reviewed
Norris, John M.; Sasayama, Shoko; Kim, Michelle – ETS Research Report Series, 2023
Accomplishing a communication task in the real world requires the ability not only to do the task per se but also to manage aspects of the context in which it occurs. For this reason, simulations of target language use contexts have been incorporated into the design of communicative language tests as a way of enhancing the authenticity of…
Descriptors: Electronic Mail, Writing (Composition), Task Analysis, Student Evaluation
Peer reviewed
Lee, Shinhye – ETS Research Report Series, 2022
In response to the calls for making key stakeholders' perspectives relevant in the test validation process, the study discussed in this report sought test-taker feedback as part of collecting validity evidence and supporting the ongoing field testing efforts of the new "TOEFL ITP"® Speaking section. Specifically, I aimed to investigate…
Descriptors: English (Second Language), Second Language Learning, Language Tests, Test Validity
Peer reviewed
Choi, Ikkyu; Zu, Jiyun – ETS Research Report Series, 2022
Synthetically generated speech (SGS) has become an integral part of our oral communication in a wide variety of contexts. It can be generated instantly at a low cost and allows precise control over multiple aspects of output, all of which can be highly appealing to second language (L2) assessment developers who have traditionally relied upon human…
Descriptors: Test Wiseness, Multiple Choice Tests, Test Items, Difficulty Level
Peer reviewed
Galikyan, Irena; Madyarov, Irshat; Gasparyan, Rubina – ETS Research Report Series, 2019
The broad range of English language teaching and learning contexts present in the world today necessitates high quality assessment instruments that can provide reliable and meaningful information about learners' English proficiency levels to relevant stakeholders. The "TOEFL Junior"® tests were recently introduced by Educational Testing…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Student Attitudes
Peer reviewed
Sheehan, Kathleen M. – ETS Research Report Series, 2017
A model-based approach for matching language learners to texts of appropriate difficulty is described. Results are communicated to test takers via a targeted reading range expressed on the reporting scale of an automated text complexity measurement tool (ATCMT). Test takers can use this feedback to select reading materials that are well matched to…
Descriptors: Reading Ability, Second Language Learning, Difficulty Level, Reading Material Selection
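A minimal sketch of the kind of score-to-text matching the abstract describes, assuming a hypothetical linear mapping from a scaled reading score to a targeted range on an automated text complexity measurement tool (ATCMT) scale; the function names, coefficients, and the 'complexity' field are illustrative assumptions, not the report's model.

# Minimal sketch (not the report's model): turn a reading score into a
# targeted range on a text-complexity scale, then filter a text pool with it.
# The linear mapping and its coefficients are purely illustrative assumptions.
def targeted_reading_range(reading_score, slope=0.8, intercept=100.0, half_width=25.0):
    """Map a scaled reading score to (low, high) bounds on the complexity scale."""
    center = intercept + slope * reading_score
    return center - half_width, center + half_width

def recommend_texts(texts, reading_score):
    """Keep texts whose complexity score falls inside the learner's target range."""
    low, high = targeted_reading_range(reading_score)
    # each text is assumed to be a dict with a 'complexity' field produced by
    # an automated text complexity measurement tool
    return [t for t in texts if low <= t["complexity"] <= high]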
Peer reviewed
Chen, Jing; Sheehan, Kathleen M. – ETS Research Report Series, 2015
The "TOEFL"® family of assessments includes the "TOEFL"® Primary"™, "TOEFL Junior"®, and "TOEFL iBT"® tests. The linguistic complexity of stimulus passages in the reading sections of the TOEFL family of assessments is expected to differ across the test levels. This study evaluates the linguistic…
Descriptors: Language Tests, Second Language Learning, English (Second Language), Reading Comprehension
Peer reviewed
Young, John W.; Morgan, Rick; Rybinski, Paul; Steinberg, Jonathan; Wang, Yuan – ETS Research Report Series, 2013
The "TOEFL Junior"® Standard Test is an assessment that measures the degree to which middle school-aged students learning English as a second language have attained proficiency in the academic and social English skills representative of English-medium instructional environments. The assessment measures skills in three areas: listening…
Descriptors: Item Response Theory, Test Items, Language Tests, Second Language Learning
Peer reviewed
Liao, Chi-Wen; Livingston, Samuel A. – ETS Research Report Series, 2008
Randomly equivalent forms (REF) of tests in listening and reading for nonnative speakers of English were created by stratified random assignment of items to forms, stratifying on item content and predicted difficulty. The study included 50 replications of the procedure for each test. Each replication generated 2 REFs. The equivalence of those 2…
Descriptors: Equated Scores, Item Analysis, Test Items, Difficulty Level
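A minimal sketch, not the authors' procedure, of stratified random assignment of items to two randomly equivalent forms, stratifying on item content and predicted difficulty as the abstract describes; the item fields 'content' and 'predicted_difficulty_band' are assumed names.

# Minimal sketch (assumptions noted above): build two randomly equivalent
# forms by splitting each (content, difficulty) stratum at random.
import random
from collections import defaultdict

def build_random_equivalent_forms(items, seed=None):
    """Split items into two forms, balancing every (content, difficulty) stratum."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for item in items:
        # 'content' and 'predicted_difficulty_band' are hypothetical field names
        strata[(item["content"], item["predicted_difficulty_band"])].append(item)
    form_a, form_b = [], []
    for stratum_items in strata.values():
        rng.shuffle(stratum_items)
        # alternate assignment so each form gets (nearly) half of every stratum
        form_a.extend(stratum_items[0::2])
        form_b.extend(stratum_items[1::2])
    return form_a, form_b

# Example: 50 replications, each producing a fresh pair of forms
# pairs = [build_random_equivalent_forms(item_pool, seed=r) for r in range(50)]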
Peer reviewed
Cohen, Andrew D.; Upton, Thomas A. – ETS Research Report Series, 2006
This study describes the reading and test-taking strategies that test takers used in the Reading section of the LanguEdge courseware (ETS, 2002a). These materials were developed to familiarize prospective respondents with the new TOEFL®. The investigation focused on strategies used to respond to more traditional single-selection multiple-choice…
Descriptors: Reading Tests, Test Items, Courseware, Item Analysis
Peer reviewed
Broer, Markus; Lee, Yong-Won; Rizavi, Saba; Powers, Don – ETS Research Report Series, 2005
Three polytomous DIF detection techniques--the Mantel test, logistic regression, and polySTAND--were used to identify GRE® Analytical Writing prompts ("Issue" and "Argument") that are differentially difficult for (a) female test takers; (b) African American, Asian, and Hispanic test takers; and (c) test takers whose strongest…
Descriptors: Culture Fair Tests, Item Response Theory, Test Items, Cues
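A minimal sketch of one of the three techniques the abstract names, the Mantel test for polytomous DIF, assuming examinees are matched on a stratifying score (for example, a total score); the inputs and matching variable are illustrative, and this is not the report's implementation.

# Minimal sketch (not the report's code): Mantel chi-square for a polytomous item.
import numpy as np

def mantel_dif(item_scores, group, matching_score):
    """Mantel chi-square (1 df) contrasting focal (1) and reference (0) groups.

    item_scores   : array of item (e.g., essay prompt) scores
    group         : array of 0 (reference) / 1 (focal) labels
    matching_score: array used to form matched strata (e.g., total score)
    """
    item_scores = np.asarray(item_scores, dtype=float)
    group = np.asarray(group)
    matching_score = np.asarray(matching_score)

    f_sum = e_sum = v_sum = 0.0
    for s in np.unique(matching_score):
        in_stratum = matching_score == s
        y = item_scores[in_stratum]
        g = group[in_stratum]
        n = y.size
        n_f = int(g.sum())           # focal-group count in the stratum
        n_r = n - n_f                # reference-group count
        if n < 2 or n_f == 0 or n_r == 0:
            continue                 # stratum carries no information
        f_sum += y[g == 1].sum()                     # observed focal score sum
        e_sum += n_f * y.sum() / n                   # expectation under no DIF
        v_sum += (n_r * n_f) / (n**2 * (n - 1)) * (n * (y**2).sum() - y.sum()**2)

    return (f_sum - e_sum) ** 2 / v_sum if v_sum > 0 else 0.0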