Showing all 7 results
Peer reviewed
PDF on ERIC
Deniz, Kaan Zulfikar; Ilican, Emel – International Journal of Assessment Tools in Education, 2021
This study aims to compare the G and Phi coefficients estimated by D studies for a measurement tool with those obtained from real cases in which items of differing difficulty levels were added, and to determine the conditions under which the D studies estimated reliability coefficients closer to reality. The study group…
Descriptors: Generalizability Theory, Test Items, Difficulty Level, Test Reliability
Peer reviewed
Direct link
Bastianello, Tamara; Brondino, Margherita; Persici, Valentina; Majorano, Marinella – Journal of Research in Childhood Education, 2023
The present contribution aims to present an assessment tool (i.e., the TALK-assessment) built to evaluate the language development and school readiness of Italian preschoolers before they enter primary school, and its predictive validity for the children's reading and writing skills at the end of the first year of primary school. The early…
Descriptors: Literacy, Computer Assisted Testing, Italian, Language Acquisition
Peer reviewed
Direct link
Cho, Yeonsuk; Rijmen, Frank; Novák, Jakub – Language Testing, 2013
This study examined the influence of prompt characteristics on the averages of all scores given to test taker responses on the TOEFL iBT[TM] integrated Read-Listen-Write (RLW) writing tasks for multiple administrations from 2005 to 2009. In the context of TOEFL iBT RLW tasks, the prompt consists of a reading passage and a lecture. To understand…
Descriptors: English (Second Language), Language Tests, Writing Tests, Cues
Peer reviewed
Direct link
Harsch, Claudia; Rupp, Andre Alexander – Language Assessment Quarterly, 2011
The "Common European Framework of Reference" (CEFR; Council of Europe, 2001) provides a competency model that is increasingly used as a point of reference to compare language examinations. Nevertheless, aligning examinations to the CEFR proficiency levels remains a challenge. In this article, we propose a new, level-centered approach to…
Descriptors: Language Tests, Writing Tests, Test Construction, Test Items
Meyers, Jason L.; Murphy, Stephen; Goodman, Joshua; Turhan, Ahmet – Pearson, 2012
Operational testing programs employing item response theory (IRT) applications benefit from the property of item parameter invariance, whereby item parameter estimates obtained from one sample can be applied to other samples (when the underlying assumptions are satisfied). In theory, this feature allows for applications such as computer-adaptive…
Descriptors: Equated Scores, Test Items, Test Format, Item Response Theory
Hamp-Lyons, Liz; Prochnow, Sheila – 1991
This study investigated the effect of writing task topic on learner performance in a second-language writing test, in this case the Michigan English Language Assessment Battery designed to test proficiency in English as a Second Language. The 64 topics or "prompts" used in the test (offered as pairs of options) were categorized according…
Descriptors: Cues, Difficulty Level, English (Second Language), Language Tests
Peer reviewed
PDF on ERIC
Broer, Markus; Lee, Yong-Won; Rizavi, Saba; Powers, Don – ETS Research Report Series, 2005
Three polytomous DIF detection techniques--the Mantel test, logistic regression, and polySTAND--were used to identify GRE® Analytical Writing prompts ("Issue" and "Argument") that are differentially difficult for (a) female test takers; (b) African American, Asian, and Hispanic test takers; and (c) test takers whose strongest…
Descriptors: Culture Fair Tests, Item Response Theory, Test Items, Cues