Toker, Deniz – TESL-EJ, 2019
The central purpose of this paper is to examine validity problems arising from the multiple-choice items and technical passages in the Test of English as a Foreign Language Internet-based Test (TOEFL iBT) reading section, primarily concentrating on construct-irrelevant variance (Messick, 1989). My personal TOEFL iBT experience, along with my…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Computer Assisted Testing
Bridgeman, Brent – Educational Measurement: Issues and Practice, 2016
Scores on essay-based assessments that are part of standardized admissions tests are typically given relatively little weight in admissions decisions compared to the weight given to scores from multiple-choice assessments. Evidence is presented to suggest that more weight should be given to these assessments. The reliability of the writing scores…
Descriptors: Multiple Choice Tests, Scores, Standardized Tests, Comparative Analysis
Coombe, Christine; Davidson, Peter – Language Testing, 2014
The Common Educational Proficiency Assessment (CEPA) is a large-scale, high-stakes, English language proficiency/placement test administered in the United Arab Emirates to Emirati nationals in their final year of secondary education or Grade 12. The purpose of the CEPA is to place students into English classes at the appropriate government…
Descriptors: Language Tests, High Stakes Tests, English (Second Language), Second Language Learning
Coleman, Chris; Lindstrom, Jennifer; Nelson, Jason; Lindstrom, William; Gregg, K. Noel – Journal of Learning Disabilities, 2010
The comprehension section of the "Nelson-Denny Reading Test" (NDRT) is widely used to assess the reading comprehension skills of adolescents and adults in the United States. In this study, the authors explored the content validity of the NDRT Comprehension Test (Forms G and H) by asking university students (with and without at-risk…
Descriptors: Reading Comprehension, Reading Difficulties, Reading Tests, Content Validity
Cohen, Andrew D.; Upton, Thomas A. – Language Testing, 2007
This study describes the reading and test-taking strategies that test takers used on the "Reading" section of the "LanguEdge Courseware" (2002) materials developed to familiarize prospective respondents with the new TOEFL. The investigation focused on strategies used to respond to more traditional "single selection"…
Descriptors: Courseware, Language Tests, Test Wiseness, Language Teachers
Yamamoto, Kentaro – 1995
The traditional indicator of test speededness, missing responses, clearly indicates a lack of time to respond (thereby indicating the speededness of the test), but it is inadequate for evaluating speededness in a multiple-choice test scored as number correct, and it underestimates test speededness. Conventional item response theory (IRT) parameter…
Descriptors: Ability, Estimation (Mathematics), Item Response Theory, Multiple Choice Tests