Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 1
  Since 2016 (last 10 years): 1
  Since 2006 (last 20 years): 3
Descriptor
  Difficulty Level: 5
  Item Response Theory: 5
  Writing Tests: 5
  Test Items: 3
  Evaluation Criteria: 2
  Evaluators: 2
  Language Tests: 2
  Rating Scales: 2
  Regression (Statistics): 2
  Testing Programs: 2
  Academic Achievement: 1
Author
  Broer, Markus: 1
  Brunfaut, Tineke: 1
  Engelhard, George, Jr.: 1
  Goodman, Joshua: 1
  Gyagenda, Ismail S.: 1
  Harsch, Claudia: 1
  Lee, Yong-Won: 1
  Lestari, Santi B.: 1
  Meyers, Jason L.: 1
  Murphy, Stephen: 1
  Powers, Don: 1
Publication Type
  Reports - Research: 5
  Journal Articles: 3
  Speeches/Meeting Papers: 2
  Numerical/Quantitative Data: 1
Education Level
  Elementary Secondary Education: 1
  Grade 8: 1
  Grade 9: 1
  Grade 10: 1
  Higher Education: 1
  Postsecondary Education: 1
  Secondary Education: 1
Location
  Germany: 1
Assessments and Surveys
  Graduate Record Examinations: 1
Lestari, Santi B.; Brunfaut, Tineke – Language Testing, 2023
Assessing integrated reading-into-writing task performances is known to be challenging, and analytic rating scales have been found to better facilitate the scoring of these performances than other common types of rating scales. However, little is known about how specific operationalizations of the reading-into-writing construct in analytic rating…
Descriptors: Reading Writing Relationship, Writing Tests, Rating Scales, Writing Processes
Harsch, Claudia; Rupp, Andre Alexander – Language Assessment Quarterly, 2011
The "Common European Framework of Reference" (CEFR; Council of Europe, 2001) provides a competency model that is increasingly used as a point of reference to compare language examinations. Nevertheless, aligning examinations to the CEFR proficiency levels remains a challenge. In this article, we propose a new, level-centered approach to…
Descriptors: Language Tests, Writing Tests, Test Construction, Test Items
Meyers, Jason L.; Murphy, Stephen; Goodman, Joshua; Turhan, Ahmet – Pearson, 2012
Operational testing programs employing item response theory (IRT) applications benefit from the property of item parameter invariance, whereby item parameter estimates obtained from one sample can be applied to other samples (when the underlying assumptions are satisfied). In theory, this feature allows for applications such as computer-adaptive…
Descriptors: Equated Scores, Test Items, Test Format, Item Response Theory
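The invariance property this abstract relies on is easy to see concretely. Below is a minimal sketch assuming a 2PL model with hypothetical parameter values (the abstract does not specify a model): item parameters calibrated on one sample yield response probabilities for examinees from any other sample, which is what makes pre-equating and adaptive item selection workable.

```python
import numpy as np

def p_correct_2pl(theta, a, b):
    # 2PL item response function: P(X = 1 | theta) = 1 / (1 + exp(-a * (theta - b)))
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Item parameters estimated from a calibration sample (hypothetical values).
a_hat, b_hat = 1.2, -0.5  # discrimination, difficulty

# Under parameter invariance, the same estimates carry over to new
# examinees drawn from a different sample.
new_examinees = np.array([-1.0, 0.0, 1.5])
print(p_correct_2pl(new_examinees, a_hat, b_hat))
```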
Gyagenda, Ismail S.; Engelhard, George, Jr. – 1998
The purpose of this study was to describe the Rasch model for measurement and apply the model to examine the relationship between raters, domains of written compositions, and student writing ability. Twenty raters were randomly selected from a group of 87 operational raters contracted to rate essays as part of the 1993 field test of the Georgia…
Descriptors: Difficulty Level, Essay Tests, Evaluators, High School Students
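Analyses that jointly model raters, domains, and examinee ability, as this study does, typically use the many-facet extension of the Rasch model, in which the log-odds of a higher rather than a lower score decompose additively across facets. A minimal sketch with hypothetical logit estimates (the paper's exact specification may differ):

```python
import numpy as np

def facets_logit(ability, domain_difficulty, rater_severity):
    # Many-facet Rasch model: log-odds of a higher score decompose
    # additively into examinee, domain, and rater facets.
    return ability - domain_difficulty - rater_severity

def prob(logit):
    return 1.0 / (1.0 + np.exp(-logit))

# Hypothetical logit estimates: a strong writer (+1.5) scored on a hard
# domain (0.8) by a severe rater (0.6) nets a logit of only 0.1.
print(prob(facets_logit(1.5, 0.8, 0.6)))  # ~0.52
```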
Broer, Markus; Lee, Yong-Won; Rizavi, Saba; Powers, Don – ETS Research Report Series, 2005
Three polytomous DIF detection techniques--the Mantel test, logistic regression, and polySTAND--were used to identify GRE® Analytical Writing prompts ("Issue" and "Argument") that are differentially difficult for (a) female test takers; (b) African American, Asian, and Hispanic test takers; and (c) test takers whose strongest…
Descriptors: Culture Fair Tests, Item Response Theory, Test Items, Cues
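Of the three techniques named in this abstract, logistic regression is the simplest to illustrate: it asks whether group membership predicts the item response after conditioning on ability. A minimal sketch on simulated dichotomous data (the report analyzes polytomously scored prompts, so its actual models are ordinal; statsmodels and all values here are assumptions):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, size=n)  # 0 = reference group, 1 = focal group
theta = rng.normal(size=n)          # matching variable (ability)

# Simulated item with uniform DIF: equally able focal-group members
# find the item slightly harder (hypothetical effect size).
true_logit = 1.0 * theta - 0.4 * group
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(int)

# DIF test: regress the response on ability and group membership.
X = sm.add_constant(np.column_stack([theta, group]))
fit = sm.Logit(y, X).fit(disp=0)
print(fit.params)   # coefficient on group estimates the uniform DIF effect
print(fit.pvalues)
```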