Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 1 |
Since 2006 (last 20 years) | 3 |
Author
Bridgeman, Brent | 2 |
Albano, Anthony D. | 1 |
Attali, Yigal | 1 |
Austin, Neale W. | 1 |
Breyer, F. Jay | 1 |
Rupp, André A. | 1 |
Schnipke, Deborah L. | 1 |
Trapani, Catherine | 1 |
Publication Type
Journal Articles | 3 |
Reports - Research | 3 |
Reports - Evaluative | 1 |
Speeches/Meeting Papers | 1 |
Education Level
Higher Education | 3 |
Postsecondary Education | 1 |
Assessments and Surveys
Graduate Record Examinations | 5 |
Advanced Placement… | 1 |
Test of English as a Foreign… | 1 |
Breyer, F. Jay; Rupp, André A.; Bridgeman, Brent – ETS Research Report Series, 2017
In this research report, we present an empirical argument for the use of a contributory scoring approach for the 2-essay writing assessment of the analytical writing section of the "GRE"® test in which human and machine scores are combined for score creation at the task and section levels. The approach was designed to replace a currently…
Descriptors: College Entrance Examinations, Scoring, Essay Tests, Writing Evaluation
Albano, Anthony D. – Journal of Educational Measurement, 2013
In many testing programs it is assumed that the context or position in which an item is administered does not have a differential effect on examinee responses to the item. Violations of this assumption may bias item response theory estimates of item and person parameters. This study examines the potentially biasing effects of item position. A…
Descriptors: Test Items, Item Response Theory, Test Format, Questioning Techniques
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
Austin, Neale W. – 1969
Emphasized in this speech are the innovative practices in the standardized foreign language testing programs sponsored by the College Entrance Examination Board (CEEB) and the Modern Language Association (MLA). The CEEB's projected listening and reading "composite tests" and changes in the French and Latin Advanced Placement Tests are…
Descriptors: Achievement Tests, Advanced Placement, College Students, Educational Innovation
Schnipke, Deborah L. – 1995
Time limits on tests often prevent some examinees from finishing all of the items on the test; the extent of this effect has been called the "speededness" of the test. Traditional speededness indices focus on the number of unreached items. Some examinees who run out of time simply stop responding, while other examinees in the same situation rapidly fill in answers in the hope of getting some of the…
Descriptors: Computer Assisted Testing, Educational Assessment, Evaluation Methods, Guessing (Tests)