Albano, Anthony D. – Journal of Educational Measurement, 2013
In many testing programs it is assumed that the context or position in which an item is administered does not have a differential effect on examinee responses to the item. Violations of this assumption may bias item response theory estimates of item and person parameters. This study examines the potentially biasing effects of item position. A…
Descriptors: Test Items, Item Response Theory, Test Format, Questioning Techniques
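The position effect this abstract describes can be pictured with a toy model. The following is a minimal sketch, assuming a Rasch (1PL) item response model in which appearing later in the form adds a hypothetical difficulty shift `delta` per position step; the values are illustrative and not taken from the study.

```python
import numpy as np

def rasch_prob(theta, b):
    """P(correct) under the Rasch (1PL) model: ability theta, difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def rasch_prob_with_position(theta, b, position, delta=0.05):
    """Hypothetical position effect: the item grows harder by `delta`
    logits per position step, so the same item yields different response
    probabilities depending on where it appears in the form."""
    return rasch_prob(theta, b + delta * position)

theta, b = 0.0, 0.0
print(rasch_prob_with_position(theta, b, position=1))   # item placed early
print(rasch_prob_with_position(theta, b, position=40))  # same item placed late
```

If such a shift is present but the calibration model ignores position, the estimated difficulty absorbs an average of the shifts, which is one way the parameter bias the abstract mentions can arise.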
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Automated scoring models for the "e-rater"® scoring engine were built and evaluated for the "GRE"® argument and issue-writing tasks. Prompt-specific, generic, and generic with prompt-specific intercept scoring models were built, and evaluation statistics such as weighted kappas, Pearson correlations, standardized difference in…
Descriptors: Scoring, Test Scoring Machines, Automation, Models
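The evaluation statistics named in this abstract (weighted kappa and the Pearson correlation between human and engine scores) are straightforward to compute. Below is a minimal sketch in Python with NumPy; the score vectors are invented for illustration, and the standardized-difference line shows one common formulation rather than the exact ETS criterion.

```python
import numpy as np

def quadratic_weighted_kappa(a, b, n_levels):
    """Agreement between two integer rating vectors coded 0..n_levels-1."""
    a, b = np.asarray(a), np.asarray(b)
    obs = np.zeros((n_levels, n_levels))
    np.add.at(obs, (a, b), 1)                      # observed rating matrix
    idx = np.arange(n_levels)
    w = (idx[:, None] - idx[None, :]) ** 2 / (n_levels - 1) ** 2
    exp = np.outer(obs.sum(1), obs.sum(0)) / obs.sum()  # chance agreement
    return 1.0 - (w * obs).sum() / (w * exp).sum()

# Hypothetical human and engine scores on a 0-6 essay scale.
human   = np.array([4, 3, 5, 2, 4, 6, 3, 4])
machine = np.array([4, 3, 4, 2, 5, 6, 3, 3])

print("weighted kappa:", quadratic_weighted_kappa(human, machine, 7))
print("Pearson r:", np.corrcoef(human, machine)[0, 1])
# One common form of the standardized difference: mean gap over human-score SD.
print("std. difference:", (machine.mean() - human.mean()) / human.std(ddof=1))
```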
Mislevy, Robert J.; Steinberg, Linda S.; Almond, Russell G. – 1999
Tasks are the most visible element in an educational assessment. Their purpose, however, is to provide evidence about targets of inference that cannot be directly seen at all: what examinees know and can do, more broadly conceived than can be observed in the context of any particular set of tasks. This paper concerns issues in an assessment design…
Descriptors: Educational Assessment, Evaluation Methods, Higher Education, Models
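The entry's central idea, that observable task performances serve as evidence about latent proficiencies, is the kind of inference a simple Bayesian update can illustrate. The sketch below is a toy illustration, not the authors' framework: it assumes a single binary "mastery" variable and hypothetical response probabilities.

```python
import numpy as np

# Toy evidence model: P(correct | mastery) and P(correct | non-mastery)
# are hypothetical values chosen for illustration.
p_correct = {True: 0.8, False: 0.3}

def update_mastery(prior, responses):
    """Bayes update of P(mastery) from a sequence of 0/1 task outcomes."""
    posterior = prior
    for r in responses:
        like_m = p_correct[True] if r else 1 - p_correct[True]
        like_n = p_correct[False] if r else 1 - p_correct[False]
        num = like_m * posterior
        posterior = num / (num + like_n * (1 - posterior))
    return posterior

print(update_mastery(0.5, [1, 1, 0, 1]))  # evidence accumulates toward mastery
```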