Davey, Tim; Lee, Yi-Hsuan – ETS Research Report Series, 2011
Both theoretical and practical considerations have led the revision of the Graduate Record Examinations® (GRE®) General Test, here called the rGRE, to adopt a multistage adaptive design that will be administered continuously or nearly continuously and that can provide immediate score reporting. These circumstances sharply constrain the…
Descriptors: Context Effect, Scoring, Equated Scores, College Entrance Examinations
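
Davey and Lee's abstract describes a multistage design, in which routing between preassembled modules, rather than item-by-item selection, tailors the test. As a minimal sketch only — the stage length, cut fractions, and module names below are invented assumptions, not the rGRE's actual specification — a two-stage router might look like this:

```python
# A two-stage multistage-adaptive routing sketch. The stage length,
# cut fractions, and module names are illustrative assumptions, not
# the rGRE's actual specification.

def route(stage1_correct: int, stage1_length: int = 10) -> str:
    """Choose the second-stage module from the first-stage raw score."""
    fraction = stage1_correct / stage1_length
    if fraction < 0.4:
        return "easy"
    if fraction < 0.7:
        return "medium"
    return "hard"

if __name__ == "__main__":
    for correct in (2, 5, 9):
        print(f"{correct}/10 correct -> {route(correct)} module")
```

Routing on the number-correct score keeps scoring immediate, which is one reason multistage designs suit continuous administration with instant score reporting.
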
Ferdous, Abdullah A.; Plake, Barbara S.; Chang, Shu-Ren – Educational Assessment, 2007
The purpose of this study was to examine the effect of pretest items on response time in an operational, fixed-length, time-limited computerized adaptive test (CAT). These pretest items are embedded within the CAT, but unlike the operational items, are not tailored to the examinee's ability level. If examinees with higher ability levels need less…
Descriptors: Pretests Posttests, Reaction Time, Computer Assisted Testing, Test Items
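
To make the study's question concrete — do high-ability examinees move faster through untailored pretest items than through CAT-selected operational items? — here is a toy simulation under an assumed lognormal response-time model. The baseline time, slope, and difficulty values are hypothetical and are not drawn from Ferdous, Plake, and Chang's data:

```python
# Toy simulation: an assumed lognormal response-time model in which
# items far below an examinee's ability are answered faster. All
# parameter values are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def response_time(theta: float, b: float) -> float:
    """Seconds spent on an item of difficulty b by an examinee at theta."""
    # Less time when the item sits well below ability (theta - b large).
    log_mean = np.log(60.0) - 0.3 * (theta - b)
    return float(rng.lognormal(mean=log_mean, sigma=0.3))

theta = 2.0           # a high-ability examinee
tailored_b = theta    # CAT-selected operational items match ability
pretest_b = 0.0       # embedded pretest items are not tailored

op = [response_time(theta, tailored_b) for _ in range(100)]
pre = [response_time(theta, pretest_b) for _ in range(100)]
print(f"mean time, operational items: {np.mean(op):.1f} s")
print(f"mean time, pretest items:     {np.mean(pre):.1f} s")
```
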

Bennett, Randy Elliot; Steffen, Manfred; Singley, Mark Kevin; Morley, Mary; Jacquemin, Daniel – Journal of Educational Measurement, 1997
Scoring accuracy and item functioning were studied for an open-ended response type in which correct answers can take many different surface forms. Results with 1,864 graduate school applicants showed automated scoring to approximate the accuracy of multiple-choice scoring. Items functioned similarly to other item types being considered. (SLD)
Descriptors: Adaptive Testing, Automation, College Applicants, Computer Assisted Testing
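
Bennett et al. scored open-ended responses whose correct answers take many surface forms. A minimal sketch of that idea, using sympy as an assumed stand-in for the study's (unspecified) scoring engine, is to grant credit whenever the response is mathematically equivalent to the key:

```python
# A sketch of equivalence-based automated scoring: a response earns
# credit if it is mathematically equal to the key, so "1/2", "0.5",
# and "2/4" all match. sympy is an assumed stand-in here; the study's
# actual scoring system is not specified in the abstract.
from sympy import SympifyError, simplify, sympify

def score(response: str, key: str) -> bool:
    """True if response and key simplify to the same value."""
    try:
        return simplify(sympify(response) - sympify(key)) == 0
    except SympifyError:
        return False  # unparseable responses earn no credit

for ans in ("1/2", "0.5", "2/4", "3/4", "half"):
    print(f"{ans!r} -> {score(ans, '1/2')}")
```

Checking equivalence of the simplified difference, rather than comparing strings, is what lets many surface forms of the same answer earn credit.
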