Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 1
Since 2006 (last 20 years): 3
Descriptor
Comparative Analysis: 7
Computer Assisted Testing: 7
College Students: 3
Higher Education: 3
Test Anxiety: 3
Test Construction: 3
Testing: 3
Writing Tests: 3
College Entrance Examinations: 2
Essays: 2
Graduate Study: 2
Source
Journal of Educational…: 2
Applied Measurement in…: 1
ETS Research Report Series: 1
Journal of Psychoeducational…: 1
Publication Type
Reports - Research: 6
Journal Articles: 5
Reports - Evaluative: 1
Education Level
Higher Education: 3
Postsecondary Education: 2
Assessments and Surveys
Graduate Record Examinations: 7
Flesch Kincaid Grade Level…: 1
Nelson Denny Reading Tests: 1
Woodcock Johnson Tests of…: 1
Buzick, Heather; Oliveri, Maria Elena; Attali, Yigal; Flor, Michael – Applied Measurement in Education, 2016
Automated essay scoring is a developing technology that can provide efficient scoring of large numbers of written responses. Its use in higher education admissions testing provides an opportunity to collect validity and fairness evidence to support current uses and inform its emergence in other areas such as K-12 large-scale assessment. In this…
Descriptors: Essays, Learning Disabilities, Attention Deficit Hyperactivity Disorder, Scoring
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of the argument and issue tasks that form the Analytical Writing measure of the "GRE"® General Test. This study explored the value added of reporting 4 trait scores for each of these 2 tasks over the total e-rater score…
Descriptors: Scores, Computer Assisted Testing, Computer Software, Grammar
Lewandowski, Lawrence; Gathje, Rebecca A.; Lovett, Benjamin J.; Gordon, Michael – Journal of Psychoeducational Assessment, 2013
College students with attention deficit hyperactivity disorder (ADHD) often request and receive extended time to complete high-stakes exams and classroom tests. This study examined the performances and behaviors of college students on computerized simulations of high-stakes exams. Thirty-five college students with ADHD were compared to 185 typical…
Descriptors: Attention Deficit Disorders, Comparative Analysis, Testing, Vocabulary

Powers, Donald E. – Journal of Educational Computing Research, 2001
Tests the hypothesis that the introduction of computer-adaptive testing may help to alleviate test anxiety and diminish the relationship between test anxiety and test performance. Compares a sample of Graduate Record Examinations (GRE) General Test takers who took the computer-adaptive version of the test with another sample who took the…
Descriptors: Comparative Analysis, Computer Assisted Testing, Nonprint Media, Performance

Vogel, Lora Ann – Journal of Educational Computing Research, 1994
Reports on a study conducted to evaluate how individual differences in anxiety levels affect performance on computer versus paper-and-pencil forms of verbal sections of the Graduate Record Examination. Contrary to the research hypothesis, analysis of scores revealed that extroverted and less computer-anxious subjects scored significantly lower on…
Descriptors: Comparative Analysis, Computer Anxiety, Computer Assisted Testing, Computer Attitudes
Powers, Donald E.; Potenza, Maria T. – 1996
The degree to which laptop and standard-size desktop computers are likely to produce comparable test results for the Graduate Record Examination (GRE) General Test was studied. Verbal, quantitative, and writing sections of a retired version of the GRE were used, since it was expected that performance on reading passages or mathematics items might…
Descriptors: College Students, Comparative Analysis, Computer Assisted Testing, Higher Education
Schaeffer, Gary A.; And Others – 1995
This report summarizes the results from two studies. The first assessed the comparability of scores derived from linear computer-based (CBT) and computer-adaptive (CAT) versions of the three Graduate Record Examinations (GRE) General Test measures. A verbal CAT was taken by 1,507 examinees, a quantitative CAT by 1,354, and an analytical CAT by 995…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Equated Scores