Peer reviewed
Bennett, Randy Elliot; Morley, Mary; Quardt, Dennis; Rock, Donald A.; Singley, Mark K.; Katz, Irvin R.; Nhouyvanisvong, Adisack – Journal of Educational Measurement, 1999
Evaluated a computer-delivered response type for measuring quantitative skill, the "Generating Examples" (GE) response type, which presents under-determined problems that can have many right answers. Results from 257 graduate students and applicants indicate that GE scores are reasonably reliable, but only moderately related to Graduate…
Descriptors: College Applicants, Computer Assisted Testing, Graduate Students, Graduate Study
Bridgeman, Brent; Rock, Donald A. – 1993
Three new computer-administered item types for the analytical scale of the Graduate Record Examination (GRE) General Test were developed and evaluated. One item type was a free-response version of the current analytical reasoning item type. The second item type was a somewhat constrained free-response version of the pattern identification (or…
Descriptors: Adaptive Testing, College Entrance Examinations, College Students, Computer Assisted Testing
Schnipke, Deborah L. – 1995
Time limits on tests often prevent some examinees from finishing all of the items on the test; the extent of this effect has been called the "speededness" of the test. Traditional speededness indices focus on the number of unreached items, and so count only examinees who leave unfinished items blank. Other examinees in the same situation rapidly fill in answers in the hope of getting some of the…
Descriptors: Computer Assisted Testing, Educational Assessment, Evaluation Methods, Guessing (Tests)
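The "unreached items" index mentioned in this abstract is simple to compute. Below is a minimal Python sketch of that traditional index on hypothetical response data; it illustrates the unreached-items idea only, and Schnipke's own analysis goes further by separating rapid guessing from genuine solution behavior.

```python
# Minimal sketch of a traditional speededness index: the proportion of
# examinees who fail to reach each item on a timed test. Illustrative
# only; it does not model the rapid-guessing behavior the study targets.

def unreached_rate(responses, item):
    """Fraction of examinees who never reached `item` (None = unreached)."""
    unreached = sum(1 for row in responses if row[item] is None)
    return unreached / len(responses)

# Hypothetical data: 4 examinees x 5 items; None marks items never reached.
responses = [
    [1, 0, 1, None, None],
    [1, 1, 0, 1, 0],
    [0, 1, None, None, None],
    [1, 1, 1, 1, None],
]

for i in range(5):
    print(f"item {i}: {unreached_rate(responses, i):.2f} unreached")
```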
Powers, Donald E.; Potenza, Maria T. – 1996
The degree to which laptop and standard-size desktop computers are likely to produce comparable test results for the Graduate Record Examination (GRE) General Test was studied. Verbal, quantitative, and writing sections of a retired version of the GRE were used, since it was expected that performance on reading passages or mathematics items might…
Descriptors: College Students, Comparative Analysis, Computer Assisted Testing, Higher Education
Schaeffer, Gary A.; And Others – 1995
This report summarizes the results from two studies. The first assessed the comparability of scores derived from linear computer-based test (CBT) and computer adaptive test (CAT) versions of the three Graduate Record Examinations (GRE) General Test measures. A verbal CAT was taken by 1,507 examinees, a quantitative CAT by 1,354, and an analytical CAT by 995…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Equated Scores
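Comparability studies of this kind rest on score equating. As a hedged illustration, the sketch below implements simple linear (mean-sigma) equating between two hypothetical score samples; the actual ETS studies may have used different equating designs (e.g., equipercentile methods).

```python
# Sketch of linear (mean-sigma) equating between two test forms: map a
# score x on form X onto form Y's scale so equated scores share Y's mean
# and standard deviation. Hypothetical data; illustrative of "equated
# scores" in general, not of the specific design in the report above.

from statistics import mean, stdev

def linear_equate(x, form_x_scores, form_y_scores):
    mx, sx = mean(form_x_scores), stdev(form_x_scores)
    my, sy = mean(form_y_scores), stdev(form_y_scores)
    return my + sy * (x - mx) / sx

# Hypothetical score samples from the two delivery modes.
cbt = [480, 520, 550, 600, 640]
cat = [500, 530, 560, 610, 660]
print(round(linear_equate(550, cbt, cat)))  # a CBT 550 on the CAT scale
```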
Schaeffer, Gary A.; And Others – 1993
This report contains results of a field test conducted to determine the relationship between a Graduate Record Examinations (GRE) linear computer-based test (CBT) and a paper-and-pencil (P&P) test with the same items. Recent GRE examinees participated in the field test by taking either the CBT or the P&P test. Data from the field test…
Descriptors: Attitudes, College Graduates, Computer Assisted Testing, Equated Scores
Peer reviewed
Morley, Mary; Bridgeman, Brent; Lawless, René – ETS Research Report Series, 2004
This study investigated the transfer of solution strategies between close variants of quantitative reasoning questions. Pre- and posttests were obtained from 406 college undergraduates, all of whom took the same posttest; pretests varied such that one group of participants saw close variants of one set of posttest items while other groups saw…
Descriptors: Test Items, Mathematics Tests, Problem Solving, Pretests Posttests
Sebrechts, Marc M.; And Others – 1991
This study evaluated agreement between expert system and human scores on 12 algebra word problems taken by Graduate Record Examinations (GRE) General Test examinees from a general sample of 285 and a study sample of 30. Problems were drawn from three content classes (rate x time, work, and interest) and presented in four constructed-response…
Descriptors: Algebra, Automation, College Students, Computer Assisted Testing
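Studies of automated scoring typically summarize agreement between machine and human scores. The sketch below shows two common agreement statistics (exact-agreement rate and Pearson correlation) on hypothetical partial-credit scores; the snippet does not specify which statistics Sebrechts et al. actually reported.

```python
# Sketch of two common agreement summaries for automated vs. human
# scoring. Hypothetical 0-5 partial-credit scores; illustrative of the
# general evaluation approach, not the report's exact method.

from statistics import correlation  # requires Python 3.10+

def exact_agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

expert_system = [5, 4, 3, 5, 2, 1, 4, 3]
human         = [5, 4, 2, 5, 2, 2, 4, 3]
print(f"exact agreement: {exact_agreement(expert_system, human):.2f}")
print(f"correlation:     {correlation(expert_system, human):.2f}")
```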
Bennett, Randy Elliot; Sebrechts, Marc M. – 1994
This study evaluated expert system diagnoses of examinees' solutions to complex constructed-response algebra word problems. Problems were presented to three samples (30 college students each), each of which had taken the Graduate Record Examinations General Test. One sample took the problems in paper-and-pencil form and the other two on computer.…
Descriptors: Algebra, Automation, Classification, College Entrance Examinations
Eignor, Daniel R.; And Others – 1993
The extensive computer simulation work done in developing the computer adaptive versions of the Graduate Record Examinations (GRE) Board General Test and the College Board Admissions Testing Program (ATP) Scholastic Aptitude Test (SAT) is described in this report. Both the GRE General and SAT computer adaptive tests (CATs), which are fixed length…
Descriptors: Adaptive Testing, Algorithms, Case Studies, College Entrance Examinations
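The simulations described here follow the basic adaptive-testing loop: re-estimate ability after each response, then administer the most informative remaining item. The sketch below implements that loop for a fixed-length CAT under the Rasch model with hypothetical item difficulties; operational GRE and SAT CATs add content balancing and item-exposure controls not shown here.

```python
import math, random

# Hedged sketch of a fixed-length computer adaptive test (CAT) under the
# Rasch model: after each response, re-estimate ability by maximum
# likelihood on a grid, then give the unused item with the highest
# Fisher information (P(1-P) for Rasch) at the current estimate.

def p_correct(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def mle_theta(items, answers, grid=None):
    grid = grid or [g / 10 for g in range(-40, 41)]
    def loglik(t):
        return sum(math.log(p_correct(t, b)) if u else math.log(1 - p_correct(t, b))
                   for b, u in zip(items, answers))
    return max(grid, key=loglik)

def simulate_cat(true_theta, bank, length=10):
    theta, asked, answers, used = 0.0, [], [], set()
    for _ in range(length):
        j = max((i for i in range(len(bank)) if i not in used),
                key=lambda i: p_correct(theta, bank[i]) * (1 - p_correct(theta, bank[i])))
        used.add(j)
        u = random.random() < p_correct(true_theta, bank[j])  # simulated response
        asked.append(bank[j]); answers.append(u)
        theta = mle_theta(asked, answers)
    return theta

random.seed(1)
bank = [b / 4 for b in range(-12, 13)]   # hypothetical difficulty bank
print(simulate_cat(true_theta=1.0, bank=bank))
```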
Lin, Miao-Hsiang – 1986
Specific questions addressed in this study include how time limits affect a test's construct and predictive validities, how time limits affect an examinee's time allocation and test performance, and whether the assumption about how examinees answer items is valid. Interactions involving an examinee's sex and age are studied. Two parallel forms of…
Descriptors: Age Differences, Computer Assisted Testing, Construct Validity, Difficulty Level
Peer reviewed
Gu, Lixiong; Drake, Samuel; Wolfe, Edward W. – Journal of Technology, Learning, and Assessment, 2006
This study seeks to determine whether item features are related to observed differential item functioning (DIF) between computer- and paper-based test delivery media. Examinees responded to 60 quantitative items similar to those found on the GRE General Test in either a computer-based or paper-based medium. Thirty-eight percent of the items were…
Descriptors: Test Bias, Test Items, Educational Testing, Student Evaluation
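DIF screening of the kind reported here is commonly done with the Mantel-Haenszel procedure: within matched ability strata, compare the odds of a correct response across groups (here, the two delivery media). The sketch below computes the MH common odds ratio from hypothetical 2x2 tables; the snippet does not say which DIF method Gu, Drake, and Wolfe actually applied.

```python
# Sketch of Mantel-Haenszel DIF screening: within each ability stratum
# (e.g., total-score band), tabulate correct/incorrect counts for the
# reference group (paper) and focal group (computer), then pool into a
# common odds ratio. Values far from 1 suggest the item functions
# differently across media. Hypothetical data, standard MH formula.

def mh_odds_ratio(strata):
    """strata: list of (ref_correct, ref_wrong, foc_correct, foc_wrong)."""
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

# Hypothetical 2x2 tables for three score bands.
strata = [(40, 10, 30, 20), (35, 15, 28, 22), (20, 30, 12, 38)]
print(f"alpha_MH = {mh_odds_ratio(strata):.2f}")
```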
Bennett, Randy Elliot; And Others – 1995
Two computer-based categorization tasks were developed and pilot tested. In study 1, the task asked examinees to sort mathematical word problem stems according to prototypes. Results with 9 faculty members and 107 undergraduates showed that those who sorted well tended to have higher Graduate Record Examination General Test scores and college…
Descriptors: Admission (School), Classification, College Entrance Examinations, College Faculty
Peer reviewed
Bridgeman, Brent – Journal of Educational Measurement, 1992
Examinees in a regular administration of the quantitative portion of the Graduate Record Examination responded to particular items in a machine-scannable multiple-choice format. Volunteers (n=364) used a computer to answer open-ended counterparts of these items. Scores for both formats demonstrated similar correlational patterns. (SLD)
Descriptors: Answer Sheets, College Entrance Examinations, College Students, Comparative Testing
Carlson, Sybil B.; Camp, Roberta – 1985
This paper reports on Educational Testing Service research studies investigating the parameters critical to reliability and validity in both the direct and indirect writing ability assessment of higher education applicants. The studies involved: (1) formulating an operational definition of writing competence; (2) designing and pretesting writing…
Descriptors: College Entrance Examinations, Computer Assisted Testing, English (Second Language), Essay Tests