| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 3 |
| Since 2007 (last 20 years) | 18 |
Author
| Bridgeman, Brent | 8 |
| Attali, Yigal | 6 |
| Bennett, Randy Elliot | 4 |
| Rock, Donald A. | 4 |
| Schaeffer, Gary A. | 3 |
| Sebrechts, Marc M. | 3 |
| Breyer, F. Jay | 2 |
| Carlson, Sybil B. | 2 |
| Chang, Hua-Hua | 2 |
| Gu, Lixiong | 2 |
| Morley, Mary | 2 |
| Publication Type | Count |
| --- | --- |
| Reports - Research | 35 |
| Journal Articles | 28 |
| Reports - Evaluative | 9 |
| Speeches/Meeting Papers | 7 |
| Reports - Descriptive | 3 |
| Numerical/Quantitative Data | 2 |
| Opinion Papers | 1 |
| Tests/Questionnaires | 1 |
| Education Level | Count |
| --- | --- |
| Higher Education | 19 |
| Postsecondary Education | 16 |
| Elementary Secondary Education | 1 |
| Secondary Education | 1 |
| Audience | Count |
| --- | --- |
| Researchers | 2 |
| Location | Count |
| --- | --- |
| China | 2 |
| New Jersey | 2 |
| India | 1 |
| Japan | 1 |
| Louisiana (New Orleans) | 1 |
| Michigan | 1 |
| Pennsylvania | 1 |
| Pennsylvania (Philadelphia) | 1 |
| South Korea | 1 |
| Taiwan | 1 |
| United States | 1 |
Quinlan, Thomas; Higgins, Derrick; Wolff, Susanne – Educational Testing Service, 2009
This report evaluates the construct coverage of the e-rater[R] scoring engine. The matter of construct coverage depends on whether one defines writing skill in terms of process or product. Originally, the e-rater engine consisted of a large set of components with a proven ability to predict human holistic scores. By organizing these capabilities…
Descriptors: Guides, Writing Skills, Factor Analysis, Writing Tests
Tian, Jian-quan; Miao, Dan-min; Zhu, Xia; Gong, Jing-jing – Online Submission, 2007
Computerized adaptive testing (CAT) has distinct advantages over traditional testing and has become the mainstream approach in large-scale examinations. This paper gives a brief introduction to CAT, including differences between traditional testing and CAT, the principles of CAT, the psychometric theory and computer algorithms of CAT, the…
Descriptors: Foreign Countries, Psychometrics, Adaptive Testing, Computer Assisted Testing
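The core selection principle such an introduction covers is to administer, at each step, the unused item that is most informative at the examinee's current ability estimate. A minimal sketch of that loop under a 2PL model (illustrative only; the item parameters are invented and this is not code from the paper):

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of each 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)

def select_next_item(theta_hat, a, b, administered):
    """Pick the unused item with maximum information at the current estimate."""
    info = item_information(theta_hat, a, b)
    info[list(administered)] = -np.inf   # mask items already given
    return int(np.argmax(info))

# Toy pool: discriminations (a) and difficulties (b) for 10 items.
rng = np.random.default_rng(0)
a = rng.uniform(0.5, 2.0, size=10)
b = rng.normal(0.0, 1.0, size=10)

theta_hat, administered = 0.0, set()
for _ in range(3):                        # administer three items
    item = select_next_item(theta_hat, a, b, administered)
    administered.add(item)
    # In a real CAT, theta_hat would be re-estimated from the response here.
    print("next item:", item)
```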
Attali, Yigal; Powers, Don; Hawthorn, John – ETS Research Report Series, 2008
Registered examinees for the GRE® General Test answered open-ended sentence-completion items. For half of the items, participants received immediate feedback on the correctness of their answers and up to two opportunities to revise their answers. A significant feedback-and-revision effect was found. Participants were able to correct many of their…
Descriptors: College Entrance Examinations, Graduate Study, Sentences, Psychometrics
Peer reviewed: Chang, Hua-Hua; van der Linden, Wim J. – Applied Psychological Measurement, 2003
Developed a method based on 0-1 linear programming to stratify an item pool optimally for use in alpha-stratified adaptive testing. Applied the method to a previous item pool from the computerized adaptive test of the Graduate Record Examinations. Results show the new method performs well in practical situations. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Linear Programming
Peer reviewed: Chang, Hua-Hua; Qian, Jiahe; Yang, Zhiliang – Applied Psychological Measurement, 2001
Proposed a refinement, based on the stratification of items developed by D. Weiss (1973), of the computerized adaptive testing item selection procedure of H. Chang and Z. Ying (1999). Simulation studies using an item bank from the Graduate Record Examination show the benefits of the new procedure. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Selection, Simulation
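Both of the preceding entries build on the a-stratified design of Chang and Ying (1999), in which the pool is partitioned by item discrimination and the low-discrimination strata are drawn from early in the test. A minimal sketch of that baseline partitioning (invented parameters; this is the simple sort-and-split idea, not the 0-1 programming refinement):

```python
import numpy as np

def a_stratify(a_params, n_strata):
    """Partition item indices into strata of increasing discrimination.

    Items are sorted by their a-parameter; the lowest-a items form stratum 0,
    which an a-stratified CAT would use in its earliest stages.
    """
    order = np.argsort(a_params)
    return np.array_split(order, n_strata)

a_params = np.array([1.4, 0.6, 2.1, 0.9, 1.1, 1.8, 0.7, 1.3])
strata = a_stratify(a_params, n_strata=4)
for k, stratum in enumerate(strata):
    print(f"stratum {k}: items {stratum.tolist()}, a = {a_params[stratum].round(2).tolist()}")
```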
Bridgeman, Brent; McBride, Amanda; Monaghan, William – Educational Testing Service, 2004
Imposing time limits on tests can serve a range of important functions. Time limits are essential, for example, if speed of performance is an integral component of what is being measured, as would be the case when testing such skills as how quickly someone can type. Limiting testing time also helps contain expenses associated with test…
Descriptors: Computer Assisted Testing, Timed Tests, Test Results, Aptitude Tests
Peer reviewed: Powers, Donald E. – Journal of Educational Computing Research, 2001
Tests the hypothesis that the introduction of computer-adaptive testing may help to alleviate test anxiety and diminish the relationship between test anxiety and test performance. Compares a sample of Graduate Record Examinations (GRE) General Test takers who took the computer-adaptive version of the test with another sample who took the…
Descriptors: Comparative Analysis, Computer Assisted Testing, Nonprint Media, Performance
Peer reviewed: Sutton, Rosemary E. – Equity & Excellence in Education, 1997
Considers equity issues of high-stakes tests conducted by computer, including whether this new form of assessment actually helps level the playing field for students or represents a new cycle of assessment inequality. Two computer tests are assessed: Praxis I: Academic Skills Assessment; and the computerized version of the Graduate Record…
Descriptors: Adaptive Testing, Computer Assisted Testing, Educational Assessment, Educational Testing
Carlson, Sybil B.; Ward, William C. – 1988
Issues concerning the cost and feasibility of using Formulating Hypotheses (FH) test item types for the Graduate Record Examinations have slowed research into their use. This project focused on two major issues that need to be addressed in considering FH items for operational use: the costs of scoring and the assignment of scores along a range of…
Descriptors: Adaptive Testing, Computer Assisted Testing, Costs, Pilot Projects
Peer reviewed: Vogel, Lora Ann – Journal of Educational Computing Research, 1994
Reports on a study conducted to evaluate how individual differences in anxiety levels affect performance on computer versus paper-and-pencil forms of verbal sections of the Graduate Record Examination. Contrary to the research hypothesis, analysis of scores revealed that extroverted and less computer anxious subjects scored significantly lower on…
Descriptors: Comparative Analysis, Computer Anxiety, Computer Assisted Testing, Computer Attitudes
Mislevy, Robert J.; Almond, Russell G. – 1997
This paper synthesizes ideas from the fields of graphical modeling and education testing, particularly item response theory (IRT) applied to computerized adaptive testing (CAT). Graphical modeling can offer IRT a language for describing multifaceted skills and knowledge, and disentangling evidence from complex performances. IRT-CAT can offer…
Descriptors: Adaptive Testing, Computer Assisted Testing, Educational Testing, Higher Education
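The point of contact the paper synthesizes is that each scored response is evidence that updates a belief about proficiency, just as observing a node updates a graphical model. A small grid-based illustration of that update under a 2PL response model (parameters invented for illustration; this is not the authors' framework):

```python
import numpy as np

theta = np.linspace(-4, 4, 161)           # discretized proficiency scale
prior = np.exp(-0.5 * theta**2)           # standard-normal prior (unnormalized)
prior /= prior.sum()

def update(posterior, a, b, correct):
    """Multiply in the 2PL likelihood of one observed response and renormalize."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    like = p if correct else 1.0 - p
    posterior = posterior * like
    return posterior / posterior.sum()

post = prior
for a, b, correct in [(1.2, -0.5, True), (0.9, 0.3, False), (1.6, 0.8, True)]:
    post = update(post, a, b, correct)

print("posterior mean proficiency:", float((theta * post).sum()))
```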
Peer reviewed: Enright, Mary K.; Rock, Donald A.; Bennett, Randy Elliot – Journal of Educational Measurement, 1998
Examined alternative-item types and section configurations for improving the discriminant and convergent validity of the Graduate Record Examination (GRE) general test using a computer-based test given to 388 examinees who had taken the GRE previously. Adding new variations of logical meaning appeared to decrease discriminant validity. (SLD)
Descriptors: Admission (School), College Entrance Examinations, College Students, Computer Assisted Testing
Slater, Sharon C.; Schaeffer, Gary A. – 1996
The General Computer Adaptive Test (CAT) of the Graduate Record Examinations (GRE) includes three operational sections that are separately timed and scored. A "no score" is reported if the examinee answers fewer than 80% of the items or if the examinee does not answer all of the items and leaves the section before time expires. The 80%…
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Equal Education
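The reporting rule summarized in the abstract's opening sentences can be stated directly in code; a literal rendering of the two conditions, with hypothetical field names:

```python
def score_is_reported(n_answered: int, n_items: int, time_expired: bool) -> bool:
    """Apply the GRE CAT 'no score' rule as summarized in the abstract.

    A score is withheld if fewer than 80% of the items were answered, or if the
    examinee left the section before time expired without answering every item.
    """
    if n_answered < 0.8 * n_items:
        return False
    if n_answered < n_items and not time_expired:
        return False
    return True

print(score_is_reported(n_answered=22, n_items=30, time_expired=True))   # False: under 80%
print(score_is_reported(n_answered=27, n_items=30, time_expired=False))  # False: left early
print(score_is_reported(n_answered=27, n_items=30, time_expired=True))   # True
```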
Kobrin, Jennifer L. – 2000
The comparability of computerized and paper-and-pencil tests was examined from a cognitive perspective, using verbal protocols, rather than psychometric methods, as the primary mode of inquiry. Reading comprehension items from the Graduate Record Examinations were completed by 48 college juniors and seniors, half of whom took the computerized test…
Descriptors: Cognitive Processes, College Students, Computer Assisted Testing, Higher Education
Peer reviewed: Bridgeman, Brent; Rock, Donald A. – Journal of Educational Measurement, 1993
Exploratory and confirmatory factor analyses were used to explore relationships among existing item types and three new computer-administered item types for the analytical scale of the Graduate Record Examination General Test. Results with 349 students indicate constructs the item types are measuring. (SLD)
Descriptors: College Entrance Examinations, College Students, Comparative Testing, Computer Assisted Testing
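An exploratory step like the one described can be reproduced in outline with off-the-shelf tools. A hedged sketch on simulated item-type scores (the data, the six item types, and the two-factor choice are invented for illustration, not taken from the study):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_examinees = 349

# Simulate scores on six item types driven by two latent abilities.
latent = rng.normal(size=(n_examinees, 2))
loadings_true = np.array([[0.8, 0.1], [0.7, 0.2], [0.9, 0.0],   # types loading on factor 1
                          [0.1, 0.8], [0.2, 0.7], [0.0, 0.9]])  # types loading on factor 2
scores = latent @ loadings_true.T + rng.normal(scale=0.5, size=(n_examinees, 6))

# Exploratory factor analysis: estimated loadings (rows = factors, cols = item types).
fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(scores)
print(np.round(fa.components_, 2))
```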


