Attali, Yigal – ETS Research Report Series, 2014
Previous research on calculator use in standardized assessments of quantitative ability focused on the effect of calculator availability on item difficulty and on whether test developers can predict these effects. With the introduction of an on-screen calculator on the Quantitative Reasoning measure of the "GRE"® revised General Test, it…
Descriptors: College Entrance Examinations, Graduate Study, Calculators, Test Items
Albano, Anthony D. – Journal of Educational Measurement, 2013
In many testing programs it is assumed that the context or position in which an item is administered does not have a differential effect on examinee responses to the item. Violations of this assumption may bias item response theory estimates of item and person parameters. This study examines the potentially biasing effects of item position. A…
Descriptors: Test Items, Item Response Theory, Test Format, Questioning Techniques
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
Sheehan, Kathleen M.; Kostin, Irene; Futagi, Yoko – ETS Research Report Series, 2007
This paper explores alternative approaches for facilitating efficient, evidence-centered item development for a new type of verbal reasoning item developed for use on the GRE® General Test. Results obtained in two separate studies are reported. The first study documented the development and validation of a fully automated approach for locating the…
Descriptors: College Entrance Examinations, Graduate Study, Test Items, Item Analysis

Kingston, Neal M. – 1985
This research investigated the effect on estimated lower asymptotes of the instructions to Graduate Record Examination (GRE) examinees about how the test would be scored. This effect was assessed for four different verbal item types (analogies, antonyms, sentence completion, and reading comprehension) using a two-way, unweighted means analysis of…
Descriptors: Analysis of Variance, College Entrance Examinations, Guessing (Tests), Higher Education

Chalifour, Clark L.; Powers, Donald E. – Journal of Educational Measurement, 1989
Content characteristics of 1,400 Graduate Record Examination (GRE) analytical reasoning items were coded for item difficulty and discrimination. The results provide content characteristics for consideration in extending specifications for analytical reasoning items and a better understanding of the construct validity of these items. (TJH)
Descriptors: College Entrance Examinations, Construct Validity, Content Analysis, Difficulty Level
Kingston, Neal M.; McKinley, Robert L. – 1988
Confirmatory multidimensional item response theory (CMIRT) was used to assess the structure of the Graduate Record Examination General Test, for which much information on factorial structure already exists, using a sample of 1,001 psychology majors who took the test in 1984 or 1985. Results supported previous findings that, for this population, there…
Descriptors: College Students, Factor Analysis, Higher Education, Item Analysis
Powers, Donald E.; And Others – 1978
Much of the effort involved in a major restructuring of the Graduate Record Examinations (GRE) Aptitude Test was intended to result in the creation of an analytical module to supplement the verbal and quantitative sections of the test, thus providing broadened measurement. Factor extension analysis was used in the present study to investigate…
Descriptors: College Entrance Examinations, Factor Analysis, Factor Structure, Graduate Study
Flaugher, Ronald L. – 1972
There are many potential sources of test bias besides the particular item content within the test. The other potential sources, designated here as program and utilization, must also be encompassed in any thorough and effective program to increase the accuracy of assessment for members of ethnic minorities. As usual, the research findings…
Descriptors: Achievement Tests, Aptitude Tests, Graduate Study, Item Analysis

Dorans, Neil J.; Kingston, Neal M. – Journal of Educational Measurement, 1985
Because the Graduate Record Examination-Verbal measures two factors (reading comprehension and discrete verbal ability), the unidimensionality assumption of item response theory is violated. The impact of this violation was examined by comparing three ability estimates: reading, discrete, and all verbal. Both dimensions were highly correlated; the impact was…
Descriptors: College Entrance Examinations, Factor Structure, Graduate Study, Higher Education

Bennett, Randy Elliot; And Others – 1986
The psychometric characteristics of the Graduate Record Examinations General Test (GRE-GT) were studied for three handicapped groups. Experimental subjects took the GRE-GT between October 1981 and June 1984; they included: (1) 151 visually-impaired students taking large-type, extended-time administrations; (2) 188 visually-impaired students taking…
Descriptors: College Entrance Examinations, Comparative Analysis, Graduate Study, Higher Education

Donlon, Thomas F.; And Others – Applied Psychological Measurement, 1980
The scope and nature of sex differences in the Graduate Record Examination are explored by identifying individual test items that differ from the other items in terms of the magnitude of the difference in item difficulty for the sexes. In general, limited evidence of differences was established. (Author/CTM)
Descriptors: Aptitude Tests, College Entrance Examinations, Graduate Students, Higher Education
Gorin, Joanna S.; Embretson, Susan E. – Applied Psychological Measurement, 2006
Recent assessment research joining cognitive psychology and psychometric theory has introduced a new technology, item generation. In algorithmic item generation, items are systematically created based on specific combinations of features that underlie the processing required to correctly solve a problem. Reading comprehension items have been more…
Descriptors: Difficulty Level, Test Items, Modeling (Psychology), Paragraph Composition
Sinharay, Sandip; Johnson, Matthew – ETS Research Report Series, 2005
"Item models" (LaDuca, Staples, Templeton, & Holzman, 1986) are classes from which it is possible to generate items that are equivalent, or isomorphic, to other items from the same model (e.g., Bejar, 1996; Bejar, 2002). They have the potential to produce a large number of high-quality items at reduced cost. This paper introduces…
Descriptors: Item Analysis, Test Items, Scoring, Psychometrics
Wilson, Kenneth M. – 1985
This Graduate Record Examination (GRE) study assesses: (1) the relative contribution of a vocabulary score (consisting of GRE General Test antonyms and analogies) and a reading comprehension score (consisting of GRE sentence completion and reading comprehension sets) to the prediction of self-reported undergraduate grade point average (GPA); and…
Descriptors: College Entrance Examinations, Grade Point Average, Higher Education, Item Analysis