Showing 1 to 15 of 65 results
Peer reviewed
PDF on ERIC
Bejar, Isaac I.; Deane, Paul D.; Flor, Michael; Chen, Jing – ETS Research Report Series, 2017
The report is the first systematic evaluation of the sentence equivalence item type introduced by the "GRE"® revised General Test. We adopt a validity framework to guide our investigation based on Kane's approach to validation whereby a hierarchy of inferences that should be documented to support score meaning and interpretation is…
Descriptors: College Entrance Examinations, Graduate Study, Generalization, Inferences
Peer reviewed
PDF on ERIC
Attali, Yigal – ETS Research Report Series, 2014
Previous research on calculator use in standardized assessments of quantitative ability focused on the effect of calculator availability on item difficulty and on whether test developers can predict these effects. With the introduction of an on-screen calculator on the Quantitative Reasoning measure of the "GRE"® revised General Test, it…
Descriptors: College Entrance Examinations, Graduate Study, Calculators, Test Items
Peer reviewed
Direct link
Albano, Anthony D. – Journal of Educational Measurement, 2013
In many testing programs it is assumed that the context or position in which an item is administered does not have a differential effect on examinee responses to the item. Violations of this assumption may bias item response theory estimates of item and person parameters. This study examines the potentially biasing effects of item position. A…
Descriptors: Test Items, Item Response Theory, Test Format, Questioning Techniques
Peer reviewed
Direct link
Daniel, Robert C.; Embretson, Susan E. – Applied Psychological Measurement, 2010
Cognitive complexity level is important for measuring both aptitude and achievement in large-scale testing. Tests for standards-based assessment of mathematics, for example, often include cognitive complexity level in the test blueprint. However, little research exists on how mathematics items can be designed to vary in cognitive complexity level.…
Descriptors: Mathematics Tests, Problem Solving, Test Items, Difficulty Level
Peer reviewed
PDF on ERIC
Attali, Yigal; Powers, Don; Freedman, Marshall; Harrison, Marissa; Obetz, Susan – ETS Research Report Series, 2008
This report describes the development, administration, and scoring of open-ended variants of GRE® Subject Test items in biology and psychology. These questions were administered in a Web-based experiment to registered examinees of the respective Subject Tests. The questions required a short answer of 1-3 sentences, and responses were automatically…
Descriptors: College Entrance Examinations, Graduate Study, Scoring, Test Construction
Fremer, John – 1981
Test disclosure legislation in New York State (LaValle Act) has had a major impact on the national testing programs administered by Educational Testing Service (ETS) for various sponsoring organizations. The paper reviews the immediate operational effects of test disclosure in the following areas: (1) increase in number of test forms developed;…
Descriptors: Equated Scores, State Legislation, Test Construction, Testing Programs
Mislevy, Robert J.; Steinberg, Linda S.; Almond, Russell G. – 1999
Tasks are the most visible element in an educational assessment. Their purpose, however, is to provide evidence about targets of inference that cannot be directly seen at all: what examinees know and can do, more broadly conceived than can be observed in the context of any particular set of tasks. This paper concerns issues in an assessment design…
Descriptors: Educational Assessment, Evaluation Methods, Higher Education, Models
Chalifour, Clark; Powers, Donald E. – 1988
In actual test development practice, the number of test items that must be developed and pretested is typically greater, and sometimes much greater, than the number eventually judged suitable for use in operational test forms. This has proven to be especially true for analytical reasoning items, which currently form the bulk of the analytical…
Descriptors: Coding, Difficulty Level, Higher Education, Test Construction
Peer reviewed
Enright, Mary K.; Morley, Mary; Sheehan, Kathleen M. – Applied Measurement in Education, 2002
Studied the impact of systematic item feature variation on item statistical characteristics and the degree to which such information could be used as collateral information to supplement examinee performance data and reduce pretest sample size by generating 2 families of 48 word problem variants for the Graduate Record Examinations. Results with…
Descriptors: College Entrance Examinations, Sample Size, Statistical Analysis, Test Construction
Peer reviewed
Dong, Hei-ki – Perceptual and Motor Skills, 1982
A test of basic knowledge in statistics was developed to predict performance in a graduate-level statistics course for psychology students. Scores on this test, together with GRE scores, were used to predict final exam performance in the course. The devised test predicted performance moderately well. (RD)
Descriptors: Academic Achievement, Graduate Students, Higher Education, Predictive Measurement
Peer reviewed
Powers, Donald E.; Fowles, Mary E. – Educational Assessment, 1999
Gathered judgments about essay prompts being considered for use in a graduate writing test from 253 minority group students and 268 other college students who took the Graduate Record Examination. Identified several features that underlie examinee perceptions of essay prompts, especially the extent to which prompts allow examinees to draw on their…
Descriptors: College Entrance Examinations, College Students, Essay Tests, Experience
Emmerich, Walter; And Others – 1991
The aim of this research was to identify, develop, and evaluate empirically new reasoning item types that might be used to broaden the analytical measure of the Graduate Record Examinations (GRE) General Test and to strengthen its construct validity. Six item types were selected for empirical evaluation, including the two currently used in the GRE…
Descriptors: Construct Validity, Correlation, Evaluation Methods, Sex Differences
Peer reviewed
PDF on ERIC
Sheehan, Kathleen M.; Kostin, Irene; Futagi, Yoko – ETS Research Report Series, 2007
This paper explores alternative approaches for facilitating efficient, evidence-centered item development for a new type of verbal reasoning item developed for use on the GRE® General Test. Results obtained in two separate studies are reported. The first study documented the development and validation of a fully automated approach for locating the…
Descriptors: College Entrance Examinations, Graduate Study, Test Items, Item Analysis
Baird, Leonard L. – 1979
This is a report about the first two stages of a four-stage project designed to develop procedures to assess the accomplishments of applicants to graduate school. In the first stage, trial instruments were developed after thoroughly reviewing other attempts at assessing accomplishments and carefully considering the issues involved. Three…
Descriptors: Admission Criteria, College Entrance Examinations, Experiential Learning, Graduate Students
Campbell, Joel T.; Donlon, Thomas F. – 1980
The Figure Location Test (FLT), developed by Donlon, Reilly, and McKee to provide a machine-scorable test of the cognitive style called field dependence-field independence, was administered as the experimental section of the October 1976 Graduate Record Examination (GRE) Aptitude Test at about one-third of the centers in the United States. Initial…
Descriptors: Aptitude Tests, Data Analysis, Field Dependence Independence, Graduate Students