Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 1
  Since 2016 (last 10 years): 4
  Since 2006 (last 20 years): 20
Author
  Bridgeman, Brent: 7
  Kostin, Irene: 7
  Powers, Donald E.: 7
  Freedle, Roy: 5
  Bennett, Randy Elliot: 4
  Rock, Donald A.: 4
  Attali, Yigal: 3
  Enright, Mary K.: 3
  Morley, Mary: 3
  Scheuneman, Janice Dowd: 3
  Sheehan, Kathleen M.: 3
Publication Type
  Reports - Research: 61
  Journal Articles: 41
  Reports - Evaluative: 12
  Speeches/Meeting Papers: 5
  Tests/Questionnaires: 3
  Numerical/Quantitative Data: 2
  Information Analyses: 1
  Reports - Descriptive: 1
Education Level
  Higher Education: 20
  Postsecondary Education: 16
  Junior High Schools: 1
  Middle Schools: 1
  Secondary Education: 1
Audience
  Researchers: 4
Location
  New Jersey: 2
  Illinois: 1
  Louisiana (New Orleans): 1
  Michigan: 1
  New York: 1
  Pennsylvania: 1
  Pennsylvania (Philadelphia): 1
  Taiwan: 1
Assessments and Surveys
  Graduate Record Examinations: 74
  SAT (College Admission Test): 7
  ACT Assessment: 1
  Preliminary Scholastic…: 1
  Sentence Completion Test: 1
  Test of English as a Foreign…: 1
  Wechsler Adult Intelligence…: 1
van Rijn, Peter W.; Attali, Yigal; Ali, Usama S. – Journal of Experimental Education, 2023
We investigated whether and to what extent different scoring instructions, timing conditions, and direct feedback affect performance and speed. An experimental study manipulating these factors was designed to address these research questions. According to the factorial design, participants were randomly assigned to one of twelve study conditions.…
Descriptors: Scoring, Time, Feedback (Response), Performance
Bridgeman, Brent – Educational Measurement: Issues and Practice, 2016
Scores on essay-based assessments that are part of standardized admissions tests are typically given relatively little weight in admissions decisions compared to the weight given to scores from multiple-choice assessments. Evidence is presented to suggest that more weight should be given to these assessments. The reliability of the writing scores…
Descriptors: Multiple Choice Tests, Scores, Standardized Tests, Comparative Analysis
Oliveri, Maria Elena; Lawless, Rene; Robin, Frederic; Bridgeman, Brent – Applied Measurement in Education, 2018
We analyzed a pool of items from an admissions test for differential item functioning (DIF) for groups based on age, socioeconomic status, citizenship, or English language status using Mantel-Haenszel and item response theory. DIF items were systematically examined to identify their possible sources by item type, content, and wording. DIF was…
Descriptors: Test Bias, Comparative Analysis, Item Banks, Item Response Theory
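The Oliveri et al. entry above applies the Mantel-Haenszel procedure, which tests whether an item favors one group after matching on ability. A minimal sketch (not drawn from the cited paper) of the core statistic, the common odds ratio pooled over matched score strata, with the ETS delta transform often used to flag DIF severity:

```python
import math

def mantel_haenszel_dif(strata):
    """Mantel-Haenszel common odds ratio over matched score strata.

    strata: list of (A, B, C, D) 2x2 counts per matched score level,
    where A/B are reference-group right/wrong counts and
    C/D are focal-group right/wrong counts.
    Returns (alpha_MH, delta_MH); alpha_MH == 1 (delta 0) means no DIF,
    and delta_MH = -2.35 * ln(alpha_MH) puts it on the ETS delta scale.
    """
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        if n == 0:
            continue  # skip empty strata
        num += a * d / n
        den += b * c / n
    alpha = num / den
    delta = -2.35 * math.log(alpha)
    return alpha, delta
```

For example, strata where the two groups perform identically give alpha = 1, while a stratum in which the reference group answers correctly far more often than an equally matched focal group pushes alpha above 1 and delta negative.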
Bejar, Isaac I.; Deane, Paul D.; Flor, Michael; Chen, Jing – ETS Research Report Series, 2017
The report is the first systematic evaluation of the sentence equivalence item type introduced by the "GRE"® revised General Test. We adopt a validity framework to guide our investigation based on Kane's approach to validation whereby a hierarchy of inferences that should be documented to support score meaning and interpretation is…
Descriptors: College Entrance Examinations, Graduate Study, Generalization, Inferences
Swiggett, Wanda D.; Kotloff, Laurie; Ezzo, Chelsea; Adler, Rachel; Oliveri, Maria Elena – ETS Research Report Series, 2014
The computer-based "Graduate Record Examinations"® ("GRE"®) revised General Test includes interactive item types and testing environment tools (e.g., test navigation, on-screen calculator, and help). How well do test takers understand these innovations? If test takers do not understand the new item types, these innovations may…
Descriptors: College Entrance Examinations, Graduate Study, Usability, Test Items
Attali, Yigal – ETS Research Report Series, 2014
Previous research on calculator use in standardized assessments of quantitative ability focused on the effect of calculator availability on item difficulty and on whether test developers can predict these effects. With the introduction of an on-screen calculator on the Quantitative Reasoning measure of the "GRE"® revised General Test, it…
Descriptors: College Entrance Examinations, Graduate Study, Calculators, Test Items
Albano, Anthony D. – Journal of Educational Measurement, 2013
In many testing programs it is assumed that the context or position in which an item is administered does not have a differential effect on examinee responses to the item. Violations of this assumption may bias item response theory estimates of item and person parameters. This study examines the potentially biasing effects of item position. A…
Descriptors: Test Items, Item Response Theory, Test Format, Questioning Techniques
Dorans, Neil J. – Educational Measurement: Issues and Practice, 2012
Views on testing--its purpose and uses and how its data are analyzed--are related to one's perspective on test takers. Test takers can be viewed as learners, examinees, or contestants. I briefly discuss the perspective of test takers as learners. I maintain that much of psychometrics views test takers as examinees. I discuss test takers as a…
Descriptors: Testing, Test Theory, Item Response Theory, Test Reliability
Huang, Hung-Yu; Wang, Wen-Chung – Educational and Psychological Measurement, 2013
Both testlet design and hierarchical latent traits are fairly common in educational and psychological measurements. This study aimed to develop a new class of higher order testlet response models that consider both local item dependence within testlets and a hierarchy of latent traits. Due to high dimensionality, the authors adopted the Bayesian…
Descriptors: Item Response Theory, Models, Bayesian Statistics, Computation
Davey, Tim; Lee, Yi-Hsuan – ETS Research Report Series, 2011
Both theoretical and practical considerations have led the revision of the Graduate Record Examinations® (GRE®) revised General Test, here called the rGRE, to adopt a multistage adaptive design that will be continuously or nearly continuously administered and that can provide immediate score reporting. These circumstances sharply constrain the…
Descriptors: Context Effect, Scoring, Equated Scores, College Entrance Examinations
Daniel, Robert C.; Embretson, Susan E. – Applied Psychological Measurement, 2010
Cognitive complexity level is important for measuring both aptitude and achievement in large-scale testing. Tests for standards-based assessment of mathematics, for example, often include cognitive complexity level in the test blueprint. However, little research exists on how mathematics items can be designed to vary in cognitive complexity level.…
Descriptors: Mathematics Tests, Problem Solving, Test Items, Difficulty Level
Toppino, Thomas C.; Cohen, Michael S.; Davis, Meghan L.; Moors, Amy C. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2009
The authors clarify the source of a conflict between previous findings related to metacognitive control over the distribution of practice. In a study by L. Son (2004), learners were initially presented pairs of Graduate Record Examination (GRE) vocabulary words and their common synonyms for 1 s, after which they chose to study the pair again…
Descriptors: Metacognition, Vocabulary, Difficulty Level, Test Items
Sinharay, Sandip; Johnson, Matthew S. – International Journal of Testing, 2008
"Item models" (LaDuca, Staples, Templeton, & Holzman, 1986) are classes from which it is possible to generate items that are equivalent/isomorphic to other items from the same model (e.g., Bejar, 1996, 2002). They have the potential to produce large numbers of high-quality items at reduced cost. This article introduces data from an…
Descriptors: College Entrance Examinations, Case Studies, Test Items, Models
Bridgeman, Brent; Cline, Frederick; Levin, Jutta – ETS Research Report Series, 2008
In order to estimate the likely effects on item difficulty when a calculator becomes available on the quantitative section of the Graduate Record Examinations® (GRE®-Q), 168 items (in six 28-item forms) were administered either with or without access to an on-screen four-function calculator. The forms were administered as a special research…
Descriptors: College Entrance Examinations, Graduate Study, Calculators, Test Items
Attali, Yigal; Powers, Don; Freedman, Marshall; Harrison, Marissa; Obetz, Susan – ETS Research Report Series, 2008
This report describes the development, administration, and scoring of open-ended variants of GRE® Subject Test items in biology and psychology. These questions were administered in a Web-based experiment to registered examinees of the respective Subject Tests. The questions required a short answer of 1-3 sentences, and responses were automatically…
Descriptors: College Entrance Examinations, Graduate Study, Scoring, Test Construction