Publication Date
  In 2025: 1
  Since 2024: 1
  Since 2021 (last 5 years): 1
  Since 2016 (last 10 years): 2
  Since 2006 (last 20 years): 4
Descriptor
  Adaptive Testing: 10
  Context Effect: 10
  Computer Assisted Testing: 7
  Test Items: 6
  Item Response Theory: 4
  Computation: 3
  Item Banks: 3
  Case Studies: 2
  Equated Scores: 2
  High Stakes Tests: 2
  Measurement Techniques: 2
Author
  Davey, Tim: 3
  Herbert, Erin: 2
  Rizavi, Saba: 2
  Way, Walter D.: 2
  Boodoo, Gwyneth M.: 1
  Brennan, Robert L.: 1
  Brown, Anna: 1
  Harris, Deborah J.: 1
  Jiang, Yuhong: 1
  Kiely, Gerard L.: 1
  Kireyev, Kirill: 1
Publication Type
  Journal Articles: 9
  Reports - Research: 6
  Reports - Evaluative: 2
  Speeches/Meeting Papers: 2
  Information Analyses: 1
  Opinion Papers: 1
  Reports - Descriptive: 1
Education Level
  Elementary Education: 1
  Higher Education: 1
  Middle Schools: 1
  Postsecondary Education: 1
Location
  United Kingdom: 1
Assessments and Surveys
  Graduate Record Examinations: 1
  National Assessment of…: 1
Ma, Ye; Harris, Deborah J. – Educational Measurement: Issues and Practice, 2025
Item position effect (IPE) refers to situations where an item performs differently when it is administered in different positions on a test. The majority of previous research studies have focused on investigating IPE under linear testing. There is a lack of IPE research under adaptive testing. In addition, the existence of IPE might violate Item…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
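
The Ma and Harris entry rests on the point that an item position effect breaks the invariance that item response theory assumes. As a minimal sketch only (the 2PL form and every parameter value below are assumptions for illustration, not taken from the article), a position effect can be written as a shift in an item's effective difficulty:

    import numpy as np

    def p_correct(theta, a, b, delta_pos=0.0):
        """2PL response probability with an additive item position effect.

        delta_pos shifts the effective difficulty when the item is
        administered later in the test; delta_pos = 0 means no effect.
        """
        return 1.0 / (1.0 + np.exp(-a * (theta - (b + delta_pos))))

    theta, a, b = 0.0, 1.2, 0.0                      # invented values
    early = p_correct(theta, a, b, delta_pos=0.0)    # item near the start
    late = p_correct(theta, a, b, delta_pos=0.3)     # same item near the end
    print(round(early, 3), round(late, 3))           # 0.5 vs. roughly 0.41

Under adaptive testing the concern is sharper than under linear testing, because the ability estimate that drives item selection is updated from responses that may already carry such position-driven shifts.
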
Lin, Yin; Brown, Anna – Educational and Psychological Measurement, 2017
A fundamental assumption in computerized adaptive testing is that item parameters are invariant with respect to context--items surrounding the administered item. This assumption, however, may not hold in forced-choice (FC) assessments, where explicit comparisons are made between items included in the same block. We empirically examined the…
Descriptors: Personality Measures, Measurement Techniques, Context Effect, Test Items
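
Lin and Brown's point is that a forced-choice block makes an item's behavior depend on the other items it is paired with. A rough comparative-judgement sketch (not the authors' model; the utilities and variances are invented) shows why the same item can look easy or hard to endorse depending on its block:

    from scipy.stats import norm

    def p_prefer(mu_i, mu_k, var_i=1.0, var_k=1.0):
        """Probability of endorsing item i over item k in a forced-choice
        pair, under a simple Thurstonian-style comparison."""
        return norm.cdf((mu_i - mu_k) / ((var_i + var_k) ** 0.5))

    print(round(p_prefer(0.5, -0.5), 3))   # blocked with a weak item:   ~0.76
    print(round(p_prefer(0.5, 1.5), 3))    # blocked with a strong item: ~0.24
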
Landauer, Thomas K.; Kireyev, Kirill; Panaccione, Charles – Scientific Studies of Reading, 2011
A new metric, Word Maturity, estimates the development by individual students of knowledge of every word in a large corpus. The metric is constructed by Latent Semantic Analysis modeling of word knowledge as a function of the reading that a simulated learner has done and is calibrated by its developing closeness in information content to that of a…
Descriptors: Reading Research, Vocabulary Development, Semantics, Statistical Analysis
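
The Word Maturity idea is to compare a simulated learner's LSA representation of a word with the representation obtained from a full adult corpus. The toy sketch below is only an analogy to that construction; the corpora, the anchor words, and the second-order-similarity comparison are all assumptions, not the published method:

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import TruncatedSVD

    def lsa_term_vectors(docs, vocab, k=2):
        """Fit a tiny LSA space and return a {word: vector} map."""
        counts = CountVectorizer(vocabulary=vocab).fit_transform(docs)
        svd = TruncatedSVD(n_components=k, random_state=0).fit(counts)
        return dict(zip(vocab, svd.components_.T))   # term loadings

    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    vocab = ["bank", "river", "money", "water", "loan"]
    adult_corpus = ["the bank approved the loan", "money in the bank",
                    "the river water was cold", "a loan from the bank",
                    "water flowed down the river"]
    learner_corpus = ["the river water was cold",      # the simulated learner
                      "water flowed down the river",   # has read less text
                      "money in the bank"]             # so far

    adult = lsa_term_vectors(adult_corpus, vocab)
    learner = lsa_term_vectors(learner_corpus, vocab)

    anchors = ["river", "money", "water", "loan"]
    adult_profile = [cos(adult["bank"], adult[a]) for a in anchors]
    learner_profile = [cos(learner["bank"], learner[a]) for a in anchors]
    print(round(np.corrcoef(adult_profile, learner_profile)[0, 1], 2))
    # The correlation moves toward 1.0 as the learner corpus approaches the
    # adult corpus, which is the flavor of a word "maturity" score.
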
Davey, Tim; Lee, Yi-Hsuan – ETS Research Report Series, 2011
Both theoretical and practical considerations have led the revision of the Graduate Record Examinations® (GRE®) revised General Test, here called the rGRE, to adopt a multistage adaptive design that will be continuously or nearly continuously administered and that can provide immediate score reporting. These circumstances sharply constrain the…
Descriptors: Context Effect, Scoring, Equated Scores, College Entrance Examinations
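
The Davey and Lee report concerns a multistage adaptive design, in which examinees are routed between preassembled modules rather than item by item. A minimal routing sketch (module labels, lengths, and cut points are invented for illustration):

    def next_module(num_correct_stage1, n_stage1=10):
        """Pick a second-stage module from a first-stage number-correct score.

        Operational cut points come from the item pool and measurement
        targets; these are placeholders."""
        if num_correct_stage1 <= 0.4 * n_stage1:
            return "easier module"
        if num_correct_stage1 >= 0.8 * n_stage1:
            return "harder module"
        return "middle module"

    for score in (2, 6, 9):
        print(score, "->", next_module(score))
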
Jiang, Yuhong; Song, Joo-Hyun – Journal of Experimental Psychology: Human Perception and Performance, 2005
Humans conduct visual search faster when the same display is presented for a 2nd time, showing implicit learning of repeated displays. This study examines whether learning of a spatial layout transfers to other layouts that are occupied by items of new shapes or colors. The authors show that spatial context learning is sometimes contingent on item…
Descriptors: Spatial Ability, Visual Perception, Visual Learning, Adaptive Testing

Wainer, Howard; Kiely, Gerard L. – Journal of Educational Measurement, 1987
The testlet, a bundle of test items, alleviates some problems associated with computerized adaptive testing: context effects, lack of robustness, and item difficulty ordering. While testlets may be linear or hierarchical, the most useful ones are four-level hierarchical units, containing 15 items and partitioning examinees into 16 classes. (GDC)
Descriptors: Adaptive Testing, Computer Assisted Testing, Context Effect, Item Banks
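
The "15 items and 16 classes" in the Wainer and Kiely summary is the arithmetic of a complete four-level binary routing tree: 1 + 2 + 4 + 8 = 15 branch points and 2^4 = 16 terminal paths. A toy walk through such a testlet (the heap-style item numbering and the answer rule are invented for illustration):

    def hierarchical_testlet(answers_item, levels=4):
        """Route through a binary testlet: a correct answer branches one way,
        an incorrect answer the other, so `levels` responses land the
        examinee in one of 2**levels classes."""
        node, path = 1, []                      # items numbered 1..15, heap style
        for _ in range(levels):
            correct = answers_item(node)
            path.append((node, correct))
            node = 2 * node + (1 if correct else 0)
        return path, node - 2 ** levels         # leaves 16..31 -> classes 0..15

    # Toy answer rule: this examinee answers items 1 through 9 correctly.
    path, cls = hierarchical_testlet(lambda item: item <= 9)
    print(path, "-> class", cls)
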
Rizavi, Saba; Way, Walter D.; Davey, Tim; Herbert, Erin – ETS Research Report Series, 2004
Item parameter estimates vary for a variety of reasons, including estimation error, characteristics of the examinee samples, and context effects (e.g., item location effects, section location effects, etc.). Although we expect variation based on theory, there is reason to believe that observed variation in item parameter estimates exceeds what…
Descriptors: Test Items, Computer Assisted Testing, Computation, Adaptive Testing
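
The comparison the Rizavi et al. report describes can be illustrated directly: pool an item's difficulty estimates from several administrations and ask whether their spread exceeds what the reported standard errors alone would produce. All numbers below are invented:

    import numpy as np

    # Hypothetical difficulty estimates for one item across 8 administrations,
    # with their reported standard errors.
    b_hat = np.array([-0.10, 0.05, 0.22, -0.30, 0.41, 0.02, -0.15, 0.35])
    se = np.array([0.08, 0.07, 0.09, 0.08, 0.10, 0.07, 0.09, 0.08])

    observed_var = b_hat.var(ddof=1)    # spread we actually see
    expected_var = np.mean(se ** 2)     # spread estimation error alone predicts
    print(round(observed_var, 3), round(expected_var, 3))
    # When observed_var clearly exceeds expected_var, something beyond
    # sampling error (sample characteristics, item or section location,
    # other context effects) is moving the estimates.
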

Brennan, Robert L. – Applied Measurement in Education, 1992
A conceptual framework and heuristic model for considering the existence, magnitude, and consequences of context effects are presented through an extension of some generalizability theory concepts. Context effects are often misunderstood, and current measurement models have serious limitations for examining them. Their importance needs to be…
Descriptors: Adaptive Testing, Context Effect, Equated Scores, Equations (Mathematics)
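
Brennan's framing extends generalizability theory, where context can be treated as a facet of measurement alongside items. For one standard two-facet crossed design (persons p, items i, contexts c; a generic illustration, not necessarily the article's design), the observed-score variance decomposes as

    \sigma^2(X_{pic}) = \sigma^2_p + \sigma^2_i + \sigma^2_c
                      + \sigma^2_{pi} + \sigma^2_{pc} + \sigma^2_{ic} + \sigma^2_{pic,e}

and a nonnegligible \sigma^2_c, \sigma^2_{pc}, or \sigma^2_{ic} is one concrete way to express the existence and magnitude of context effects.
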

Boodoo, Gwyneth M. – Journal of Negro Education, 1998
Discusses the research and steps needed to develop performance-based and computer-adaptive assessments that are culturally responsive. Supports the development of a new conceptual framework and more explicit guidelines for designing culturally responsive assessments. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Context Effect, Cultural Awareness