Ye Ma; Deborah J. Harris – Educational Measurement: Issues and Practice, 2025
Item position effect (IPE) refers to situations where an item performs differently when it is administered in different positions on a test. The majority of previous research studies have focused on investigating IPE under linear testing. There is a lack of IPE research under adaptive testing. In addition, the existence of IPE might violate Item…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
Albano, Anthony D.; McConnell, Scott R.; Lease, Erin M.; Cai, Liuhan – Grantee Submission, 2020
Research has shown that the context of practice tasks can have a significant impact on learning, with long-term retention and transfer improving when tasks of different types are mixed by interleaving (abcabcabc) compared with grouping together in blocks (aaabbbccc). This study examines the influence of context via interleaving from a psychometric…
Descriptors: Context Effect, Test Items, Preschool Children, Computer Assisted Testing
Zhu, Xuelian; Aryadoust, Vahid – Computer Assisted Language Learning, 2022
A fundamental requirement of language assessments which is underresearched in computerized assessments is impartiality (fairness) or equal treatment of test takers regardless of background. The present study aimed to evaluate fairness in the Pearson Test of English (PTE) Academic Reading test, which is a computerized reading assessment, by…
Descriptors: Computer Assisted Testing, Language Tests, Native Language, Culture Fair Tests
Schulz, Wolfram; Fraillon, Julian; Losito, Bruno; Agrusti, Gabriella; Ainley, John; Damiani, Valeria; Friedman, Tim – International Association for the Evaluation of Educational Achievement, 2022
The purpose of the International Civic and Citizenship Education Study (ICCS) is to investigate the changing ways in which young people are prepared to undertake their roles as citizens across a wide range of countries. This assessment framework provides a conceptual underpinning for the international instrumentation for ICCS 2022. It needs to…
Descriptors: Citizenship Education, Context Effect, Guidelines, Course Content
Lin, Yin; Brown, Anna – Educational and Psychological Measurement, 2017
A fundamental assumption in computerized adaptive testing is that item parameters are invariant with respect to context--items surrounding the administered item. This assumption, however, may not hold in forced-choice (FC) assessments, where explicit comparisons are made between items included in the same block. We empirically examined the…
Descriptors: Personality Measures, Measurement Techniques, Context Effect, Test Items
Taguchi, Naoko; Gomez-Laich, Maria Pia; Arrufat-Marques, Maria-Jose – Foreign Language Annals, 2016
This study investigated comprehension of indirect meaning among learners of L2 Spanish via an original computer-delivered multimedia listening test. The comprehension of implied speaker intention is a type of indirect communication that involves the ability to understand implied intention by using linguistic knowledge, contextual cues, and the…
Descriptors: Computer Assisted Testing, Multimedia Materials, Language Tests, Spanish
National Center for Education Statistics, 2013
This document provides an overview of the National Assessment of Educational Progress (NAEP). NAEP is the largest nationally representative and continuing assessment of what students in the United States know and can do in various subjects. NAEP serves a different role than state assessments. States have their own unique assessments which are…
Descriptors: Student Evaluation, National Surveys, Academic Achievement, Grade 4
Davey, Tim; Lee, Yi-Hsuan – ETS Research Report Series, 2011
Both theoretical and practical considerations have led the revision of the Graduate Record Examinations® (GRE®) revised General Test, here called the rGRE, to adopt a multistage adaptive design that will be continuously or nearly continuously administered and that can provide immediate score reporting. These circumstances sharply constrain the…
Descriptors: Context Effect, Scoring, Equated Scores, College Entrance Examinations
Wainer, Howard; Kiely, Gerard L. – Journal of Educational Measurement, 1987
The testlet, a bundle of test items, alleviates some problems associated with computerized adaptive testing: context effects, lack of robustness, and item difficulty ordering. While testlets may be linear or hierarchical, the most useful ones are four-level hierarchical units, containing 15 items and partitioning examinees into 16 classes. (GDC)
Descriptors: Adaptive Testing, Computer Assisted Testing, Context Effect, Item Banks
Pomplun, Mark; Custer, Michael – Applied Measurement in Education, 2005
In this study, we investigated possible context effects when students chose to defer items and answer those items later during a computerized test. In 4 primary school reading tests, 126 items were studied. Logistic regression analyses identified 4 items across 4 grade levels as statistically significant. However, follow-up analyses indicated that…
Descriptors: Psychometrics, Reading Tests, Effect Size, Test Items
Rizavi, Saba; Way, Walter D.; Davey, Tim; Herbert, Erin – ETS Research Report Series, 2004
Item parameter estimates vary for a variety of reasons, including estimation error, characteristics of the examinee samples, and context effects (e.g., item location effects, section location effects, etc.). Although we expect variation based on theory, there is reason to believe that observed variation in item parameter estimates exceeds what…
Descriptors: Test Items, Computer Assisted Testing, Computation, Adaptive Testing
Pomplun, Mark; Ritchie, Timothy – Journal of Educational Computing Research, 2004
This study investigated the statistical and practical significance of context effects for items randomized within testlets for administration during a series of computerized non-adaptive tests. One hundred and twenty-five items from four primary school reading tests were studied. Logistic regression analyses identified from one to four items for…
Descriptors: Psychometrics, Context Effect, Effect Size, Primary Education
Clariana, Roy B. – International Journal of Instructional Media, 2004
This investigation considers the instructional effects of color as an over-arching context variable when learning from computer displays. The purpose of this investigation is to examine the posttest retrieval effects of color as a local, extra-item non-verbal lesson context variable for constructed-response versus multiple-choice posttest…
Descriptors: Instructional Effectiveness, Graduate Students, Color, Computer System Design