Publication Date
In 2025: 1
Since 2024: 1
Since 2021 (last 5 years): 2
Since 2016 (last 10 years): 6
Since 2006 (last 20 years): 15
Descriptor
Adaptive Testing: 16
Computer Assisted Testing: 14
Test Items: 9
Simulation: 7
Item Response Theory: 5
Comparative Analysis: 4
Accuracy: 3
Cheating: 3
Evaluation Methods: 3
Item Banks: 3
Psychometrics: 3
Source
International Journal of Testing: 16
Author
Tay, Louis: 2
Aksu Dunya, Beyza: 1
Bass, Michael: 1
Beyza Aksu Dunya: 1
Bonner, Cavan V.: 1
Chernyshenko, Oleksandr S.: 1
Drasgow, Fritz: 1
Glas, Cees A. W.: 1
Guo, Fanmin: 1
Guo, Hongwen: 1
Guo, Jing: 1
Publication Type
Journal Articles: 16
Reports - Research: 13
Reports - Evaluative: 2
Information Analyses: 1
Reports - Descriptive: 1
Education Level
Early Childhood Education: 1
Elementary Education: 1
Grade 3: 1
Grade 4: 1
Grade 5: 1
Higher Education: 1
Intermediate Grades: 1
Middle Schools: 1
Primary Education: 1
Audience
Practitioners: 1
Researchers: 1
Aksu Dunya, Beyza; Wind, Stefanie – International Journal of Testing, 2025
We explored the practicality of relatively small item pools in the context of low-stakes Computer-Adaptive Testing (CAT), such as CAT procedures that might be used for quick diagnostic or screening exams. We used a basic CAT algorithm without content balancing and exposure control restrictions to reflect low-stakes testing scenarios. We examined…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Achievement
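The procedure this abstract describes, a bare-bones CAT with no content balancing or exposure control, is straightforward to prototype. The snippet below is a minimal sketch, assuming a 2PL item pool, maximum-information item selection, and EAP ability updates; the function names, quadrature grid, and prior are illustrative choices, not the authors' implementation.

```python
import numpy as np

def prob_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of each 2PL item at ability theta."""
    p = prob_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

def basic_cat(responder, a, b, max_items=20):
    """Bare-bones CAT: maximum-information selection, no exposure control
    or content balancing. `responder(item)` returns a 0/1 response."""
    theta = 0.0                       # provisional ability estimate
    administered, responses = [], []
    grid = np.linspace(-4, 4, 81)     # quadrature grid for EAP updates
    prior = np.exp(-0.5 * grid**2)    # standard-normal prior (unnormalized)
    for _ in range(max_items):
        info = item_information(theta, a, b)
        info[administered] = -np.inf  # never reuse an administered item
        item = int(np.argmax(info))
        administered.append(item)
        responses.append(responder(item))
        # EAP update of theta given all responses so far
        like = prior.copy()
        for i, r in zip(administered, responses):
            p = prob_2pl(grid, a[i], b[i])
            like *= p**r * (1.0 - p)**(1 - r)
        theta = float(np.sum(grid * like) / np.sum(like))
    return theta, administered
```

With a small pool and no exposure control, the same handful of items tends to win the information race for most examinees, which is precisely the behavior a study like this would need to quantify.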
Liou, Gloria; Bonner, Cavan V.; Tay, Louis – International Journal of Testing, 2022
With the advent of big data and advances in technology, psychological assessments have become increasingly sophisticated and complex. Nevertheless, traditional psychometric issues concerning the validity, reliability, and measurement bias of such assessments remain fundamental in determining whether score inferences of human attributes are…
Descriptors: Psychometrics, Computer Assisted Testing, Adaptive Testing, Data
Wang, Xi; Liu, Yang; Robin, Frederic; Guo, Hongwen – International Journal of Testing, 2019
In an on-demand testing program, some items are repeatedly used across test administrations. This poses a risk to test security. In this study, we considered a scenario wherein a test was divided into two subsets: one consisting of secure items and the other consisting of possibly compromised items. In a simulation study of multistage adaptive…
Descriptors: Identification, Methods, Test Items, Cheating
Morris, Scott B.; Bass, Michael; Howard, Elizabeth; Neapolitan, Richard E. – International Journal of Testing, 2020
The standard error (SE) stopping rule, which terminates a computer adaptive test (CAT) when the SE is less than a threshold, is effective when there are informative questions for all trait levels. However, in domains such as patient-reported outcomes, the items in a bank might all target one end of the trait continuum (e.g., negative…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Banks, Item Response Theory
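The SE stopping rule named in this abstract is simple to state: end the test once the standard error of the ability estimate falls below a threshold. Below is a minimal sketch under a 2PL model, with a maximum-length fallback for the situation the abstract raises (no informative items left for part of the trait continuum); the threshold, cap, and function names are illustrative assumptions.

```python
import numpy as np

def standard_error(theta, a, b, administered):
    """Approximate SE of the ability estimate from summed Fisher information."""
    info = 0.0
    for i in administered:
        p = 1.0 / (1.0 + np.exp(-a[i] * (theta - b[i])))
        info += a[i]**2 * p * (1.0 - p)
    return np.inf if info == 0.0 else 1.0 / np.sqrt(info)

def should_stop(theta, a, b, administered, se_threshold=0.3, max_items=30):
    """SE stopping rule with a length cap: stop when precision is reached,
    or when the test has run long without reaching it (e.g., because the
    remaining items are uninformative at this trait level)."""
    if len(administered) >= max_items:
        return True
    return standard_error(theta, a, b, administered) < se_threshold
```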
Luo, Xiao; Wang, Xinrui – International Journal of Testing, 2019
This study introduced dynamic multistage testing (dy-MST) as an improvement to existing adaptive testing methods. dy-MST combines the advantages of computerized adaptive testing (CAT) and computerized adaptive multistage testing (ca-MST) to create a highly efficient and regulated adaptive testing method. In the test construction phase, multistage…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Construction, Psychometrics
Aksu Dunya, Beyza – International Journal of Testing, 2018
This study was conducted to analyze potential item parameter drift (IPD) impact on person ability estimates and classification accuracy when drift affects an examinee subgroup. Using a series of simulations, three factors were manipulated: (a) percentage of IPD items in the CAT exam, (b) percentage of examinees affected by IPD, and (c) item pool…
Descriptors: Adaptive Testing, Classification, Accuracy, Computer Assisted Testing
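The design factors listed (share of drifting items, share of affected examinees, item pool characteristics) translate directly into a data-generation step for such simulations. The sketch below assumes a 2PL response model in which drifting items become uniformly harder for the affected subgroup only; all sizes, seeds, and names are illustrative, not the study's actual conditions.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_ipd_responses(theta, a, b, drift_items, drift_size, affected):
    """Generate 2PL responses where `drift_items` shift by `drift_size`
    logits, but only for the `affected` examinee subgroup (IPD)."""
    n_persons, n_items = len(theta), len(b)
    b_eff = np.tile(b, (n_persons, 1))
    b_eff[np.ix_(affected, drift_items)] += drift_size  # drift for subgroup
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b_eff)))
    return (rng.random((n_persons, n_items)) < p).astype(int)

# illustrative condition: 20% of items drift by 0.5 logits for 30% of examinees
n_items, n_persons = 200, 1000
a = rng.uniform(0.8, 2.0, n_items)
b = rng.normal(0.0, 1.0, n_items)
theta = rng.normal(0.0, 1.0, n_persons)
drift_items = rng.choice(n_items, size=n_items // 5, replace=False)
affected = rng.choice(n_persons, size=3 * n_persons // 10, replace=False)
data = simulate_ipd_responses(theta, a, b, drift_items, drift_size=0.5, affected=affected)
```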
Wei, Hua; Lin, Jie – International Journal of Testing, 2015
Out-of-level testing refers to the practice of assessing a student with a test that is intended for students at a higher or lower grade level. Although the appropriateness of out-of-level testing for accountability purposes has been questioned by educators and policymakers, incorporating out-of-level items in formative assessments for accurate…
Descriptors: Test Items, Computer Assisted Testing, Adaptive Testing, Instructional Program Divisions
Rios, Joseph A.; Sireci, Stephen G. – International Journal of Testing, 2014
The International Test Commission's "Guidelines for Translating and Adapting Tests" (2010) provide important guidance on developing and evaluating tests for use across languages. These guidelines are widely applauded, but the degree to which they are followed in practice is unknown. The objective of this study was to perform a…
Descriptors: Guidelines, Translation, Adaptive Testing, Second Languages
Talento-Miller, Eileen; Guo, Fanmin; Han, Kyung T. – International Journal of Testing, 2013
When power tests include a time limit, it is important to assess the possibility of speededness for examinees. Past research on differential speededness has examined gender and ethnic subgroups in the United States on paper and pencil tests. When considering the needs of a global audience, research regarding different native language speakers is…
Descriptors: Adaptive Testing, Computer Assisted Testing, English, Scores
Stark, Stephen; Chernyshenko, Oleksandr S. – International Journal of Testing, 2011
This article delves into a relatively unexplored area of measurement by focusing on adaptive testing with unidimensional pairwise preference items. The use of such tests is becoming more common in applied non-cognitive assessment because research suggests that this format may help to reduce certain types of rater error and response sets commonly…
Descriptors: Test Length, Simulation, Adaptive Testing, Item Analysis
Makransky, Guido; Glas, Cees A. W. – International Journal of Testing, 2013
Cognitive ability tests are widely used in organizations around the world because they have high predictive validity in selection contexts. Although these tests typically measure several subdomains, testing is usually carried out for a single subdomain at a time. This can be ineffective when the subdomains assessed are highly correlated. This…
Descriptors: Foreign Countries, Cognitive Ability, Adaptive Testing, Feedback (Response)
Guo, Jing; Tay, Louis; Drasgow, Fritz – International Journal of Testing, 2009
Test compromise is a concern in cognitive ability testing because such tests are widely used in employee selection and administered on a continuous basis. In this study, the resistance of cognitive tests deployed in different test systems to small-scale cheating conspiracies was evaluated in terms of the accuracy of ability estimation.…
Descriptors: Cheating, Cognitive Tests, Adaptive Testing, Computer Assisted Testing
Veldkamp, Bernard P.; van der Linden, Wim J. – International Journal of Testing, 2008
In most operational computerized adaptive testing (CAT) programs, the Sympson-Hetter (SH) method is used to control the exposure of the items. Several modifications and improvements of the original method have been proposed. The Stocking and Lewis (1998) version of the method uses a multinomial experiment to select items. For severely constrained…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Methods
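The Sympson-Hetter method referenced here controls exposure probabilistically: the item that wins the information-based selection is actually administered only with probability equal to its exposure-control parameter; if that experiment fails, the next-best item gets the same chance. Below is a sketch of that selection step under a 2PL model; the k parameters are assumed to have been calibrated in advance (normally done iteratively through simulation), and the names are illustrative.

```python
import numpy as np

def sympson_hetter_select(theta, a, b, k, administered, rng):
    """One SH selection step: rank unused items by Fisher information at
    theta, then administer candidate i with probability k[i]; if the
    exposure experiment fails, fall through to the next candidate."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    info = a**2 * p * (1.0 - p)
    info[administered] = -np.inf          # exclude already-used items
    for item in np.argsort(info)[::-1]:   # most informative items first
        if info[item] == -np.inf:
            break
        if rng.random() < k[item]:        # exposure-control experiment
            return int(item)
    return None  # every eligible candidate was held back
```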
Schmitt, T. A.; Sass, D. A.; Sullivan, J. R.; Walker, C. M. – International Journal of Testing, 2010
Imposed time limits on computer adaptive tests (CATs) can result in examinees having difficulty completing all items, thus compromising the validity and reliability of ability estimates. In this study, the effects of speededness were explored in a simulated CAT environment by varying examinee response patterns to end-of-test items. Expectedly,…
Descriptors: Monte Carlo Methods, Simulation, Computer Assisted Testing, Adaptive Testing
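The manipulation described (varying examinee response patterns to end-of-test items) amounts to overwriting the final items of each simulated response vector with speeded behavior. A minimal sketch, assuming fixed-length response matrices and random guessing as the speeded behavior; the guessing rate and function names are illustrative.

```python
import numpy as np

def apply_speededness(responses, n_speeded, rng, guess_prob=0.25):
    """Overwrite the last `n_speeded` responses of each simulee with random
    guesses, mimicking examinees who run out of time near the end of the test."""
    if n_speeded <= 0:
        return responses.copy()
    speeded = responses.copy()
    n_persons = responses.shape[0]
    speeded[:, -n_speeded:] = (rng.random((n_persons, n_speeded)) < guess_prob).astype(int)
    return speeded
```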
Papanastasiou, Elena C.; Reckase, Mark D. – International Journal of Testing, 2007
Because of the increased popularity of computerized adaptive testing (CAT), many admissions tests, as well as certification and licensure examinations, have been transformed from their paper-and-pencil versions to computerized adaptive versions. A major difference between paper-and-pencil tests and CAT from an examinee's point of view is that in…
Descriptors: Simulation, Adaptive Testing, Computer Assisted Testing, Test Items
Pages: 1 | 2