Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 0 |
Since 2006 (last 20 years) | 5 |
Descriptor
Computer Assisted Testing | 6 |
Item Response Theory | 3 |
Simulation | 3 |
Test Items | 3 |
Adaptive Testing | 2 |
Computation | 2 |
Computer Software | 2 |
Reliability | 2 |
Ability | 1 |
Achievement Tests | 1 |
Bias | 1 |
Source
International Journal of… | 6 |
Author
Brown, Richard S. | 1 |
Carlstedt, Berit | 1 |
Gustafsson, Jan-Eric | 1 |
Lievens, Filip | 1 |
Rupp, Andre A. | 1 |
Sass, D. A. | 1 |
Schmitt, T. A. | 1 |
Sullivan, J. R. | 1 |
Ullstadius, Eva | 1 |
Veldkamp, Bernard P. | 1 |
Villarreal, Julio C. | 1 |
Publication Type
Journal Articles | 6 |
Reports - Evaluative | 6 |
Schmitt, T. A.; Sass, D. A.; Sullivan, J. R.; Walker, C. M. – International Journal of Testing, 2010
Imposed time limits on computer adaptive tests (CATs) can result in examinees having difficulty completing all items, thus compromising the validity and reliability of ability estimates. In this study, the effects of speededness were explored in a simulated CAT environment by varying examinee response patterns to end-of-test items. Expectedly,…
Descriptors: Monte Carlo Methods, Simulation, Computer Assisted Testing, Adaptive Testing
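A minimal Monte Carlo sketch of the kind of speededness simulation the abstract describes, under a 3PL model: examinees answer most items normally, guess at random on the final items, and the resulting bias in EAP ability estimates is checked. The item counts, guessing mechanism, and estimation method are illustrative assumptions, not the authors' design.

# Hedged sketch (not the authors' code): simulate guessing on end-of-test items
# under a 3PL model and check how it biases EAP ability estimates.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_examinees, n_speeded = 40, 1000, 10      # last 10 items answered by guessing

a = rng.lognormal(0.0, 0.3, n_items)                 # discrimination
b = rng.normal(0.0, 1.0, n_items)                    # difficulty
c = np.full(n_items, 0.2)                            # pseudo-guessing
theta = rng.normal(0.0, 1.0, n_examinees)            # true abilities

def p_correct(theta, a, b, c):
    """3PL probability of a correct response for each examinee-item pair."""
    return c + (1 - c) / (1 + np.exp(-a * (theta[:, None] - b)))

p = p_correct(theta, a, b, c)
x = (rng.random(p.shape) < p).astype(int)            # unspeeded responses
x[:, -n_speeded:] = (rng.random((n_examinees, n_speeded)) < c[-n_speeded:]).astype(int)

# EAP ability estimates on a quadrature grid with a standard-normal prior
grid = np.linspace(-4, 4, 81)
pg = p_correct(grid, a, b, c)
loglik = x @ np.log(pg).T + (1 - x) @ np.log(1 - pg).T
loglik -= loglik.max(axis=1, keepdims=True)          # guard against underflow
post = np.exp(loglik) * np.exp(-grid**2 / 2)
theta_hat = (post * grid).sum(axis=1) / post.sum(axis=1)

print("mean bias from end-of-test guessing:", np.mean(theta_hat - theta).round(3))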
Ullstadius, Eva; Carlstedt, Berit; Gustafsson, Jan-Eric – International Journal of Testing, 2008
The influence of general and verbal ability on each of 72 verbal analogy test items was investigated with new factor-analytic techniques. The analogy items, together with the Computerized Swedish Enlistment Battery (CAT-SEB), were given randomly to two samples of 18-year-old male conscripts (n = 8566 and n = 5289). Thirty-two of the 72 items had…
Descriptors: Test Items, Verbal Ability, Factor Analysis, Swedish
Veldkamp, Bernard P. – International Journal of Testing, 2008
Integrity[TM], an online application for testing both the statistical integrity of the test and the academic integrity of the examinees, was evaluated for this review. Program features and the program output are described. An overview of the statistics in Integrity[TM] is provided, and the application is illustrated with a small simulation study.…
Descriptors: Simulation, Integrity, Statistics, Computer Assisted Testing
Lievens, Filip – International Journal of Testing, 2006
The International Test Commission's (this issue) Guidelines on Computer-Based and Internet-Delivered Testing constitute a comprehensive and excellent set of internationally agreed guidelines. This article looks forward and discusses what should be done to ensure that these guidelines will be used in educational, clinical, and organizational…
Descriptors: Professional Associations, Computer Assisted Testing, Internet, Guidelines
Brown, Richard S.; Villarreal, Julio C. – International Journal of Testing, 2007
There has been considerable research regarding the extent to which psychometrically sound assessments sometimes yield individual score estimates that are inconsistent with the response patterns of the individual. It has been suggested that individual response patterns may differ from expectations for a number of reasons, including subject motivation,…
Descriptors: Psychometrics, Test Bias, Testing, Simulation
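The abstract concerns score estimates that are inconsistent with an individual's response pattern; one common way to flag such patterns is a person-fit statistic. Below is a minimal sketch of the standardized log-likelihood index l_z under a 2PL model, offered as an illustration of the idea rather than the method used in the article; the item parameters and response patterns are made up.

# Hedged illustration: the l_z person-fit statistic (Drasgow, Levine, & Williams)
# under a 2PL model flags response patterns that are unlikely given an ability estimate.
import numpy as np

def lz_person_fit(x, theta, a, b):
    """Standardized log-likelihood person-fit index for one examinee.
    Large negative values suggest an aberrant (misfitting) response pattern."""
    p = 1 / (1 + np.exp(-a * (theta - b)))
    observed = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
    expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    variance = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (observed - expected) / np.sqrt(variance)

rng = np.random.default_rng(1)
a, b = rng.lognormal(0, 0.3, 30), rng.normal(0, 1, 30)    # hypothetical item parameters
theta = 1.5                                               # ability estimate under scrutiny
p = 1 / (1 + np.exp(-a * (theta - b)))
consistent = (rng.random(30) < p).astype(int)             # pattern in line with theta
aberrant = rng.integers(0, 2, 30)                         # e.g., unmotivated random responding
print(lz_person_fit(consistent, theta, a, b), lz_person_fit(aberrant, theta, a, b))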
Rupp, Andre A. – International Journal of Testing, 2003
Item response theory (IRT) has become one of the most popular scoring frameworks for measurement data. IRT models are used frequently in computerized adaptive testing, cognitively diagnostic assessment, and test equating. This article reviews two of the most popular software packages for IRT model estimation, BILOG-MG (Zimowski, Muraki, Mislevy, &…
Descriptors: Test Items, Adaptive Testing, Item Response Theory, Computer Software
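As a small, assumed illustration of what the reviewed packages estimate and how the results get used: the 2PL item response function, its Fisher information, and the maximum-information rule with which a computerized adaptive test picks its next item. The item bank and provisional ability value below are invented for the example.

# Illustrative sketch (assumed, not from the article): 2PL response probability,
# item information, and maximum-information item selection for one CAT step.
import numpy as np

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1 / (1 + np.exp(-a * (theta - b)))

def info_2pl(theta, a, b):
    """Fisher information of 2PL items at ability theta."""
    p = p_2pl(theta, a, b)
    return a ** 2 * p * (1 - p)

rng = np.random.default_rng(3)
a = rng.lognormal(0, 0.3, 100)         # hypothetical calibrated item bank
b = rng.normal(0, 1, 100)

theta_hat = 0.4                        # provisional ability estimate mid-test
administered = [5, 17, 42]             # items already given
info = info_2pl(theta_hat, a, b)
info[administered] = -np.inf           # never reuse an item
next_item = int(np.argmax(info))
print("next item:", next_item, "difficulty:", round(float(b[next_item]), 2))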