Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 0
Since 2016 (last 10 years) | 1
Since 2006 (last 20 years) | 2
Source
Applied Psychological… | 4
Journal of Educational… | 3
Applied Measurement in… | 2
Educational and Psychological… | 2
Psychometrika | 2
Educational Technology &… | 1
Journal of Educational and… | 1
Turkish Online Journal of… | 1
Author
De Ayala, R. J. | 3
Zwick, Rebecca | 3
Dodd, Barbara G. | 2
Wainer, Howard | 2
van der Linden, Wim J. | 2
Chang, Hua-Hua | 1
Chen, Po-Hsi | 1
Dayton, C. Mitchell | 1
Douglas, Jeff | 1
Eignor, Daniel R. | 1
Ho, Rong-Guey | 1
Publication Type
Reports - Evaluative | 23
Journal Articles | 16
Speeches/Meeting Papers | 4
Guides - Non-Classroom | 1
Reports - Research | 1
Education Level
Elementary Education | 1
Grade 6 | 1
Audience
Practitioners | 1
Location
Taiwan | 2
Assessments and Surveys
Graduate Record Examinations | 1
Law School Admission Test | 1
SAT (College Admission Test) | 1
Wang, Shiyu; Lin, Haiyan; Chang, Hua-Hua; Douglas, Jeff – Journal of Educational Measurement, 2016
Computerized adaptive testing (CAT) and multistage testing (MST) have become two of the most popular modes in large-scale computer-based sequential testing. Though most CAT and MST designs have exhibited both strengths and weaknesses in recent large-scale implementations, there is no simple answer to the question of which design is better because different…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Format, Sequential Approach
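The core step shared by the CAT designs compared in this study is selecting, at each stage, the unadministered item with maximum Fisher information at the current ability estimate. A minimal sketch under the two-parameter logistic (2PL) model; the item pool and its parameters are hypothetical:

```python
import math

def prob_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability level theta."""
    p = prob_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_next_item(theta, pool, administered):
    """Return the index of the most informative unadministered item."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, *pool[i]))

# Hypothetical (discrimination a, difficulty b) parameter pairs.
pool = [(1.0, -1.0), (1.5, 0.0), (0.8, 1.2), (2.0, 0.1)]
next_item = select_next_item(0.0, pool, administered=set())
```

A highly discriminating item with difficulty near the current theta wins; MST replaces this item-by-item choice with preassembled modules routed between stages.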
Liao, Wen-Wei; Ho, Rong-Guey – Turkish Online Journal of Educational Technology - TOJET, 2011
One of the major weaknesses of the item exposure rates of figural items in Intelligence Quotient (IQ) tests lies in their inaccuracy. In this study, a new approach is proposed and a useful test tool known as the Virtual Item Bank (VIB) is introduced. The VIB combines Automatic Item Generation theory and image processing theory with the concepts of…
Descriptors: Intelligence Quotient, Intelligence Tests, Computer Assisted Testing, Adaptive Testing
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 2003
The Hetter and Sympson (1985; 1997) method is a probabilistic item-exposure control method for computerized adaptive testing. Setting its control parameters to admissible values requires an iterative process of computer simulations that has been found to be time consuming, particularly if the parameters have to be set conditional on a realistic…
Descriptors: Law Schools, Adaptive Testing, Admission (School), Computer Assisted Testing
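The control parameters this abstract refers to govern the probability that a selected item is actually administered, capping each item's expected exposure. One adjustment step can be sketched as follows; the selection rates below are hypothetical, and the real method repeats such adjustments over many simulation cycles until exposure rates stabilize, which is the time-consuming iteration the abstract mentions:

```python
def sympson_hetter_step(selection_rates, r_max=0.2):
    """One adjustment step of probabilistic item-exposure control.

    selection_rates: item -> proportion of simulated examinees for whom
    the item was selected. Items selected more often than r_max get a
    control parameter below 1 so that expected exposure
    (selection rate x control parameter) is capped at r_max.
    """
    return {item: min(1.0, r_max / rate) if rate > 0 else 1.0
            for item, rate in selection_rates.items()}

# Hypothetical selection rates from one simulation cycle.
k = sympson_hetter_step({"item_a": 0.5, "item_b": 0.1, "item_c": 0.0})
```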

Dodd, Barbara G.; And Others – Applied Psychological Measurement, 1989
General guidelines are developed to assist practitioners in devising operational computerized adaptive testing systems based on the graded response model. The effects of the following major variables were examined: item pool size; the step size used along the trait continuum until maximum likelihood estimation could be computed; and the stopping rule…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Item Banks

Kingsbury, G. Gage; Zara, Anthony R. – Applied Measurement in Education, 1991
This simulation investigated two procedures that reduce differences between paper-and-pencil testing and computerized adaptive testing (CAT) by making CAT content sensitive. Results indicate that the cost, in additional test items, of using constrained CAT for content balancing is much smaller than the cost of using testlets. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Computer Simulation

Zwick, Rebecca; And Others – Journal of Educational Measurement, 1995
In a simulation study of ability estimation and differential item functioning (DIF) in computerized adaptive tests, Rasch-based DIF statistics were highly correlated with the generating DIF, though the DIF statistics tended to be slightly smaller than in the three-parameter logistic model analyses. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Computer Simulation

Macready, George B.; Dayton, C. Mitchell – Psychometrika, 1992
An adaptive testing algorithm is presented based on an alternative modeling framework, and its effectiveness is investigated in a simulation based on real data. The algorithm uses a latent class modeling framework in which assessed latent attributes are assumed to be categorical variables. (SLD)
Descriptors: Adaptive Testing, Algorithms, Bayesian Statistics, Classification
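In the latent class framework described here, the continuous trait is replaced by categorical class membership, so scoring reduces to Bayesian updating of class probabilities after each response. A minimal sketch with two hypothetical classes and assumed conditional correct-response probabilities:

```python
def update_class_posterior(prior, response, p_correct):
    """Bayes update of latent class probabilities after one scored item.

    prior: class -> prior probability of membership
    response: 1 for correct, 0 for incorrect
    p_correct: class -> P(correct response | class membership)
    """
    likelihood = {c: p_correct[c] if response == 1 else 1.0 - p_correct[c]
                  for c in prior}
    unnorm = {c: prior[c] * likelihood[c] for c in prior}
    total = sum(unnorm.values())
    return {c: v / total for c, v in unnorm.items()}

# Hypothetical two-class example: a correct answer shifts belief
# toward the "master" class.
posterior = update_class_posterior(
    prior={"master": 0.5, "nonmaster": 0.5},
    response=1,
    p_correct={"master": 0.8, "nonmaster": 0.2},
)
```

Adaptive item selection in this framework then picks the item whose response is expected to sharpen this posterior the most.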
Wang, Wen-Chung; Chen, Po-Hsi – Applied Psychological Measurement, 2004
Multidimensional adaptive testing (MAT) procedures are proposed for the measurement of several latent traits by a single examination. Bayesian latent trait estimation and adaptive item selection are derived. Simulations were conducted to compare the measurement efficiency of MAT with those of unidimensional adaptive testing and random…
Descriptors: Item Analysis, Adaptive Testing, Computer Assisted Testing, Computer Simulation

Samejima, Fumiko – Psychometrika, 1994
Using the constant information model, constant amounts of test information, and a finite interval of ability, simulated data were produced for 8 ability levels and 20 test lengths. Analyses suggest that it is desirable to consider modifying test information functions when they are used to measure accuracy in ability estimation. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Computer Simulation
Wainer, Howard; And Others – 1991
A series of computer simulations was run to measure the relationship between testlet validity and the factors of item pool size and testlet length for both adaptive and linearly constructed testlets. Results confirmed the generality of earlier empirical findings of H. Wainer and others (1991) that making a testlet adaptive yields only marginal…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Item Banks

Zwick, Rebecca; And Others – Applied Psychological Measurement, 1994
Simulated data were used to investigate the performance of modified versions of the Mantel-Haenszel method of differential item functioning (DIF) analysis in computerized adaptive tests (CAT). Results indicate that CAT-based DIF procedures perform well and support the use of item response theory-based matching variables in DIF analysis. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Error of Measurement
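The Mantel-Haenszel procedure these modified versions build on pools 2x2 tables (group by correct/incorrect) across strata of a matching variable into a common odds ratio; the CAT adaptation in this study replaces the usual number-correct matching score with an item response theory-based ability estimate. A minimal sketch of the classical statistic, with hypothetical counts:

```python
import math

def mh_common_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio across matching strata.

    Each stratum is a 2x2 table given as
    (ref_correct, ref_incorrect, focal_correct, focal_incorrect).
    """
    num = sum(rc * fi / (rc + ri + fc + fi) for rc, ri, fc, fi in strata)
    den = sum(ri * fc / (rc + ri + fc + fi) for rc, ri, fc, fi in strata)
    return num / den

def mh_delta(alpha):
    """ETS delta metric; negative values flag DIF against the focal group."""
    return -2.35 * math.log(alpha)

# One hypothetical stratum: 30 correct / 10 incorrect in the reference
# group versus 20 / 20 in the focal group.
alpha = mh_common_odds_ratio([(30, 10, 20, 20)])
```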
Li, Yuan H.; Schafer, William D. – Applied Psychological Measurement, 2005
Under a multidimensional item response theory (MIRT) computerized adaptive testing (CAT) testing scenario, a trait estimate (theta) in one dimension will provide clues for subsequently seeking a solution in other dimensions. This feature may enhance the efficiency of MIRT CAT's item selection and its scoring algorithms compared with its…
Descriptors: Adaptive Testing, Item Banks, Computation, Psychological Studies
van der Linden, Wim J.; Reese, Lynda M. – 1997
A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum information at the current ability estimate fixing…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Computer Simulation
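The approach described here assembles, before every item is administered, a full-length "shadow" test that satisfies all content constraints while maximizing information at the current ability estimate, then gives the best not-yet-administered item from it. The real assembly step solves an integer program; the sketch below substitutes a greedy fill with per-area quotas only, on a hypothetical pool, just to show the shape of the selection step:

```python
import math

def info(theta, a, b):
    """2PL item information at theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def assemble_shadow_test(theta, pool, length, quotas, administered):
    """Greedy stand-in for the optimal-assembly step.

    pool: list of (content_area, a, b); quotas: content_area -> max items.
    Items already administered are always retained in the shadow test.
    """
    shadow = sorted(administered)
    counts = {}
    for i in shadow:
        area = pool[i][0]
        counts[area] = counts.get(area, 0) + 1
    # Remaining items, most informative first.
    free = sorted((i for i in range(len(pool)) if i not in administered),
                  key=lambda i: -info(theta, pool[i][1], pool[i][2]))
    for i in free:
        area = pool[i][0]
        if len(shadow) < length and counts.get(area, 0) < quotas[area]:
            shadow.append(i)
            counts[area] = counts.get(area, 0) + 1
    return shadow

# Hypothetical pool: (content area, discrimination, difficulty).
pool = [("algebra", 1.0, 0.0), ("algebra", 2.0, 0.0),
        ("geometry", 1.5, 0.0), ("geometry", 0.5, 0.0)]
shadow = assemble_shadow_test(0.0, pool, length=2,
                              quotas={"algebra": 1, "geometry": 1},
                              administered=set())
```

The next item administered would be the most informative free item in `shadow`; the shadow test is then reassembled at the updated ability estimate.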

De Ayala, R. J.; And Others – Applied Measurement in Education, 1992
A study involving 1,000 simulated examinees compared the partial credit and graded response models in computerized adaptive testing (CAT). The graded response model fit the data well and provided slightly more accurate ability estimates than those of the partial credit model. Benefits of polytomous model-based CATs are discussed. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Computer Simulation

Wainer, Howard; And Others – Journal of Educational Measurement, 1992
Computer simulations were run to measure the relationship between testlet validity and factors of item pool size and testlet length for both adaptive and linearly constructed testlets. Making a testlet adaptive yields only modest increases in aggregate validity because of the peakedness of the typical proficiency distribution. (Author/SLD)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Computer Simulation