Showing 691 to 705 of 1,352 results
Kim, Jong-Pil – 1999
This study was conducted to investigate the equivalence of scores from paper-and-pencil (P&P) tests and computerized tests (CTs) through meta-analysis of primary studies using both kinds of tests. For this synthesis, 51 primary studies were selected, resulting in 226 effect sizes. The first synthesis was a typical meta-analysis that treated…
Descriptors: Adaptive Testing, Computer Assisted Testing, Effect Size, Meta Analysis
School Renaissance Inst., Inc., Madison, WI. – 2000
A study comparatively evaluated the Scholastic Reading Inventory (SRI) Interactive Test and Advantage Learning Systems' STAR Reading Computer-Adaptive Standardized Test. Because the two tests use different methods for collecting and calculating norm-referenced scores, scale score measures of reading performance were used for the comparative…
Descriptors: Adaptive Testing, Comparative Analysis, Comparative Testing, Elementary Secondary Education
Zwick, Rebecca; Thayer, Dorothy T. – 2003
This study investigated the applicability to computerized adaptive testing (CAT) data of a differential item functioning (DIF) analysis that involves an empirical Bayes (EB) enhancement of the popular Mantel-Haenszel (MH) DIF analysis method. The computerized Law School Admission Test (LSAT) assumed for this study was similar to that currently…
Descriptors: Adaptive Testing, Bayesian Statistics, College Entrance Examinations, Computer Assisted Testing
Thompson, Tony D.; Davey, Tim – 2000
This paper applies specific information item selection using a method developed by T. Davey and M. Fan (2000) to a multiple-choice passage-based reading test that is being developed for computer administration. Data used to calibrate the multidimensional item parameters for the simulation study consisted of item responses from randomly equivalent…
Descriptors: Adaptive Testing, Computer Assisted Testing, Reading Tests, Selection
Habick, Timothy – 1999
With the advent of computer-based testing (CBT) and the need to increase the number of items available in computer adaptive test pools, the idea of item variants was conceived. An item variant is an item whose content is based, to a greater or lesser degree, on an existing item. Item variants were first proposed as a way to enhance test…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Test Construction
Raiche, Gilles; Blais, Jean-Guy – 2002
In a computerized adaptive test (CAT), it would be desirable to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Decreasing the number of items is accompanied, however, by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. G. Raiche (2000) has…
Descriptors: Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics), Item Response Theory
van der Linden, Wim J. – 2000
A method based on 0-1 linear programming (LP) is presented to stratify an item pool optimally for use in "alpha"-stratified adaptive testing. Because the 0-1 LP model belongs to the subclass of models with a network-flow structure, efficient solutions are possible. The method is applied to a previous item pool from the computerized…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Linear Programming
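The 0-1 LP formulation in the entry above requires an integer-programming solver, but the underlying idea of "alpha"-stratification (partitioning the pool by ascending item discrimination so that highly discriminating items are held back for later test stages) can be sketched simply. This is a minimal illustration of plain sort-based stratification, not the paper's network-flow optimization; the function and parameter names are hypothetical.

```python
import numpy as np

def alpha_stratify(a_params, n_strata):
    """Partition an item pool into strata of ascending discrimination (a).

    Early in an adaptive test, items are drawn from low-a strata; the most
    discriminating items are saved for when the ability estimate is stable.
    This is a simple sort-based split, not the 0-1 LP method of the paper.
    """
    order = np.argsort(a_params)            # item indices sorted by a
    return np.array_split(order, n_strata)  # roughly equal-sized strata

# Example: a pool of four items with discriminations 0.5, 2.0, 1.0, 1.5
strata = alpha_stratify(np.array([0.5, 2.0, 1.0, 1.5]), n_strata=2)
```

Here `strata[0]` holds the low-discrimination items for the first test stage and `strata[1]` the high-discrimination items for the second.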
Peer reviewed
McKinley, Robert L.; Reckase, Mark D. – AEDS Journal, 1980
Describes tailored testing (in which a computer selects appropriate items from an item bank while an examinee is taking a test) and shows it to be superior to paper-and-pencil tests in such areas as reliability, security, and appropriateness of items. (IRT)
Descriptors: Adaptive Testing, Computer Assisted Testing, Higher Education, Program Evaluation
Peer reviewed
Glas, Cees A. W.; van der Linden, Wim J. – Applied Psychological Measurement, 2003
Developed a multilevel item response theory (IRT) model that allows for differences between the distributions of item parameters of families of item clones. Results from simulation studies based on an item pool from the Law School Admission Test illustrate the accuracy of the item pool calibration and adaptive testing procedures based on the model. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Item Response Theory
Peer reviewed
Schnipke, Deborah L.; Green, Bert F. – Journal of Educational Measurement, 1995
Two item selection algorithms, one based on maximal differentiation between examinees and one based on item response theory and maximum information for each examinee, were compared in simulated linear and adaptive tests of cognitive ability. Adaptive tests based on maximum information were clearly superior. (SLD)
Descriptors: Adaptive Testing, Algorithms, Comparative Analysis, Item Response Theory
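Maximum-information item selection, the rule the entry above found clearly superior, picks at each step the unadministered item with the greatest Fisher information at the current ability estimate. A minimal sketch under the two-parameter logistic (2PL) IRT model follows; the function names are illustrative, not from the study.

```python
import numpy as np

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL IRT model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p)."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1 - p)

def select_max_info_item(theta_hat, a, b, administered):
    """Return the index of the unadministered item with maximum
    information at the current ability estimate theta_hat."""
    info = item_information(theta_hat, a, b)
    info[list(administered)] = -np.inf  # exclude items already given
    return int(np.argmax(info))

# Example pool: item 1 is both well-targeted (b = 0) and highly
# discriminating (a = 2), so it is selected first at theta_hat = 0.
a = np.array([1.0, 2.0, 1.0])
b = np.array([0.0, 0.0, 2.0])
first = select_max_info_item(0.0, a, b, administered=set())  # → 1
```

Information peaks where item difficulty matches examinee ability, which is why this rule concentrates measurement precision around each examinee's estimated level rather than spreading items across the whole scale.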
Peer reviewed
Laatsch, Linda; Choca, James – Psychological Assessment, 1994
The authors propose using cluster analysis to develop a branching logic that would allow the adaptive administration of psychological instruments. The proposed methodology is described in detail and used to develop an adaptive version of the Halstead Category Test from archival data. (SLD)
Descriptors: Adaptive Testing, Cluster Analysis, Computer Assisted Testing, Psychological Testing
Peer reviewed
van der Linden, Wim J. – Psychometrika, 1998
This paper suggests several item selection criteria for adaptive testing that are all based on the use of the true posterior. Some of the ability estimators produced by these criteria are discussed and empirically evaluated. (SLD)
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
Peer reviewed
May, Kim; Nicewander, W. Alan – Educational and Psychological Measurement, 1998
The degree to which scale distortion in the ordinary difference score can be removed by using differences based on estimated examinee proficiency (theta) in either conventional or adaptive testing situations was studied using item response theory. Using estimated thetas removed much of the scale distortion for both conventional and adaptive tests. (SLD)
Descriptors: Ability, Achievement Gains, Adaptive Testing, Estimation (Mathematics)
Peer reviewed
Neuman, George; Baydoun, Ramzi – Applied Psychological Measurement, 1998
Studied the cross-mode equivalence of paper-and-pencil and computer-based clerical tests with 141 undergraduates. Found no differences across modes for the two types of tests. Differences can be minimized when speeded computerized tests follow the same administration and response procedures as the paper format. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Higher Education
Peer reviewed
Vispoel, Walter P. – Journal of Educational Measurement, 1998
Compared results from computer-adaptive and self-adaptive tests under conditions in which item review was and was not permitted for 379 college students. Results suggest that, when given the opportunity, most examinees will change answers, but usually only to a small portion of items, resulting in some benefit to the test taker. (SLD)
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Higher Education