Showing 721 to 735 of 1,334 results
Zhu, Daming; Fan, Meichu – 1999
The convention for selecting starting points (that is, initial items) on a computerized adaptive test (CAT) is to choose as starting points items of medium difficulty for all examinees. Selecting a starting point based on prior information about an individual's ability was first suggested many years ago, but has been believed unimportant provided…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Difficulty Level
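As a minimal illustration of the contrast the entry above describes, the sketch below picks a starting item either of medium difficulty (the conventional approach) or near a prior ability estimate; it assumes a Rasch-type pool summarized by difficulty values only, and the pool and prior estimate shown are hypothetical.

    # Sketch: choose a CAT starting item at medium difficulty or near a
    # prior ability estimate carried over from earlier information.

    def pick_starting_item(difficulties, prior_theta=None):
        """Return the index of the starting item.

        difficulties : Rasch difficulty parameters for the pool
        prior_theta  : prior ability estimate, or None for the conventional
                       medium-difficulty start (target difficulty 0.0)
        """
        target = 0.0 if prior_theta is None else prior_theta
        return min(range(len(difficulties)), key=lambda i: abs(difficulties[i] - target))

    pool = [-2.0, -1.2, -0.4, 0.0, 0.5, 1.1, 1.9]     # hypothetical item difficulties
    print(pick_starting_item(pool))                   # conventional start: item near 0.0
    print(pick_starting_item(pool, prior_theta=1.3))  # informed start: a harder item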
Peer reviewed
Roos, Linda L.; And Others – Educational and Psychological Measurement, 1996
This article describes Minnesota Computerized Adaptive Testing Language program code for using the MicroCAT 3.5 testing software to administer several types of self-adapted tests. Code is provided for: a basic self-adapted test; a self-adapted version of an adaptive mastery test; and a restricted self-adapted test. (Author/SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Mastery Tests, Programming
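The MicroCAT/MCATL code itself is not reproduced in the abstract; purely as a language-neutral illustration of the basic self-adapted mechanism it describes (the examinee, not the algorithm, chooses the difficulty of the next item), here is a hedged Python sketch with hypothetical difficulty bins. A restricted self-adapted test would simply limit how far the chosen bin may move from one item to the next.

    import random

    # Sketch of a basic self-adapted test: before each item the examinee picks a
    # difficulty bin, and an unused item is drawn from that bin. Bins and pool
    # are hypothetical.

    pool = {                       # difficulty bin -> item identifiers
        "easy":   ["e1", "e2", "e3"],
        "medium": ["m1", "m2", "m3"],
        "hard":   ["h1", "h2", "h3"],
    }

    def administer(choose_bin, n_items=5):
        administered = []
        for _ in range(n_items):
            bin_name = choose_bin()                 # examinee's choice
            candidates = [i for i in pool[bin_name] if i not in administered]
            if not candidates:
                continue
            administered.append(random.choice(candidates))
        return administered

    # Simulated examinee who always asks for medium items.
    print(administer(lambda: "medium"))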
Peer reviewed
Stocking, Martha L. – Journal of Educational and Behavioral Statistics, 1996
An alternative method for scoring adaptive tests, based on number-correct scores, is explored and compared with a method that relies more directly on item response theory. Using the number-correct score with necessary adjustment for intentional differences in adaptive test difficulty is a statistically viable scoring method. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Item Response Theory
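One standard way to adjust a number-correct score for intentional differences in adaptive test difficulty is to invert the test characteristic curve of the particular items an examinee saw; the sketch below (2PL items with hypothetical parameters) illustrates that general idea only, not the specific procedure evaluated in the article.

    import math

    # Convert a number-correct score on a specific adaptive form to an ability
    # estimate by inverting the test characteristic curve (TCC) of the items
    # actually administered.

    items = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.4), (1.0, 1.1)]   # (a, b) pairs

    def tcc(theta):
        """Expected number-correct score at ability theta."""
        return sum(1.0 / (1.0 + math.exp(-a * (theta - b))) for a, b in items)

    def theta_from_number_correct(score, lo=-4.0, hi=4.0, tol=1e-6):
        """Bisection solve of tcc(theta) = score, for 0 < score < number of items."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if tcc(mid) < score:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    print(round(theta_from_number_correct(3), 3))   # ability implied by 3 of 4 correct

Because the adjustment depends on the particular items administered, the same number-correct score maps to different ability estimates on easier and harder adaptive forms.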
Peer reviewed
Potenza, Maria T.; Stocking, Martha L. – Journal of Educational Measurement, 1997
Common strategies for dealing with flawed items in conventional testing, grounded in principles of fairness to examinees, are re-examined in the context of adaptive testing. The additional strategy of retesting from a pool cleansed of flawed items is found, through a Monte Carlo study, to bring about no practical improvement. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Monte Carlo Methods
Peer reviewed
Warm, Thomas A. – Psychometrika, 1989
A new estimation method, Weighted Likelihood Estimation (WLE), is derived mathematically. Two Monte Carlo studies compare WLE with maximum likelihood estimation and Bayesian modal estimation of ability in conventional tests and tailored tests. Advantages of WLE are discussed. (SLD)
Descriptors: Ability, Adaptive Testing, Equations (Mathematics), Estimation (Mathematics)
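For the Rasch model, Warm's weighted likelihood estimate can be written as the root of the usual likelihood score equation plus a correction term I'(theta) / (2 I(theta)). The sketch below implements that special case with a simple bisection search; the item difficulties and response pattern are hypothetical, and this is only an illustration of the estimator, not code from the article.

    import math

    # Warm (1989) weighted likelihood estimation (WLE) of ability, specialized
    # to the Rasch model, where the estimating equation is
    #     sum_i (x_i - P_i(theta)) + I'(theta) / (2 * I(theta)) = 0,
    # with I(theta) = sum_i P_i * (1 - P_i).

    def wle_rasch(responses, difficulties, lo=-6.0, hi=6.0, tol=1e-8):
        def estimating_eq(theta):
            p = [1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties]
            info = sum(pi * (1.0 - pi) for pi in p)
            d_info = sum(pi * (1.0 - pi) * (1.0 - 2.0 * pi) for pi in p)
            score = sum(x - pi for x, pi in zip(responses, p))
            return score + d_info / (2.0 * info)

        # The estimating equation changes sign from positive to negative over
        # (lo, hi); bisection locates the root.
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if estimating_eq(mid) > 0.0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    print(round(wle_rasch([1, 1, 0, 1, 0], [-1.0, -0.3, 0.2, 0.8, 1.5]), 3))

Unlike the unweighted maximum likelihood estimate, this estimate remains finite even for all-correct or all-incorrect response patterns, one of the advantages Warm discusses.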
Peer reviewed
Folk, Valerie Greaud; Green, Bert F. – Applied Psychological Measurement, 1989
Some effects of using unidimensional item response theory (IRT) were examined when the assumption of unidimensionality was violated. Adaptive and nonadaptive tests were used. It appears that use of a unidimensional model can bias parameter estimation, adaptive item selection, and ability estimation for the two types of testing. (TJH)
Descriptors: Ability Identification, Adaptive Testing, Computer Assisted Testing, Computer Simulation
Peer reviewed
Dodd, Barbara G.; And Others – Applied Psychological Measurement, 1989
General guidelines are developed to assist practitioners in devising operational computerized adaptive testing systems based on the graded response model. The effects of the following major variables were examined: item pool size; the step size used along the trait continuum until a maximum likelihood estimate could be computed; and stopping rule…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Item Banks
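As a reminder of the model these guidelines refer to, Samejima's graded response model assigns each response category a probability equal to the difference of adjacent boundary (cumulative) curves; the sketch below computes those category probabilities for one polytomous item with hypothetical parameters.

    import math

    # Graded response model: the probability of responding in category k is
    #     P(X = k) = P*_k(theta) - P*_{k+1}(theta),
    # where P*_k(theta) = 1 / (1 + exp(-a * (theta - b_k))), P*_0 = 1, P*_m = 0.

    def grm_category_probs(theta, a, thresholds):
        boundaries = [1.0]                                   # P*_0
        boundaries += [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in thresholds]
        boundaries.append(0.0)                               # P*_m
        return [boundaries[k] - boundaries[k + 1] for k in range(len(boundaries) - 1)]

    probs = grm_category_probs(theta=0.3, a=1.4, thresholds=[-1.0, 0.0, 1.2])
    print([round(p, 3) for p in probs])                      # four category probabilities
    print(round(sum(probs), 3))                              # sums to 1.0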
Peer reviewed
Reckase, Mark D. – Educational Measurement: Issues and Practice, 1989
Requirements for adaptive testing are reviewed, and the reasons implementation has taken so long are explored. The adaptive test is illustrated through the Stanford-Binet Intelligence Scale of L. M. Terman and M. A. Merrill (1960). Current adaptive testing is tied to the development of item response theory. (SLD)
Descriptors: Adaptive Testing, Educational Development, Elementary Secondary Education, Latent Trait Theory
Peer reviewed
Kingsbury, G. Gage; Zara, Anthony R. – Applied Measurement in Education, 1991
This simulation investigated two procedures that reduce differences between paper-and-pencil testing and computerized adaptive testing (CAT) by making CAT content sensitive. Results indicate that the price in terms of additional test items of using constrained CAT for content balancing is much smaller than that of using testlets. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Computer Simulation
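The constrained CAT procedure studied here balances content by choosing the content area currently furthest below its target proportion and then taking the most informative unused item within it. The sketch below illustrates that selection rule with a hypothetical pool, hypothetical content targets, and a generic 2PL information function; it is a simplified reading of the procedure, not the authors' implementation.

    import math

    # Sketch of constrained CAT content balancing: pick the content area with the
    # largest shortfall relative to its target, then the most informative unused
    # item (2PL information) from that area.

    items = [  # (item_id, content_area, a, b)
        ("i1", "algebra", 1.2, -0.4), ("i2", "algebra", 0.9, 0.6),
        ("i3", "geometry", 1.4, 0.1), ("i4", "geometry", 1.0, 1.0),
        ("i5", "data", 1.1, -0.8), ("i6", "data", 1.3, 0.3),
    ]
    targets = {"algebra": 0.4, "geometry": 0.4, "data": 0.2}

    def info_2pl(theta, a, b):
        p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
        return a * a * p * (1.0 - p)

    def next_item(theta, administered):
        counts = {area: 0 for area in targets}
        for item_id, area, _, _ in items:
            if item_id in administered:
                counts[area] += 1
        total = max(len(administered), 1)
        # Content area furthest below its target proportion.
        area = max(targets, key=lambda c: targets[c] - counts[c] / total)
        candidates = [it for it in items if it[1] == area and it[0] not in administered]
        return max(candidates, key=lambda it: info_2pl(theta, it[2], it[3]))

    print(next_item(theta=0.0, administered={"i1"}))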
Peer reviewed
Birenbaum, Menucha – Studies in Educational Evaluation, 1994
A scheme is introduced for classifying assessment methods by using a mapping sentence, and examples of three tasks from research methodology are provided along with their profiles (structures) based on the mapping sentence. An instrument to determine student assessment preferences is presented and explored. (SLD)
Descriptors: Adaptive Testing, Classification, Educational Assessment, Measures (Individuals)
Peer reviewed
Wang, Tianyou; Vispoel, Walter P. – Journal of Educational Measurement, 1998
Used simulations of computerized adaptive tests to evaluate results yielded by four commonly used ability estimation methods: maximum likelihood estimation (MLE) and three Bayesian approaches. Results show clear distinctions between MLE and Bayesian methods. (SLD)
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
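Of the Bayesian approaches commonly compared with MLE in simulations like these, expected a posteriori (EAP) estimation is the simplest to sketch: a posterior-weighted average of theta over a fixed quadrature grid. The 2PL item parameters, standard normal prior, and response pattern below are hypothetical.

    import math

    # Expected a posteriori (EAP) ability estimation on a quadrature grid:
    # posterior-weighted mean of theta with a standard normal prior and a
    # 2PL likelihood.

    items = [(1.2, -0.5), (0.9, 0.0), (1.5, 0.6), (1.1, 1.2)]   # (a, b)
    responses = [1, 1, 0, 0]

    def likelihood(theta):
        val = 1.0
        for (a, b), x in zip(items, responses):
            p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
            val *= p if x == 1 else (1.0 - p)
        return val

    grid = [-4.0 + 0.1 * k for k in range(81)]                  # quadrature points
    prior = [math.exp(-0.5 * t * t) for t in grid]              # standard normal (unnormalized)
    posterior = [likelihood(t) * w for t, w in zip(grid, prior)]
    eap = sum(t * w for t, w in zip(grid, posterior)) / sum(posterior)
    print(round(eap, 3))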
Peer reviewed
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 1999
Proposes an algorithm that minimizes the asymptotic variance of the maximum-likelihood (ML) estimator of a linear combination of abilities of interest. The criterion results in a closed-form expression that is easy to evaluate. Also shows how the algorithm can be modified if the interest is in a test with a "simple ability structure."…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
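In the multidimensional setting this abstract refers to, the asymptotic variance of the ML estimator of a linear combination lambda'theta is lambda' I(theta)^{-1} lambda, so a natural greedy rule is to administer the candidate item whose information contribution most reduces that quantity. The two-dimensional sketch below illustrates this generic criterion only, not necessarily the closed-form expression derived in the article; the item parameters, weights, and current theta are hypothetical.

    import math

    # Greedy item selection for a linear combination of two abilities: choose the
    # item minimizing lambda' * inverse(I + I_item) * lambda, where I is the test
    # information matrix accumulated so far (2-D compensatory 2PL model).

    lam = (0.7, 0.3)                      # weights of the ability combination
    theta = (0.2, -0.1)
    items = [("j1", (1.3, 0.2), -0.3), ("j2", (0.3, 1.4), 0.1), ("j3", (0.9, 0.9), 0.5)]

    def item_info(a, d):
        """Information matrix contribution of one item at the current theta."""
        z = a[0] * theta[0] + a[1] * theta[1] - d
        p = 1.0 / (1.0 + math.exp(-z))
        w = p * (1.0 - p)
        return [[w * a[0] * a[0], w * a[0] * a[1]],
                [w * a[1] * a[0], w * a[1] * a[1]]]

    def variance_of_combination(info):
        det = info[0][0] * info[1][1] - info[0][1] * info[1][0]
        inv = [[info[1][1] / det, -info[0][1] / det],
               [-info[1][0] / det, info[0][0] / det]]
        row = [lam[0] * inv[0][0] + lam[1] * inv[1][0],
               lam[0] * inv[0][1] + lam[1] * inv[1][1]]
        return row[0] * lam[0] + row[1] * lam[1]

    current = [[0.8, 0.1], [0.1, 0.6]]    # information accumulated from earlier items

    def with_item(item):
        a, d = item[1], item[2]
        add = item_info(a, d)
        return [[current[r][c] + add[r][c] for c in range(2)] for r in range(2)]

    best = min(items, key=lambda it: variance_of_combination(with_item(it)))
    print(best[0])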
Peer reviewed
O'Neill, Thomas; Lunz, Mary E.; Thiede, Keith – Journal of Applied Measurement, 2000
Studied item exposure in a computerized adaptive test when the item selection algorithm presents examinees with questions they were asked in a previous test administration. Results with 178 repeat examinees on a medical technologists' test indicate that the combined use of an adaptive algorithm to select items and latent trait theory to estimate…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Item Response Theory
Peer reviewed
Mosenthal, Peter B. – American Educational Research Journal, 1998
The extent to which variables from a previous study (P. Mosenthal, 1996) on document processing influenced difficulty on 165 tasks from the prose scales of five national adult literacy scales was studied. Three process variables accounted for 78% of the variance when prose task difficulty was defined using level scores. Implications for computer…
Descriptors: Adaptive Testing, Adults, Computer Assisted Testing, Definitions
Peer reviewed
Bennett, Randy Elliot; Morley, Mary; Quardt, Dennis – Applied Psychological Measurement, 2000
Describes three open-ended response types that could broaden the conception of mathematical problem solving used in computerized admissions tests: (1) mathematical expression (ME); (2) generating examples (GE); and (3) graphical modeling (GM). Illustrates how combining ME, GE, and GM can form extended constructed response problems. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Constructed Response, Mathematics Tests