Showing 1 to 15 of 16 results
Peer reviewed
Diao, Qi; van der Linden, Wim J. – Applied Psychological Measurement, 2013
Automated test assembly uses the methodology of mixed integer programming to select an optimal set of items from an item bank. Automated test-form generation uses the same methodology to optimally order the items and format the test form. From an optimization point of view, production of fully formatted test forms directly from the item pool using…
Descriptors: Automation, Test Construction, Test Format, Item Banks
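The mixed integer programming step this abstract refers to can be illustrated with a small item-selection model. Below is a minimal sketch using the open-source PuLP solver, not the authors' software; the pool size, information values, and constraints are invented for illustration.

```python
# Hypothetical sketch of MIP-based automated test assembly with PuLP.
# Item information values and constraints are invented for illustration.
import random
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

random.seed(0)
POOL = 100          # items in the bank
LENGTH = 20         # items per assembled form
info = [random.uniform(0.1, 1.0) for _ in range(POOL)]  # info at target theta

prob = LpProblem("test_assembly", LpMaximize)
x = [LpVariable(f"x_{i}", cat="Binary") for i in range(POOL)]

# Objective: maximize total information at the target ability level.
prob += lpSum(info[i] * x[i] for i in range(POOL))
# Constraint: fixed test length.
prob += lpSum(x) == LENGTH
# Example content constraint: at least 5 items from the first 30 (one "content area").
prob += lpSum(x[i] for i in range(30)) >= 5

prob.solve()
selected = [i for i in range(POOL) if x[i].value() == 1]
print("selected items:", selected)
```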
Peer reviewed
Wyse, Adam E. – Applied Psychological Measurement, 2011
In many practical testing situations, alternate test forms from the same testing program are not strictly parallel to each other and instead the test forms exhibit small psychometric differences. This article investigates the potential practical impact that these small psychometric differences can have on expected classification accuracy. Ten…
Descriptors: Test Format, Test Construction, Testing Programs, Psychometrics
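To make the question concrete, here is a toy simulation (not Wyse's design) of how a small difficulty shift between two otherwise parallel Rasch forms can move classification accuracy; all values are invented.

```python
# Illustrative simulation: classification accuracy under a small
# difficulty shift between two nearly parallel Rasch forms.
import numpy as np

rng = np.random.default_rng(1)
N, ITEMS, CUT = 10_000, 40, 24          # examinees, items per form, number-correct cut
theta = rng.normal(0, 1, N)
b = rng.normal(0, 1, ITEMS)             # base-form difficulties
shift = 0.05                            # small psychometric difference between forms

def scores(theta, b):
    """Simulated number-correct scores under the Rasch model."""
    p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
    return (rng.uniform(size=p.shape) < p).sum(axis=1)

true_pass = theta > 0.5                 # "true" classification at the theta cut
for label, diffs in [("base form", b), ("shifted form", b + shift)]:
    acc = ((scores(theta, diffs) >= CUT) == true_pass).mean()
    print(f"{label}: classification accuracy = {acc:.3f}")
```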
Peer reviewed
Belov, Dmitry I.; Armstrong, Ronald D. – Applied Psychological Measurement, 2008
This article presents an application of Monte Carlo methods for developing and assembling multistage adaptive tests (MSTs). A major advantage of the Monte Carlo assembly over other approaches (e.g., integer programming or enumerative heuristics) is that it provides a uniform sampling from all MSTs (or MST paths) available from a given item pool.…
Descriptors: Monte Carlo Methods, Adaptive Testing, Sampling, Item Response Theory
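A minimal sketch of the uniform-sampling idea, assuming a simple rejection scheme: draw fixed-size item sets uniformly at random and keep those satisfying the constraints, so the accepted sets are a uniform sample from all feasible assemblies. The pool and constraints below are invented.

```python
# Toy rejection sampler in the spirit of Monte Carlo assembly.
import random

random.seed(2)
pool = [{"id": i, "area": i % 3, "info": random.uniform(0.1, 1.0)} for i in range(60)]

def feasible(items):
    """Invented constraints: enough information, coverage of all content areas."""
    total_info = sum(it["info"] for it in items)
    per_area = {a: sum(it["area"] == a for it in items) for a in range(3)}
    return total_info >= 5.0 and all(v >= 2 for v in per_area.values())

samples = []
while len(samples) < 100:
    candidate = random.sample(pool, 10)   # uniform draw of a 10-item set
    if feasible(candidate):
        samples.append(candidate)
print("collected", len(samples), "uniformly sampled feasible forms")
```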
Peer reviewed
Hol, A. Michiel; Vorst, Harrie C. M.; Mellenbergh, Gideon J. – Applied Psychological Measurement, 2007
In a randomized experiment (n = 515), a conventional computerized test and a computerized adaptive test (CAT) are compared. The item pool consists of 24 polytomous motivation items. Although the items were carefully selected, calibration data show that Samejima's graded response model did not fit the data optimally. A simulation study is done to assess possible…
Descriptors: Student Motivation, Simulation, Adaptive Testing, Computer Assisted Testing
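For readers unfamiliar with Samejima's graded response model, a minimal response generator is sketched below; the parameter values are invented and the code is not tied to the authors' calibration.

```python
# Minimal generator of polytomous responses under Samejima's graded
# response model (GRM); parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)

def grm_probs(theta, a, b):
    """Category probabilities for one item with ordered thresholds b."""
    # Boundary curves P(X >= k), k = 1..K-1, via 2PL logistic functions.
    p_star = 1 / (1 + np.exp(-a * (theta - np.asarray(b))))
    upper = np.concatenate(([1.0], p_star))   # P(X >= 0) = 1
    lower = np.concatenate((p_star, [0.0]))   # P(X >= K) = 0
    return upper - lower                      # P(X = k)

theta = rng.normal(0, 1, 5)
a, b = 1.4, [-1.0, 0.0, 1.2]                  # one 4-category item
for t in theta:
    probs = grm_probs(t, a, b)
    x = rng.choice(len(probs), p=probs)
    print(f"theta={t:+.2f} -> response category {x}")
```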
Peer reviewed
van der Linden, Wim J. – Applied Psychological Measurement, 2006
Two local methods for observed-score equating are applied to the problem of equating an adaptive test to a linear test. In an empirical study, the methods were evaluated against a method based on the test characteristic function (TCF) of the linear test and traditional equipercentile equating applied to the ability estimates on the adaptive test…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Format, Equated Scores
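A minimal sketch of the TCF-based comparison method, assuming 2PL items: the ability estimate from the adaptive test is mapped to the expected number-correct score on the linear form. The parameters below are invented.

```python
# Sketch of equating through the linear test's test characteristic
# function (TCF); 2PL parameters are invented for illustration.
import numpy as np

a = np.array([1.0, 1.2, 0.8, 1.5, 0.9])     # linear-form discriminations
b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])   # linear-form difficulties

def tcf(theta):
    """Expected number-correct score on the linear form at ability theta."""
    return np.sum(1 / (1 + np.exp(-a * (theta - b))))

theta_hat = 0.4                              # ability estimate from the CAT
print(f"equated linear-form score: {tcf(theta_hat):.2f}")
```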
Peer reviewed
Henly, Susan J.; And Others – Applied Psychological Measurement, 1989
A group of covariance structure models was examined to ascertain the similarity between conventionally administered and computerized adaptive versions of the Differential Aptitude Test (DAT). Results for 332 students indicate that the computerized version of the DAT is an adequate representation of the conventional test battery. (TJH)
Descriptors: Ability Identification, Adaptive Testing, Comparative Testing, Computer Assisted Testing
Peer reviewed
Armstrong, Ronald D.; Jones, Douglas H.; Koppel, Nicole B.; Pashley, Peter J. – Applied Psychological Measurement, 2004
A multiple-form structure (MFS) is an ordered collection or network of testlets (i.e., sets of items). An examinee's progression through the network of testlets is dictated by the correctness of his or her answers, thereby adapting the test to his or her trait level. The collection of paths through the network yields the set of all possible…
Descriptors: Law Schools, Adaptive Testing, Computer Assisted Testing, Test Format
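A toy illustration of a multiple-form structure, assuming routing by number correct in the current testlet; the network below is invented and far smaller than an operational design.

```python
# Toy multiple-form structure: a network of testlets where routing
# depends on the number correct in the current testlet.
easy = {"name": "stage2-easy", "items": ["e1", "e2", "e3"], "next": {}}
hard = {"name": "stage2-hard", "items": ["h1", "h2", "h3"], "next": {}}
router = {"name": "stage1", "items": ["r1", "r2", "r3"],
          "next": {0: easy, 1: easy, 2: hard, 3: hard}}  # route by number correct

def administer(testlet, answer):
    """Walk the MFS; `answer(item)` returns 1 if correct, else 0."""
    path = []
    while testlet is not None:
        score = sum(answer(item) for item in testlet["items"])
        path.append((testlet["name"], score))
        testlet = testlet["next"].get(score)
    return path

# Example: an examinee who answers every item correctly takes the hard branch.
print(administer(router, lambda item: 1))
```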
Peer reviewed
Neuman, George; Baydoun, Ramzi – Applied Psychological Measurement, 1998
Studied the cross-mode equivalence of paper-and-pencil and computer-based clerical tests with 141 undergraduates. Found no differences across modes for the two types of tests. Differences can be minimized when speeded computerized tests follow the same administration and response procedures as the paper format. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Higher Education
Peer reviewed
Mann, Irene T.; And Others – Applied Psychological Measurement, 1979
Several methodological problems (particularly the assumed bipolarity of scales, instructions regarding use of the midpoint, and concept-scale interaction) which may contribute to a lack of precision in the semantic differential technique were investigated. Results generally supported the use of the semantic differential. (Author/JKS)
Descriptors: Analysis of Variance, Computer Assisted Testing, Higher Education, Rating Scales
Peer reviewed
Harris, Deborah J. – Applied Psychological Measurement, 1991
Effects of passage and item scrambling on equipercentile and item response theory equating were investigated using two scrambled versions of the American College Testing Program Assessment for approximately 25,000 examinees. Results indicate that using a base-form conversion table with a scrambled form affects equating results at the individual examinee level. (SLD)
Descriptors: College Entrance Examinations, Comparative Testing, Context Effect, Equated Scores
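A bare-bones version of equipercentile equating, without the smoothing and continuization used operationally: form-X scores are mapped to the form-Y scale by matching percentile ranks. The score data below are simulated.

```python
# Bare-bones equipercentile equating on simulated number-correct scores.
import numpy as np

rng = np.random.default_rng(7)
x = rng.binomial(40, 0.60, 5000)         # number-correct scores, form X
y = rng.binomial(40, 0.55, 5000)         # number-correct scores, form Y

def equate(score, x, y):
    """Map a form-X score to the form-Y scale via percentile ranks."""
    pr = (x <= score).mean() * 100       # percentile rank of the score on X
    return np.percentile(y, pr)          # the Y score at the same rank

for s in (20, 24, 28):
    print(f"X score {s} -> equated Y score {equate(s, x, y):.1f}")
```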
Peer reviewed
Berger, Martijn P. F. – Applied Psychological Measurement, 1994
This paper focuses on similarities of optimal design of fixed-form tests, adaptive tests, and testlets within the framework of the general theory of optimal designs. A sequential design procedure is proposed that uses these similarities to obtain consistent estimates for the trait level distribution. (SLD)
Descriptors: Achievement Tests, Adaptive Testing, Algorithms, Estimation (Mathematics)
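One building block such a sequential design procedure shares with adaptive testing is choosing the next item to maximize Fisher information at the current trait estimate; a minimal 2PL sketch with an invented pool follows.

```python
# Maximum-information item selection under the 2PL model;
# pool parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(5)
a = rng.uniform(0.5, 2.0, 50)            # pool discriminations
b = rng.normal(0, 1, 50)                 # pool difficulties

def info(theta):
    """2PL Fisher information of every pool item at theta."""
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

theta_hat = 0.3                          # current trait estimate
print("most informative item:", int(np.argmax(info(theta_hat))))
```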
Peer reviewed
Sykes, Robert C.; Ito, Kyoko – Applied Psychological Measurement, 1997
Evaluated the equivalence of scores and one-parameter logistic model item difficulty estimates obtained from computer-based and paper-and-pencil forms of a licensure examination taken by 418 examinees. There was no effect of either order or mode of administration on equivalence. (SLD)
Descriptors: Computer Assisted Testing, Estimation (Mathematics), Health Personnel, Item Response Theory
Peer reviewed
Norcini, John; And Others – Applied Psychological Measurement, 1991
Effects of numbers of experts (NOEs) and common items (CIs) on the scaling of cutting scores from expert judgments were studied for 11,917 physicians taking 2 forms of a medical specialty examination. Increasing NOEs and CIs reduced error; beyond 5 experts and 25 CIs, error differences were small. (SLD)
Descriptors: Comparative Testing, Cutting Scores, Equated Scores, Estimation (Mathematics)
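A toy simulation of the study's design question, assuming independent and unbiased expert judgments: the error of an averaged cut score shrinks roughly with the square root of the number of experts, which is why the gains taper off beyond a handful. All values are invented.

```python
# How cut-score error shrinks as experts are added (invented noise level).
import numpy as np

rng = np.random.default_rng(9)
TRUE_CUT, SD, REPS = 60.0, 4.0, 2000     # true cut, judgment noise, replications
for n_experts in (2, 5, 10, 20):
    cuts = rng.normal(TRUE_CUT, SD, (REPS, n_experts)).mean(axis=1)
    rmse = np.sqrt(((cuts - TRUE_CUT) ** 2).mean())
    print(f"{n_experts:2d} experts: RMSE = {rmse:.2f}")
```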
Peer reviewed
Yao, Lihua; Schwarz, Richard D. – Applied Psychological Measurement, 2006
Multidimensional item response theory (IRT) models have been proposed for better understanding the dimensional structure of data or to define diagnostic profiles of student learning. A compensatory multidimensional two-parameter partial credit model (M-2PPC) for constructed-response items is presented that is a generalization of those proposed to…
Descriptors: Models, Item Response Theory, Markov Processes, Monte Carlo Methods
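A hedged sketch of a compensatory multidimensional partial credit model in the spirit of the M-2PPC: the scalar a*theta of the unidimensional model is replaced by a dot product across ability dimensions. The exact parameterization in the article may differ, and all values below are invented.

```python
# Category probabilities for a compensatory multidimensional
# partial credit model (illustrative parameterization).
import numpy as np

def m2ppc_probs(theta, a, b):
    """theta and a are D-vectors; b holds step parameters (K-1 of them)."""
    z = np.dot(a, theta) - np.asarray(b)            # one term per step
    cum = np.concatenate(([0.0], np.cumsum(z)))     # numerators for K categories
    num = np.exp(cum - cum.max())                   # numerically stabilized
    return num / num.sum()

theta = np.array([0.5, -0.2])                       # two ability dimensions
a = np.array([1.2, 0.8])                            # discriminations per dimension
b = [-0.5, 0.4, 1.1]                                # step difficulties (4 categories)
print(np.round(m2ppc_probs(theta, a, b), 3))
```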
Peer reviewed
Barnes, Janet L.; Landy, Frank J. – Applied Psychological Measurement, 1979
Although behaviorally anchored rating scales have both intuitive and empirical appeal, they have not always yielded superior results in contrast with graphic rating scales. Results indicate that the choice of an anchoring procedure will depend on the nature of the actual rating process. (Author/JKS)
Descriptors: Behavior Rating Scales, Comparative Testing, Higher Education, Rating Scales
Pages: 1 | 2