Showing 1 to 15 of 25 results
Peer reviewed
Pan, Yiqin; Livne, Oren; Wollack, James A.; Sinharay, Sandip – Educational Measurement: Issues and Practice, 2023
In computerized adaptive testing, overexposure of items in the bank is a serious problem and might result in item compromise. We develop an item selection algorithm that utilizes the entire bank well and reduces the overexposure of items. The algorithm is based on collaborative filtering and selects an item in two stages. In the first stage, a set…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
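The two-stage, collaborative-filtering-based selection in the entry above is only summarized in the abstract, so the following is a minimal sketch of the underlying idea rather than the authors' algorithm: item-item cosine similarity computed from a toy, randomly generated response matrix is used to shortlist candidate items. The function name `shortlist` and all values are hypothetical.
```python
"""Minimal item-item collaborative filtering sketch (not the cited algorithm)."""
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(500, 40))     # toy 0/1 response matrix (examinees x items)
X = responses.astype(float)

# Item-item cosine similarity over the response matrix.
norms = np.linalg.norm(X, axis=0, keepdims=True)
sim = (X.T @ X) / (norms.T @ norms + 1e-12)

def shortlist(administered, pool, k=5):
    """Return the k pool items most similar (on average) to the items already administered."""
    scores = sim[np.ix_(pool, administered)].mean(axis=1)
    return [pool[i] for i in np.argsort(scores)[::-1][:k]]

seen = [3, 17, 28]
print(shortlist(administered=seen, pool=[i for i in range(40) if i not in seen]))
```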
Peer reviewed
Tan, Qingrong; Cai, Yan; Luo, Fen; Tu, Dongbo – Journal of Educational and Behavioral Statistics, 2023
To improve the calibration accuracy and calibration efficiency of cognitive diagnostic computerized adaptive testing (CD-CAT) for new items and, ultimately, contribute to the widespread application of CD-CAT in practice, the current article proposed a Gini-based online calibration method that can simultaneously calibrate the Q-matrix and item…
Descriptors: Cognitive Tests, Computer Assisted Testing, Adaptive Testing, Accuracy
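The entry above names a Gini-based criterion; as a point of reference only, the sketch below computes a Gini (impurity) index over a posterior distribution across attribute patterns, which is one common way such an index is defined. It is not the authors' online calibration procedure, and the example posterior is invented.
```python
import numpy as np

def gini_index(posterior):
    """Gini impurity of a discrete posterior over latent attribute patterns:
    1 - sum_k p_k^2. Lower values indicate a more concentrated (more certain) posterior."""
    p = np.asarray(posterior, dtype=float)
    p = p / p.sum()
    return 1.0 - np.sum(p ** 2)

# Example: posteriors over the 4 attribute patterns of a 2-attribute model.
print(gini_index([0.70, 0.15, 0.10, 0.05]))   # fairly concentrated
print(gini_index([0.25, 0.25, 0.25, 0.25]))   # maximally uncertain -> 0.75
```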
Peer reviewed
Full text available on ERIC
Hanif Akhtar – International Society for Technology, Education, and Science, 2023
For efficiency, a Computerized Adaptive Test (CAT) algorithm selects items with the maximum information, typically items with a 50% probability of being answered correctly. However, examinees may not be satisfied if they correctly answer only 50% of the items. Researchers discovered that changing the item selection algorithms to choose easier items (i.e.,…
Descriptors: Success, Probability, Computer Assisted Testing, Adaptive Testing
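As background to the abstract above, the sketch below contrasts the conventional maximum-information rule with a rule that targets a higher success probability (an "easier" test), using toy 2PL item parameters; all values and names are illustrative and not taken from the study.
```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a ** 2 * p * (1.0 - p)

rng = np.random.default_rng(1)
a = rng.uniform(0.8, 2.0, size=30)      # toy discriminations
b = rng.normal(0.0, 1.0, size=30)       # toy difficulties
theta_hat = 0.3

# Conventional rule: administer the most informative item at the current estimate.
max_info_item = int(np.argmax(info_2pl(theta_hat, a, b)))

# "Easier test" rule: pick the item whose success probability is closest to a higher target.
target = 0.7
easier_item = int(np.argmin(np.abs(p_2pl(theta_hat, a, b) - target)))

print(max_info_item, easier_item)
```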
Peer reviewed
Adema, Jos J.; van der Linden, Wim J. – Journal of Educational Statistics, 1989
Two zero-one linear programing models for constructing tests using classical item and test parameters are given. These models are useful, for instance, when classical test theory must serve as an interface between an item response theory-based item banking system and a test constructor unfamiliar with the underlying theory. (TJH)
Descriptors: Algorithms, Computer Assisted Testing, Item Banks, Linear Programing
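In the spirit of the zero-one models described above (but not reproducing them), here is a minimal 0-1 assembly sketch built on the PuLP package (assumed to be installed): maximize summed item-test correlations subject to a fixed test length and bounds on the mean p-value. All item statistics are invented.
```python
# pip install pulp   (the bundled CBC solver is used by default)
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum

# Toy classical item statistics.
p_values = [0.35, 0.42, 0.50, 0.55, 0.61, 0.48, 0.70, 0.30, 0.58, 0.45]
r_it     = [0.25, 0.40, 0.38, 0.45, 0.30, 0.42, 0.28, 0.33, 0.41, 0.36]  # item-test correlations
n, test_length = len(p_values), 4

prob = LpProblem("classical_test_assembly", LpMaximize)
x = [LpVariable(f"x{i}", cat=LpBinary) for i in range(n)]   # x[i] = 1 if item i is selected

prob += lpSum(r_it[i] * x[i] for i in range(n))             # objective: summed discrimination
prob += lpSum(x) == test_length                             # fixed test length
prob += lpSum(p_values[i] * x[i] for i in range(n)) >= 0.45 * test_length  # mean p-value bounds
prob += lpSum(p_values[i] * x[i] for i in range(n)) <= 0.55 * test_length

prob.solve()
print([i for i in range(n) if x[i].value() == 1])
```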
Peer reviewed
Stocking, Martha L.; Lewis, Charles – Journal of Educational and Behavioral Statistics, 1998
Ensuring item and pool security in a continuous testing environment is explored through a new method of controlling exposure rate of items conditional on ability level in computerized testing. Properties of this conditional control on exposure rate, when used in conjunction with a particular adaptive testing algorithm, are explored using simulated…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Difficulty Level
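The conditional exposure control summarized above builds on Sympson-Hetter-style probabilistic administration; the sketch below shows only that generic filter, with exposure parameters conditioned on an ability stratum. The parameters here are random placeholders; deriving usable values requires the iterative simulations the article studies.
```python
import numpy as np

rng = np.random.default_rng(2)
n_items, n_strata = 50, 5

# Exposure-control parameters k[i, s]: probability that item i, once selected,
# is actually administered to an examinee in ability stratum s (placeholder values).
k = rng.uniform(0.3, 1.0, size=(n_items, n_strata))

def administer(ranked_items, stratum):
    """Walk down an information-ranked candidate list; administer item i with
    probability k[i, stratum], otherwise move to the next candidate."""
    for i in ranked_items:
        if rng.random() < k[i, stratum]:
            return i
    return ranked_items[-1]   # fallback: administer the last candidate

ranked = list(rng.permutation(n_items))   # stand-in for an information-ordered list
print(administer(ranked, stratum=2))
```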
Peer reviewed
van der Linden, Wim J. – Applied Psychological Measurement, 2001
Presents a constrained computerized adaptive testing (CAT) algorithm that can be used to equate CAT number-correct scores to a reference test. Used an item bank from the Law School Admission Test to compare results of the algorithm with those for equipercentile observed-score equating. Discusses advantages of the approach. (SLD)
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Equated Scores
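The article compares its constrained CAT with equipercentile observed-score equating; the sketch below implements only a simplified version of that comparison method (matching percentile ranks via interpolated cumulative distributions on toy score frequencies), not the constrained-CAT algorithm itself.
```python
import numpy as np

def equipercentile(score_x, freq_x, freq_y):
    """Map a number-correct score on form X to the form-Y score with the same
    percentile rank (simplified: linear interpolation on cumulative distributions)."""
    cdf_x = np.cumsum(freq_x) / np.sum(freq_x)
    cdf_y = np.cumsum(freq_y) / np.sum(freq_y)
    p = cdf_x[score_x]                                   # percentile rank of score_x
    return float(np.interp(p, cdf_y, np.arange(len(freq_y))))

# Toy number-correct score frequencies on two 10-item forms.
freq_x = np.array([1, 3, 6, 10, 15, 18, 16, 12, 9, 6, 4])
freq_y = np.array([2, 4, 8, 12, 16, 17, 15, 11, 8, 5, 2])
print(equipercentile(6, freq_x, freq_y))
```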
Chang, Shun-Wen; Ansley, Timothy N.; Lin, Sieh-Hwa – 2000
This study examined the effectiveness of the Sympson and Hetter conditional procedure (SHC), a modification of the Sympson and Hetter (1985) algorithm, in controlling the exposure rates of items in a computerized adaptive testing (CAT) environment. The properties of the procedure were compared with those of the Davey and Parshall (1995) and the…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Item Banks
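Evaluations like the one above typically report item exposure rates from simulated administrations; the sketch below shows one way overall and ability-conditional exposure rates could be tallied from a hypothetical simulation log. The data layout is assumed, not taken from the study.
```python
import numpy as np

rng = np.random.default_rng(3)
n_examinees, n_items, test_len, n_strata = 2000, 100, 20, 5

# Hypothetical simulation log: items administered to each examinee, plus each
# examinee's ability stratum.
administered = np.array([rng.choice(n_items, size=test_len, replace=False)
                         for _ in range(n_examinees)])
stratum = rng.integers(0, n_strata, size=n_examinees)

# Overall exposure rate of each item: proportion of examinees who saw it.
overall = np.bincount(administered.ravel(), minlength=n_items) / n_examinees

# Conditional exposure rate: the same proportion within each ability stratum.
conditional = np.zeros((n_strata, n_items))
for s in range(n_strata):
    seen = administered[stratum == s].ravel()
    conditional[s] = np.bincount(seen, minlength=n_items) / max((stratum == s).sum(), 1)

print(overall.max(), conditional.max())
```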
Lau, C. Allen; Wang, Tianyou – 1999
A study was conducted to extend the sequential probability ratio testing (SPRT) procedure with the polytomous model under some practical constraints in computerized classification testing (CCT), such as methods to control item exposure rate, and to study the effects of other variables, including item information algorithms, test difficulties, item…
Descriptors: Algorithms, Computer Assisted Testing, Difficulty Level, Item Banks
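For orientation, the sketch below shows the basic Wald SPRT classification decision for the dichotomous (Rasch) case: compare the likelihood ratio at two ability points around the cut score against thresholds derived from the nominal error rates. The polytomous extension and the exposure-control conditions studied in the paper are not shown.
```python
import numpy as np

def rasch_p(theta, b):
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def sprt_decision(responses, b, theta0, theta1, alpha=0.05, beta=0.05):
    """Wald SPRT: compare the log likelihood ratio log L(theta1)/L(theta0) against
    log A = log((1 - beta) / alpha) and log B = log(beta / (1 - alpha))."""
    p0, p1 = rasch_p(theta0, b), rasch_p(theta1, b)
    u = np.asarray(responses, dtype=float)
    log_lr = np.sum(u * np.log(p1 / p0) + (1 - u) * np.log((1 - p1) / (1 - p0)))
    if log_lr >= np.log((1 - beta) / alpha):
        return "pass"              # classify above the cut score
    if log_lr <= np.log(beta / (1 - alpha)):
        return "fail"              # classify below the cut score
    return "continue testing"

b = np.array([-0.5, 0.0, 0.3, 0.8, -0.2, 0.5])     # toy item difficulties
print(sprt_decision([1, 1, 0, 1, 1, 1], b, theta0=-0.3, theta1=0.3))
```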
Stocking, Martha L.; And Others – 1991
A previously developed method of automatically selecting items for inclusion in a test subject to constraints on item content and statistical properties is applied to real data. Two tests are first assembled by experts in test construction who normally assemble such tests on a routine basis. Using the same pool of items and constraints articulated…
Descriptors: Algorithms, Automation, Coding, Computer Assisted Testing
Peer reviewed
van der Linden, Wim J.; Scrams, David J.; Schnipke, Deborah L. – Applied Psychological Measurement, 1999
Proposes an item-selection algorithm for neutralizing the differential effects of time limits on computerized adaptive test scores. Uses a statistical model for distributions of examinees' response times on items in a bank that is updated each time an item is administered. Demonstrates the method using an item bank from the Armed Services…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Item Banks
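Assuming a lognormal response-time model of the general kind used in this literature, the sketch below screens candidate items by their predicted response time against the remaining time budget before applying an information criterion. It is an illustration of the idea, not the authors' algorithm, and all parameter values are invented.
```python
import numpy as np

rng = np.random.default_rng(4)
n_items = 40

# Toy lognormal response-time parameters: log T ~ N(beta_i - tau, sigma_i^2),
# where beta_i is the item's time intensity and tau the examinee's speed.
beta  = rng.normal(4.0, 0.3, size=n_items)    # e^4 s is roughly 55 s per item
sigma = rng.uniform(0.3, 0.5, size=n_items)
tau   = 0.1                                   # current speed estimate for this examinee
info  = rng.uniform(0.2, 1.5, size=n_items)   # stand-in item information at theta_hat

def select_item(pool, time_left, items_left):
    """Among items whose predicted response time fits the per-item time budget,
    pick the most informative one; fall back to the quickest item otherwise."""
    budget = time_left / max(items_left, 1)
    expected_t = np.exp(beta[pool] - tau + sigma[pool] ** 2 / 2.0)
    feasible = [pool[i] for i in range(len(pool)) if expected_t[i] <= budget]
    if not feasible:
        return pool[int(np.argmin(expected_t))]
    return max(feasible, key=lambda i: info[i])

print(select_item(pool=list(range(n_items)), time_left=600.0, items_left=10))
```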
Linacre, John Michael – 1988
Computer-adaptive testing (CAT) allows improved security, greater scoring accuracy, shorter testing periods, quicker availability of results, and reduced guessing and other undesirable test behavior. Simple approaches can be applied by the classroom teacher, or other content specialist, who possesses simple computer equipment and elementary…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Cutting Scores
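A minimal example of the kind of simple approach the paper alludes to: a Rasch-based CAT loop that picks the unused item closest in difficulty to the current ability estimate and updates that estimate after each response (a normal prior keeps the estimate finite after an all-correct start). Everything here is a toy simulation.
```python
import numpy as np

rng = np.random.default_rng(5)
difficulties = rng.normal(0.0, 1.0, size=30)   # toy Rasch item difficulties
true_theta = 0.8                               # simulated examinee

def p_correct(theta, b):
    return 1.0 / (1.0 + np.exp(-(theta - b)))

theta_hat, used, responses = 0.0, [], []
for step in range(10):
    # Pick the unused item whose difficulty is closest to the current estimate.
    pool = [i for i in range(len(difficulties)) if i not in used]
    item = min(pool, key=lambda i: abs(difficulties[i] - theta_hat))
    used.append(item)

    # Simulate the response, then take a few Newton steps on the Rasch log-posterior
    # (N(0,1) prior on theta).
    responses.append(int(rng.random() < p_correct(true_theta, difficulties[item])))
    for _ in range(5):
        p = p_correct(theta_hat, difficulties[used])
        grad = np.sum(np.array(responses) - p) - theta_hat
        hess = -np.sum(p * (1 - p)) - 1.0
        theta_hat -= grad / hess

print(round(float(theta_hat), 2), used)
```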
Boekkooi-Timminga, Ellen – 1986
Nine methods for automated test construction are described. All are based on the concepts of information from item response theory. Two general kinds of methods for the construction of parallel tests are presented: (1) sequential test design; and (2) simultaneous test design. Sequential design implies that the tests are constructed one after the…
Descriptors: Algorithms, Computer Assisted Testing, Foreign Countries, Item Banks
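To illustrate only the sequential flavor described above (not any of the nine methods specifically), the sketch below builds two forms one after the other, greedily adding the available item that brings each form's information closest to a target at a few ability points; all parameters are randomly generated.
```python
import numpy as np

rng = np.random.default_rng(6)
a = rng.uniform(0.8, 2.0, size=60)               # toy 2PL discriminations
b = rng.normal(0.0, 1.0, size=60)                # toy 2PL difficulties
theta_grid = np.array([-1.0, 0.0, 1.0])          # ability points where forms should match
target = np.array([3.0, 4.0, 3.0])               # target test information at those points
form_length, n_forms = 10, 2

def item_info(i, theta):
    p = 1.0 / (1.0 + np.exp(-a[i] * (theta - b[i])))
    return a[i] ** 2 * p * (1 - p)

available = set(range(60))
forms = []
for _ in range(n_forms):                         # sequential design: one form after another
    form, info = [], np.zeros_like(theta_grid)
    for _ in range(form_length):
        # Add the item that brings the form's information closest to the target.
        best = min(available,
                   key=lambda i: np.sum((target - (info + item_info(i, theta_grid))) ** 2))
        form.append(best)
        info += item_info(best, theta_grid)
        available.discard(best)
    forms.append(form)

print(forms)
print([np.round(sum(item_info(i, theta_grid) for i in f), 2) for f in forms])
```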
Gershon, Richard; Bergstrom, Betty – 1995
When examinees are allowed to review responses on an adaptive test, can they "cheat" the adaptive algorithm in order to take an easier test and improve their performance? Theoretically, deliberately answering items incorrectly will lower the examinee ability estimate and easy test items will be administered. If review is then allowed,…
Descriptors: Adaptive Testing, Algorithms, Cheating, Computer Assisted Testing
Peer reviewed
Armstrong, R. D.; And Others – Applied Psychological Measurement, 1996
When the network-flow algorithm (NFA) and the average growth approximation algorithm (AGAA) were used for automated test assembly with American College Test and Armed Services Vocational Aptitude Battery item banks, results indicate that reasonable error in item parameters is not harmful for test assembly using NFA or AGAA. (SLD)
Descriptors: Algorithms, Aptitude Tests, College Entrance Examinations, Computer Assisted Testing
Peer reviewed
Armstrong, Ronald D.; And Others – Psychometrika, 1992
A method is presented and illustrated for simultaneously generating multiple tests with similar characteristics from the item bank by using binary programing techniques. The parallel tests are created to match an existing seed test item for item and to match user-supplied taxonomic specifications. (SLD)
Descriptors: Algorithms, Arithmetic, Computer Assisted Testing, Equations (Mathematics)