Showing 616 to 630 of 1,057 results
Peer reviewed
Stocking, Martha L.; Ward, William C.; Potenza, Maria T. – Journal of Educational Measurement, 1998
Explored, using simulations, the use of disclosed items under continuous testing conditions in a worst-case scenario that assumes disclosed items are always answered correctly. Some item pool and test designs were identified in which the use of disclosed items produces effects on test scores that may be viewed as negligible. (Author/MAK)
Descriptors: Adaptive Testing, Cheating, Computer Assisted Testing, Item Banks
Peer reviewed
van der Linden, Wim J.; Glas, Cees A. W. – Applied Measurement in Education, 2000
Performed a simulation study to demonstrate the dramatic impact that capitalization on estimation errors can have on ability estimation in adaptive testing. Discusses four different strategies for minimizing the likelihood of capitalization in computerized adaptive testing. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
Peer reviewed
Chang, Hua-Hua; Ying, Zhiliang – Applied Psychological Measurement, 1999
Proposes a new multistage adaptive-testing procedure that factors the discrimination parameter (alpha) into the item-selection process. Simulation studies indicate that the new strategy results in tests that are both well balanced with respect to item exposure and efficient. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Selection
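The entry above names the core idea only; a minimal sketch of a-stratified selection may make it concrete. Everything here (the function name, a NumPy item bank of 2PL discrimination/difficulty vectors, four strata) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def a_stratified_select(a, b, administered, theta, stage, n_strata=4):
    """Sketch of a-stratified item selection: partition the pool into
    strata by ascending discrimination (a), draw from low-a strata in
    early stages, and match difficulty (b) to the current theta."""
    order = np.argsort(a)                     # low-a items first
    strata = np.array_split(order, n_strata)
    available = [i for i in strata[stage] if i not in administered]
    # Within the active stratum, pick the closest difficulty match.
    return min(available, key=lambda i: abs(b[i] - theta))
```

Drawing from low-a strata first saves the highly discriminating items for later stages, when the ability estimate is stable enough to use them efficiently, which also evens out item exposure across the pool.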
Peer reviewed
Direct link
Chen, Shu-Ying; Lei, Pui-Wa – Applied Psychological Measurement, 2005
This article proposes an item exposure control method that extends the Sympson and Hetter procedure and can provide exposure control at both the item and test levels. Item exposure rate and test overlap rate are two indices commonly used to track item exposure in computerized adaptive tests. By considering both indices, item…
Descriptors: Computer Assisted Testing, Test Items, Computer Simulation, Evaluation Criteria
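As background for the entry above: the baseline Sympson and Hetter procedure controls exposure by administering a selected item only with a pre-calibrated probability. The sketch below shows that classic item-level form only; the article's joint item- and test-level extension is not reproduced here, and all names are illustrative:

```python
import random

def sympson_hetter_pick(ranked_candidates, k, administered):
    """Classic Sympson-Hetter exposure control (sketch).

    ranked_candidates : item indices, most informative first
    k : dict mapping item -> exposure-control probability in (0, 1]
    """
    for item in ranked_candidates:
        if item in administered:
            continue
        # Administer with probability k[item]; otherwise fall through
        # to the next-best candidate, capping the item's exposure rate.
        if random.random() <= k[item]:
            return item
    return None  # no candidate survived; caller must handle
```

The k values are typically tuned in repeated simulations so that each item's realized exposure rate stays below a target ceiling.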
Peer reviewed
Direct link
Van Rijn, P. W.; Eggen, T. J. H. M.; Hemker, B. T.; Sanders, P. F. – Applied Psychological Measurement, 2002
In the present study, a procedure that has been used to select dichotomous items in computerized adaptive testing was applied to polytomous items. This procedure was designed to select the item with maximum weighted information. In a simulation study, the item information function was integrated over a fixed interval of ability values and the item…
Descriptors: Intervals, Simulation, Adaptive Testing, Computer Assisted Testing
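The selection criterion in the entry above (information integrated over a fixed ability interval rather than evaluated at a point) is easy to state concretely. Here is a sketch for the dichotomous 2PL case as a stand-in for the polytomous items the study actually used; the interval half-width and parameter names are assumptions:

```python
import numpy as np

def interval_information(a, b, theta_hat, half_width=0.5, n_points=41):
    """Integrate 2PL Fisher information over a fixed interval around
    the current ability estimate (trapezoid rule) -- a dichotomous
    sketch of the interval-information selection criterion."""
    theta = np.linspace(theta_hat - half_width, theta_hat + half_width, n_points)
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return np.trapz(a**2 * p * (1.0 - p), theta)  # 2PL item information

# Selection: the next item maximizes the integrated information, e.g.
# best = max(pool, key=lambda i: interval_information(a[i], b[i], theta_hat))
```

Integrating over an interval rather than maximizing at a single point hedges against the unreliability of early ability estimates.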
Peer reviewed
Direct link
van der Linden, Wim J.; Breithaupt, Krista; Chuah, Siang Chee; Zhang, Yanwei – Journal of Educational Measurement, 2007
A potential undesirable effect of multistage testing is differential speededness, which happens if some of the test takers run out of time because they receive subtests with items that are more time intensive than others. This article shows how a probabilistic response-time model can be used for estimating differences in time intensities and speed…
Descriptors: Adaptive Testing, Evaluation Methods, Test Items, Reaction Time
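The response-time model family referenced above is typically lognormal: log response time decomposes into an item time intensity minus a person speed. As a loudly labeled simplification, crude moment estimates (rather than the probabilistic estimation the article develops) look like this:

```python
import numpy as np

def lognormal_rt_moments(log_times):
    """Moment estimates for a lognormal response-time model of the form
    ln T_ij = beta_i - tau_j + error, identified by centering the
    speeds tau at zero.  log_times: persons x items array of log seconds."""
    tau = log_times.mean() - log_times.mean(axis=1)  # person speeds
    beta = log_times.mean(axis=0)                    # item time intensities
    return beta, tau
```

With time intensities beta in hand, subtests can be assembled to roughly equal total intensity, which is one way differential speededness across multistage routes can be mitigated.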
Peer reviewed
Direct link
Zabaleta, Francisco – CALICO Journal, 2007
Placing students of a foreign language within a basic language program constitutes an ongoing problem, particularly for large university departments when they have many incoming freshmen and transfer students. This article outlines the author's experience designing and piloting a language placement test for a university level Spanish program. The…
Descriptors: Test Items, Student Placement, Spanish, Transfer Students
Peer reviewed
Direct link
Gvozdenko, Eugene; Chambers, Dianne – Australasian Journal of Educational Technology, 2007
This paper investigates how monitoring the time spent on each question in a test of basic mathematics skills can provide insights into learning processes, the quality of test takers' knowledge, and the cognitive demands and performance of test items that would otherwise remain undiscovered if the usual accuracy-only test outcome…
Descriptors: Reaction Time, Computer Assisted Testing, Mathematics Tests, Test Items
Stocking, Martha L. – 1993
In the context of paper and pencil testing, the frequency of the exposure of items is usually controlled through policies that regulate both the reuse of test forms and the frequency with which a candidate may retake the test. In the context of computerized adaptive testing, where item pools are large and expensive to produce and testing can be on…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Models
Peer reviewed
Hoyt, Kenneth B. – Journal of Counseling & Development, 1986
The microcomputer version of the Ohio Vocational Interest Survey (OVIS II) differs from the machine-scored version in its ability to incorporate data from the OVIS II: Career Planner in its printed report. It differs from the hand-scored version in its ability to include data from the OVIS II: Work Characteristic Analysis in its printed report.…
Descriptors: Comparative Analysis, Computer Assisted Testing, Microcomputers, Test Format
Mizokawa, Donald T.; Hamlin, Michael D. – Educational Technology, 1984
Suggestions for software design in computer managed testing (CMT) cover instructions to testees, their physical format, provision of practice items, and time limit information; test item presentation, physical format, discussion of task demands, review capabilities, and rate of presentation; pedagogically helpful utilities; typefonts; vocabulary;…
Descriptors: Computer Assisted Testing, Decision Making, Guidelines, Test Construction
Wen, Jian-Bing; Chang, Hua-Hua; Hau, Kit-Tai – 2000
Test security has often been a problem in computerized adaptive testing (CAT) because traditional item selection overexposes highly discriminating items. The a-stratified (STR) design advocated by H. Chang and his collaborators, which uses less discriminating items in the earlier stages of testing, has been shown to be very…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
Hau, Kit-Tai; Wen, Jian-Bing; Chang, Hua-Hua – 2002
In the a-stratified method, a popular and efficient item exposure control strategy proposed by H. Chang (H. Chang and Z. Ying, 1999; K. Hau and H. Chang, 2001) for computerized adaptive testing (CAT), the item pool and item selection process have usually been divided into four strata and corresponding four stages. In a series of simulation…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
Kalohn, John C.; Spray, Judith A. – 2000
A client of American College Testing, Inc. (ACT) decided to implement a computer-based testing program to replace its paper-and-pencil format for professional certification. This paper reports on the results of the developed test after one year's use, especially as the results relate to test security issues. ACT research shows that a variable length…
Descriptors: Certification, Classification, Computer Assisted Testing, Licensing Examinations (Professions)
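Variable-length classification tests of the kind mentioned above often stop with a sequential probability ratio test; whether the ACT program used exactly this rule is not stated in the abstract, so the sketch below is a generic illustration with assumed error rates:

```python
import math

def sprt_decision(log_lik_ratio, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for pass/fail
    classification: compare the running log-likelihood ratio
    (pass hypothesis vs. fail hypothesis) to two fixed bounds."""
    upper = math.log((1.0 - beta) / alpha)  # decide "pass" above this
    lower = math.log(beta / (1.0 - alpha))  # decide "fail" below this
    if log_lik_ratio >= upper:
        return "pass"
    if log_lik_ratio <= lower:
        return "fail"
    return "continue"  # administer another item
```

Because examinees far from the cut score hit a bound quickly, test length varies with examinee ability, and on average fewer items are exposed per examinee, which is one reason such designs interact with test security.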
Krass, Iosif A.; Thomasson, Gary L. – 1999
New items are being calibrated for the next generation of the computerized adaptive (CAT) version of the Armed Services Vocational Aptitude Battery (ASVAB) (Forms 5 and 6). The requirements that the items be "good" three-parameter logistic (3-PL) model items and typically "like" items in the previous CAT-ASVAB tests have…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Nonparametric Statistics