Showing 1 to 15 of 27 results
Peer reviewed
Nesrine Mansouri; Mourad Abed; Makram Soui – Education and Information Technologies, 2024
Selecting undergraduate majors or specializations is a crucial decision for students since it considerably impacts their educational and career paths. Moreover, their decisions should match their academic background, interests, and goals to pursue their passions and discover various career paths with motivation. However, such a decision remains…
Descriptors: Undergraduate Students, Decision Making, Majors (Students), Specialization
Peer reviewed
Gorgun, Guher; Bulut, Okan – Large-scale Assessments in Education, 2023
In low-stakes assessment settings, students' performance is not only influenced by students' ability level but also their test-taking engagement. In computerized adaptive tests (CATs), disengaged responses (e.g., rapid guesses) that fail to reflect students' true ability levels may lead to the selection of less informative items and thereby…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
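Rapid guessing, as described in the abstract above, is commonly operationalized with a response-time threshold. A minimal sketch of that idea follows; the fixed 3-second cutoff and the timing data are illustrative assumptions, not values from the paper (operational programs typically derive item-specific thresholds from response-time distributions):

```python
# Flag likely disengaged (rapid-guess) responses by response time.
# The 3-second threshold is an illustrative assumption; in practice,
# thresholds are usually estimated per item from response-time data.

def flag_rapid_guesses(response_times, threshold=3.0):
    """Return True for each response faster than the threshold (seconds)."""
    return [t < threshold for t in response_times]

times = [1.2, 14.5, 2.8, 30.1, 0.9]   # seconds per item (illustrative)
print(flag_rapid_guesses(times))       # [True, False, True, False, True]
```

Flagged responses can then be excluded from, or down-weighted in, the ability estimate so that disengagement does not distort subsequent item selection.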
Veldkamp, Bernard P. – 2002
This paper discusses optimal test construction, which deals with the selection of items from a pool to construct a test that performs optimally with respect to the objective of the test and simultaneously meets all test specifications. Optimal test construction problems can be formulated as mathematical decision models. Algorithms and heuristics…
Descriptors: Algorithms, Item Banks, Selection, Test Construction
Peer reviewed
Eggen, T. J. H. M. – Applied Psychological Measurement, 1999
Evaluates a method for item selection in adaptive testing that is based on Kullback-Leibler information (KLI) (T. Cover and J. Thomas, 1991). Simulation study results show that testing algorithms using KLI-based item selection perform better than or as well as those using Fisher information item selection. (SLD)
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Selection
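The Kullback-Leibler index in the abstract above measures how well an item discriminates between two candidate ability values. A minimal sketch for a dichotomous item under an assumed 2PL model follows; the item parameters and ability values are illustrative, and this is the basic per-item KL quantity rather than the full selection procedure studied in the paper:

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def kl_info(theta0, theta1, a, b):
    """KL information of one dichotomous item for discriminating
    between ability values theta0 and theta1."""
    p0, p1 = p_2pl(theta0, a, b), p_2pl(theta1, a, b)
    return p0 * math.log(p0 / p1) + (1 - p0) * math.log((1 - p0) / (1 - p1))

# An item whose difficulty sits between the two abilities carries more
# KL information than an off-target item (values illustrative):
print(kl_info(0.5, -0.5, a=1.5, b=0.0))   # on-target item
print(kl_info(0.5, -0.5, a=1.5, b=2.0))   # off-target item, smaller value
```

In adaptive use, the two ability values are typically taken around the current estimate, and the item maximizing the index (or its integral over an interval) is administered next.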
Peer reviewed
Berger, Martijn P. F. – Journal of Educational Statistics, 1994
Problems in the selection of optimal designs in item response theory (IRT) models are resolved through a sequential design procedure that is a modification of the D-optimality procedure proposed by Wynn (1970). This algorithm leads to consistent estimates, and errors in selecting the abilities generally do not greatly affect optimality. (SLD)
Descriptors: Ability, Algorithms, Estimation (Mathematics), Item Response Theory
Stocking, Martha L.; And Others – 1991
A previously developed method of automatically selecting items for inclusion in a test subject to constraints on item content and statistical properties is applied to real data. Two tests are first assembled by experts in test construction who normally assemble such tests on a routine basis. Using the same pool of items and constraints articulated…
Descriptors: Algorithms, Automation, Coding, Computer Assisted Testing
Bowles, Ryan; Pommerich, Mary – 2001
Many arguments have been made against allowing examinees to review and change their answers after completing a computer adaptive test (CAT). These arguments include: (1) increased bias; (2) decreased precision; and (3) susceptibility to test-taking strategies. Results of simulations suggest that the strength of these arguments is reduced or…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Review (Reexamination)
Peer reviewed
Schnipke, Deborah L.; Green, Bert F. – Journal of Educational Measurement, 1995
Two item selection algorithms, one based on maximal differentiation between examinees and one based on item response theory and maximum information for each examinee, were compared in simulated linear and adaptive tests of cognitive ability. Adaptive tests based on maximum information were clearly superior. (SLD)
Descriptors: Adaptive Testing, Algorithms, Comparative Analysis, Item Response Theory
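Maximum-information item selection, which the Schnipke and Green abstract above found clearly superior, picks at each step the unused item with the greatest Fisher information at the current ability estimate. A minimal sketch under an assumed 2PL model follows; the item pool and parameter values are illustrative, not taken from the study:

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at theta: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_item(theta_hat, pool, administered):
    """Return the index of the unused item with maximum information
    at the current ability estimate theta_hat."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates, key=lambda i: fisher_info(theta_hat, *pool[i]))

pool = [(1.0, -1.0), (1.4, 0.1), (0.8, 1.2)]   # (a, b) pairs, illustrative
print(select_item(0.0, pool, administered={0}))  # picks item 1 (b near 0)
```

Information for a 2PL item peaks at theta = b, so the procedure tends to administer items whose difficulty tracks the evolving ability estimate, which is the source of a CAT's efficiency.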
van der Linden, Wim J. – 1997
The case of adaptive testing under a multidimensional logistic response model is addressed. An adaptive algorithm is proposed that minimizes the (asymptotic) variance of the maximum-likelihood (ML) estimator of a linear combination of abilities of interest. The item selection criterion is a simple expression in closed form. In addition, it is…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
van der Linden, Wim J.; Scrams, David J.; Schnipke, Deborah L. – 2003
This paper proposes an item selection algorithm that can be used to neutralize the effect of time limits in computer adaptive testing. The method is based on a statistical model for the response-time distributions of the test takers on the items in the pool that is updated each time a new item has been administered. Predictions from the model are…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Linear Programming
Peer reviewed
O'Neill, Thomas; Lunz, Mary E.; Thiede, Keith – Journal of Applied Measurement, 2000
Studied item exposure in a computerized adaptive test when the item selection algorithm presents examinees with questions they were asked in a previous test administration. Results with 178 repeat examinees on a medical technologists' test indicate that the combined use of an adaptive algorithm to select items and latent trait theory to estimate…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Item Response Theory
Peer reviewed
van der Linden, Wim J.; Scrams, David J.; Schnipke, Deborah L. – Applied Psychological Measurement, 1999
Proposes an item-selection algorithm for neutralizing the differential effects of time limits on computerized adaptive test scores. Uses a statistical model for distributions of examinees' response times on items in a bank that is updated each time an item is administered. Demonstrates the method using an item bank from the Armed Services…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Item Banks
Hu, Ming-xiu; Salvucci, Sameena – 2001
Many imputation techniques and imputation software packages have been developed over the years to deal with missing data. Different methods may work well under different circumstances, and it is advisable to conduct a sensitivity analysis when choosing an imputation method for a particular survey. This study reviewed about 30 imputation methods…
Descriptors: Algorithms, Computer Simulation, Data Analysis, Longitudinal Studies
Peer reviewed
Stocking, Martha L.; And Others – Applied Psychological Measurement, 1993
A method of automatically selecting items for inclusion in a test with constraints on item content and statistical properties was applied to real data. Tests constructed manually from the same data and constraints were compared to tests constructed automatically. Results show areas in which automated assembly can improve test construction. (SLD)
Descriptors: Algorithms, Automation, Comparative Testing, Computer Assisted Testing
Davey, Tim; Parshall, Cynthia G. – 1995
Although computerized adaptive tests acquire their efficiency by successively selecting items that provide optimal measurement at each examinee's estimated level of ability, operational testing programs will typically consider additional factors in item selection. In practice, items are generally selected with regard to at least three, often…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing