Showing 571 to 585 of 1,057 results
Peer reviewed
Costagliola, Gennaro; Fuccella, Vittorio – International Journal of Distance Education Technologies, 2009
To correctly evaluate learners' knowledge, it is important to administer tests composed of good-quality question items. By "quality" we mean an item's potential to discriminate effectively between skilled and untrained students and to match the tutor's desired difficulty level. This article presents a rule-based e-testing system…
Descriptors: Difficulty Level, Test Items, Computer Assisted Testing, Item Response Theory
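The entry above frames item quality in terms of discrimination and difficulty. As an illustrative sketch only (not the authors' rule-based system), classical item statistics can be computed from a scored response matrix; all data and variable names below are hypothetical.

```python
import numpy as np

# Hypothetical 0/1 response matrix: rows = students, columns = items.
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
])

total_scores = responses.sum(axis=1)

# Difficulty as the proportion of correct answers (higher = easier item).
difficulty = responses.mean(axis=0)

# Discrimination as the point-biserial correlation between an item score
# and the total score on the remaining items.
discrimination = []
for j in range(responses.shape[1]):
    rest = total_scores - responses[:, j]
    discrimination.append(np.corrcoef(responses[:, j], rest)[0, 1])

print("difficulty:", difficulty)
print("discrimination:", np.round(discrimination, 3))
```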
Peer reviewed
Abedi, Jamal – Educational Assessment, 2009
This study compared performance of both English language learners (ELLs) and non-ELL students in Grades 4 and 8 under accommodated and nonaccommodated testing conditions. The accommodations used in this study included a computerized administration of a math test with a pop-up glossary, a customized English dictionary, extra testing time, and…
Descriptors: Computer Assisted Testing, Testing Accommodations, Mathematics Tests, Grade 4
Peer reviewed
Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander – Applied Psychological Measurement, 2008
This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…
Descriptors: Test Items, Monte Carlo Methods, Law Schools, Adaptive Testing
Peer reviewed
Passos, Valeria Lima; Berger, Martijn P. F.; Tan, Frans E. – Applied Psychological Measurement, 2007
The early stage of computerized adaptive testing (CAT) refers to the phase of trait estimation when only a few items have been administered. This phase can be characterized by bias and instability of estimation. In this study, an item selection criterion is introduced in an attempt to lessen this instability: the D-optimality criterion. A…
Descriptors: Test Construction, Test Items, Item Response Theory, Computer Assisted Testing
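The D-optimality criterion mentioned in the entry above selects the item that maximizes the determinant of the Fisher information matrix accumulated so far. A minimal sketch under a multidimensional two-parameter logistic model is shown below; the item pool, parameter values, and function names are made up for illustration and do not reproduce the study's procedure.

```python
import numpy as np

def item_information(theta, a, b):
    """Fisher information matrix of one multidimensional 2PL item at theta."""
    p = 1.0 / (1.0 + np.exp(-(a @ theta - b)))
    return p * (1.0 - p) * np.outer(a, a)

def d_optimal_item(theta, a_params, b_params, administered, info_so_far):
    """Pick the unadministered item that maximizes det of cumulative information."""
    best_j, best_det = None, -np.inf
    for j in range(len(b_params)):
        if j in administered:
            continue
        det = np.linalg.det(info_so_far + item_information(theta, a_params[j], b_params[j]))
        if det > best_det:
            best_j, best_det = j, det
    return best_j

# Hypothetical two-dimensional item pool.
rng = np.random.default_rng(0)
a_params = rng.uniform(0.5, 2.0, size=(50, 2))
b_params = rng.normal(0.0, 1.0, size=50)

theta_hat = np.zeros(2)          # current ability estimate
info = np.eye(2) * 1e-3          # small ridge so the determinant is defined early on
print("next item:", d_optimal_item(theta_hat, a_params, b_params, set(), info))
```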
Peer reviewed
Cheng, Ying; Chang, Hua-Hua; Yi, Qing – Applied Psychological Measurement, 2007
Content balancing is an important issue in the design and implementation of computerized adaptive testing (CAT). Content-balancing techniques that have been applied in fixed content balancing, where the number of items from each content area is fixed, include constrained CAT (CCAT), the modified multinomial model (MMM), modified constrained CAT…
Descriptors: Adaptive Testing, Item Analysis, Computer Assisted Testing, Item Response Theory
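The entry above surveys fixed content-balancing techniques such as the modified multinomial model. As a rough illustration of the multinomial idea (a simplified sketch, not a faithful reproduction of any specific published procedure), a content area can be sampled with probability proportional to its remaining quota before an item is selected within it; the area names and quotas below are hypothetical.

```python
import random

def pick_content_area(targets, administered_counts):
    """Sample a content area with probability proportional to its remaining quota."""
    remaining = {area: targets[area] - administered_counts.get(area, 0)
                 for area in targets}
    total = sum(max(r, 0) for r in remaining.values())
    if total == 0:
        return None  # all content quotas have been met
    pick = random.uniform(0, total)
    cum = 0.0
    for area, r in remaining.items():
        cum += max(r, 0)
        if pick <= cum:
            return area
    return area

targets = {"algebra": 10, "geometry": 6, "statistics": 4}
print(pick_content_area(targets, {"algebra": 3, "geometry": 2}))
```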
Nering, Michael L., Ed.; Ostini, Remo, Ed. – Routledge, Taylor & Francis Group, 2010
This comprehensive "Handbook" focuses on the most used polytomous item response theory (IRT) models. These models help us understand the interaction between examinees and test questions where the questions have various response categories. The book reviews all of the major models and includes discussions about how and where the models…
Descriptors: Guides, Item Response Theory, Test Items, Correlation
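One of the polytomous models such a handbook typically covers is Masters' (1982) partial credit model, which gives the probability of responding in category k of an item with step difficulties δ₁…δ_m. The sketch below uses made-up parameter values purely to illustrate the formula.

```python
import numpy as np

def pcm_probabilities(theta, deltas):
    """Category probabilities under Masters' partial credit model.

    deltas: step difficulties delta_1..delta_m; category scores run 0..m.
    """
    # Cumulative sums of (theta - delta_j), with 0 for the lowest category.
    steps = np.concatenate(([0.0], np.cumsum(theta - np.asarray(deltas))))
    numerators = np.exp(steps)
    return numerators / numerators.sum()

# Hypothetical 4-category item (3 step difficulties).
print(pcm_probabilities(theta=0.5, deltas=[-1.0, 0.0, 1.2]))
```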
Peer reviewed
Sawaki, Yasuyo; Kim, Hae-Jin; Gentile, Claudia – Language Assessment Quarterly, 2009
In cognitive diagnosis a Q-matrix (Tatsuoka, 1983, 1990), which is an incidence matrix that defines the relationships between test items and constructs of interest, has a great impact on the nature of performance feedback that can be provided to score users. The purpose of the present study was to identify meaningful skill coding categories that…
Descriptors: Feedback (Response), Test Items, Test Content, Identification
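A Q-matrix, as described in the entry above, is simply a binary incidence matrix indicating which skills each item is assumed to measure. The toy example below uses hypothetical items, skills, and responses, and a deliberately simple skill-level summary rather than any particular diagnostic model.

```python
import numpy as np

# Rows = test items, columns = skills; 1 means the item requires that skill.
skills = ["vocabulary", "inference", "syntax"]
q_matrix = np.array([
    [1, 0, 0],   # item 1 taps vocabulary only
    [1, 1, 0],   # item 2 taps vocabulary and inference
    [0, 1, 1],   # item 3 taps inference and syntax
    [0, 0, 1],   # item 4 taps syntax only
])

# Skill-level feedback: proportion correct among the items requiring each skill.
responses = np.array([1, 0, 1, 1])   # one student's scored responses
for s, name in enumerate(skills):
    relevant = q_matrix[:, s] == 1
    print(name, responses[relevant].mean())
```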
Boyd, Aimee M.; Dodd, Barbara G.; Fitzpatrick, Steven J. – 2003
This study compared several item exposure control procedures for computerized adaptive test (CAT) systems based on a three-parameter logistic testlet response theory model (X. Wang, E. Bradlow, and H. Wainer, 2002) and G. Masters' (1982) partial credit model using real data from the Verbal Reasoning section of the Medical College Admission Test.…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items
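The entry above is built on a three-parameter logistic testlet response theory model. As I understand the usual form of such models, a person-by-testlet effect shifts the standard 3PL kernel for items that share a common stimulus; the sketch below is illustrative only, with made-up parameter values, and is not taken from the study.

```python
import math

def testlet_3pl_probability(theta, a, b, c, gamma):
    """P(correct) under a 3PL testlet-style model: the person-specific testlet
    effect gamma shifts the usual 3PL kernel (illustrative form only)."""
    kernel = a * (theta - b - gamma)
    return c + (1.0 - c) / (1.0 + math.exp(-kernel))

# Hypothetical item embedded in a reading-passage testlet.
print(testlet_3pl_probability(theta=0.3, a=1.2, b=-0.2, c=0.2, gamma=0.4))
```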
Peer reviewed
Chang, Shun-Wen; Ansley, Timothy N. – Journal of Educational Measurement, 2003
Compared the properties of five methods of item exposure control in the context of estimating examinees' abilities in a computerized adaptive testing situation. Findings show advantages to the Stocking and Lewis conditional multinomial procedure (M. Stocking and C. Lewis, 1995) and, to a lesser degree, the Davey and Parshall method (T. Davey and C.…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items
Peer reviewed
Huang, Yueh-Min; Lin, Yen-Ting; Cheng, Shu-Chen – Computers & Education, 2009
With the rapid growth of computer and mobile technology, it is a challenge to integrate computer-based testing (CBT) with mobile learning (m-learning), especially for formative assessment and self-assessment. In terms of self-assessment, a computerized adaptive test (CAT) is a suitable way to enable students to evaluate themselves. In CAT, students are…
Descriptors: Self Evaluation (Individuals), Test Items, Formative Evaluation, Educational Assessment
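For readers unfamiliar with how a CAT session like the one described above typically proceeds, the sketch below shows a minimal loop under a 2PL model: administer the most informative unused item at the current ability estimate, then update the estimate by EAP over a grid. The pool, parameters, and simulated examinee are all hypothetical, and this is a generic sketch rather than the system proposed in the article.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def eap_estimate(responses, a, b, grid=np.linspace(-4, 4, 81)):
    """EAP ability estimate with a standard normal prior over a grid."""
    prior = np.exp(-0.5 * grid ** 2)
    like = np.ones_like(grid)
    for x, aj, bj in zip(responses, a, b):
        p = p_correct(grid, aj, bj)
        like *= p if x == 1 else (1 - p)
    post = prior * like
    return np.sum(grid * post) / np.sum(post)

def next_item(theta, a, b, used):
    """Pick the unused item with maximum Fisher information at theta."""
    info = a ** 2 * p_correct(theta, a, b) * (1 - p_correct(theta, a, b))
    info[list(used)] = -np.inf
    return int(np.argmax(info))

# Hypothetical pool and a short simulated self-assessment session.
rng = np.random.default_rng(1)
a = rng.uniform(0.8, 2.0, 30)
b = rng.normal(0, 1, 30)
true_theta, theta = 0.7, 0.0
used, answers, items = set(), [], []
for _ in range(5):
    j = next_item(theta, a, b, used)
    used.add(j)
    items.append(j)
    answers.append(int(rng.random() < p_correct(true_theta, a[j], b[j])))
    theta = eap_estimate(answers, a[items], b[items])
print("estimated ability:", round(theta, 2))
```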
Peer reviewed
Papanastasiou, Elena C.; Reckase, Mark D. – International Journal of Testing, 2007
Because of the increased popularity of computerized adaptive testing (CAT), many admissions tests, as well as certification and licensure examinations, have been transformed from their paper-and-pencil versions to computerized adaptive versions. A major difference between paper-and-pencil tests and CAT from an examinee's point of view is that in…
Descriptors: Simulation, Adaptive Testing, Computer Assisted Testing, Test Items
Peer reviewed
Roberts, James S.; Lin, Yan; Laughlin, James E. – Applied Psychological Measurement, 2001
Examined the use of the generalized graded unfolding model (GGUM) in computerized adaptive testing, using simulation and attempting to minimize the number of items required to produce equiprecise estimates of person locations. Results suggest that adaptive testing with the GGUM is a good method for achieving estimates with an approximately uniform…
Descriptors: Adaptive Testing, Computer Assisted Testing, Simulation, Test Items
Pommerich, Mary – Journal of Technology, Learning, and Assessment, 2007
Computer administered tests are becoming increasingly prevalent as computer technology becomes more readily available on a large scale. For testing programs that utilize both computer and paper administrations, mode effects are problematic in that they can result in examinee scores that are artificially inflated or deflated. As such, researchers…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Format, Scores
Georgiadou, Elissavet; Triantafillou, Evangelos; Economides, Anastasios A. – Journal of Technology, Learning, and Assessment, 2007
Since researchers acknowledged the advantages of computerized adaptive testing (CAT) over traditional linear test administration, the issue of item exposure control has received increased attention. Due to CAT's underlying philosophy, particular items in the item pool may be presented too often and become overexposed, while other items are…
Descriptors: Adaptive Testing, Computer Assisted Testing, Scoring, Test Items
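Overexposure, as the entry above notes, is usually monitored through each item's exposure rate: the share of examinees to whom the item was administered. The tally below is a minimal sketch with hypothetical session data and function names.

```python
from collections import Counter

def exposure_rates(sessions):
    """Exposure rate of each item = number of examinees who saw it,
    divided by the total number of examinees."""
    n_examinees = len(sessions)
    counts = Counter(item for items in sessions for item in set(items))
    return {item: counts[item] / n_examinees for item in counts}

# Three simulated examinees and the items each one received.
sessions = [[3, 7, 12], [3, 5, 12], [1, 7, 12]]
print(exposure_rates(sessions))   # item 12 has rate 1.0, i.e. it is overexposed
```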
Peer reviewed
Penfield, Randall D. – Educational and Psychological Measurement, 2007
The standard error of the maximum likelihood ability estimator is commonly estimated by evaluating the test information function at an examinee's current maximum likelihood estimate (a point estimate) of ability. Because the test information function evaluated at the point estimate may differ from the test information function evaluated at an…
Descriptors: Simulation, Adaptive Testing, Computation, Maximum Likelihood Statistics
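The convention the entry above refers to is SE(θ̂) ≈ 1/√I(θ̂), where the test information I(θ) is the sum of the item information functions evaluated at the point estimate. A short sketch under a 2PL model with hypothetical item parameters:

```python
import numpy as np

def test_information(theta, a, b):
    """Test information under a 2PL model: sum of a_j^2 * P_j(theta) * (1 - P_j(theta))."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return np.sum(a ** 2 * p * (1 - p))

def standard_error(theta, a, b):
    """Conventional SE of the maximum likelihood ability estimate."""
    return 1.0 / np.sqrt(test_information(theta, a, b))

a = np.array([1.1, 0.8, 1.5, 1.0])
b = np.array([-0.5, 0.0, 0.4, 1.2])
theta_hat = 0.25                      # a point estimate of ability
print("SE at the point estimate:", round(standard_error(theta_hat, a, b), 3))
```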