Showing 781 to 795 of 1,057 results
Peer reviewed
Levy, Roy; Mislevy, Robert J. – International Journal of Testing, 2004
The challenges of modeling students' performance in computer-based interactive assessments include accounting for multiple aspects of knowledge and skill that arise in different situations and the conditional dependencies among multiple aspects of performance. This article describes a Bayesian approach to modeling and estimating cognitive models…
Descriptors: Computer Assisted Testing, Markov Processes, Computer Networks, Bayesian Statistics
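In drastically simplified form, the Bayesian evidence accumulation the article describes amounts to updating belief in a skill as responses arrive. A minimal sketch with a single binary skill and made-up probabilities (none of the names below come from the article, which models several skills and their conditional dependencies):

```python
def posterior_skill(prior, p_if_skilled, p_if_unskilled, correct):
    """Bayes update of one binary skill from one scored response.

    A toy stand-in for the article's network models, which track
    several skills at once and dependencies among the observations.
    """
    like_s = p_if_skilled if correct else 1 - p_if_skilled
    like_u = p_if_unskilled if correct else 1 - p_if_unskilled
    return prior * like_s / (prior * like_s + (1 - prior) * like_u)

# A correct answer that skilled examinees give 90% of the time and
# unskilled ones 20% of the time lifts a 0.5 prior to about 0.82.
print(round(posterior_skill(0.5, 0.9, 0.2, True), 2))
```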
Peer reviewed
Marks, Anthony M.; Cronje, Johannes C. – Educational Technology & Society, 2008
Computer-based assessments are becoming more commonplace, perhaps as a necessity for faculty to cope with large class sizes. These tests often occur in large computer testing venues in which test security may be compromised. In an attempt to limit the likelihood of cheating in such venues, randomised presentation of items is automatically…
Descriptors: Educational Assessment, Educational Testing, Research Needs, Test Items
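A minimal sketch of the per-examinee randomisation such venues rely on, assuming a seedable generator keyed to the examinee ID; the function and seeding scheme are illustrative, not the system the authors studied:

```python
import random

def randomised_order(item_ids, examinee_id):
    """Return a per-examinee permutation of the test items.

    Seeding the generator with the examinee ID keeps the order
    reproducible for rescoring while differing between the
    neighbouring test-takers a shared venue puts at risk.
    """
    rng = random.Random(f"exam-{examinee_id}")  # str seeds are supported
    order = list(item_ids)
    rng.shuffle(order)
    return order

items = ["Q1", "Q2", "Q3", "Q4", "Q5"]
print(randomised_order(items, 1001))  # one order for this seat...
print(randomised_order(items, 1002))  # ...a different one next seat over
```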
Carlson, Sybil B.; Ward, William C. – 1988
Issues concerning the cost and feasibility of using Formulating Hypotheses (FH) test item types for the Graduate Record Examinations have slowed research into their use. This project focused on two major issues that need to be addressed in considering FH items for operational use: the costs of scoring and the assignment of scores along a range of…
Descriptors: Adaptive Testing, Computer Assisted Testing, Costs, Pilot Projects
Wainer, Howard; And Others – 1991
A series of computer simulations was run to measure the relationship between testlet validity and the factors of item pool size and testlet length for both adaptive and linearly constructed testlets. Results confirmed the generality of earlier empirical findings of H. Wainer and others (1991) that making a testlet adaptive yields only marginal…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Item Banks
Scrams, David J.; Schnipke, Deborah L. – 1997
Response accuracy and response speed provide separate measures of performance. Psychometricians have tended to focus on accuracy with the goal of characterizing examinees on the basis of their ability to respond correctly to items from a given content domain. With the advent of computerized testing, response times can now be recorded unobtrusively…
Descriptors: Computer Assisted Testing, Difficulty Level, Item Response Theory, Psychometrics
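A common way to model those recorded times alongside accuracy is a lognormal response-time model; the parameterisation below is a generic illustration, not necessarily the authors' own:

```python
import math

def lognormal_rt_density(t, speed, time_intensity, precision=1.0):
    """Density of a lognormal response-time model.

    log T is normal with mean (time_intensity - speed) and standard
    deviation 1/precision: labor-intensive items and slow examinees
    both stretch the predicted times.
    """
    if t <= 0:
        return 0.0
    z = precision * (math.log(t) - (time_intensity - speed))
    return precision / (t * math.sqrt(2 * math.pi)) * math.exp(-0.5 * z * z)

# A 30-second response to an item of middling time intensity:
print(lognormal_rt_density(30.0, speed=0.2, time_intensity=3.4))
```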
Wise, Steven L.; And Others – 1997
The degree to which item review on a computerized adaptive test (CAT) could be used by examinees to inflate their scores artificially was studied. G. G. Kingsbury (1996) described a strategy in which examinees could use the changes in item difficulty during a CAT to determine which items' answers are incorrect and should be changed during item…
Descriptors: Achievement Gains, Adaptive Testing, College Students, Computer Assisted Testing
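The Kingsbury strategy rests on a simple inference: in a CAT the next item gets easier after a wrong answer and harder after a right one, so a drop in difficulty exposes the preceding answer. A toy version of the flagging an examinee might attempt (names and data invented):

```python
def flag_for_change(difficulties):
    """Flag answers an examinee might change during item review.

    difficulties: difficulty estimates of the items in administration
    order. A drop from one item to the next signals that the CAT
    scored the earlier answer as wrong, so that answer gets flagged.
    The last answer has no following item and cannot be judged.
    """
    return [nxt < prev for prev, nxt in zip(difficulties, difficulties[1:])]

# Difficulty falls after the 2nd and 4th items, so an examinee using
# the strategy would revisit those two answers.
print(flag_for_change([0.0, 0.5, 0.1, 0.6, 0.2]))  # [False, True, False, True]
```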
Faggen, Jane; And Others – 1995
The objective of this study was to determine the degree to which recommendations for passing scores, calculated on the basis of a traditional standard-setting methodology, might be affected by the mode (paper versus computer-screen prints) in which test items were presented to standard setting panelists. Results were based on the judgments of 31…
Descriptors: Computer Assisted Testing, Cutting Scores, Difficulty Level, Evaluators
Crehan, Kevin D.; Haladyna, Thomas M. – 1994
More attention is currently being paid to the distractors of a multiple-choice test item (Thissen, Steinberg, and Fitzpatrick, 1989). A systematic relationship exists between the keyed response and distractors in multiple-choice items (Levine and Drasgow, 1983). New scoring methods have been introduced, computer programs developed, and research…
Descriptors: Comparative Analysis, Computer Assisted Testing, Distractors (Tests), Models
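Scoring methods that exploit that relationship typically award partial credit by option rather than scoring right/wrong. A minimal sketch with invented weights, not one of the specific methods the paper reviews:

```python
def option_weighted_score(responses, weights):
    """Polytomous (option-weighted) scoring of multiple-choice items.

    responses: the option chosen on each item; weights: one dict per
    item mapping options to partial credit, so an informative
    distractor earns more than a wild guess.
    """
    return sum(w.get(r, 0.0) for r, w in zip(responses, weights))

weights = [{"A": 1.0, "B": 0.4, "C": 0.0, "D": 0.0},  # B is a near-miss
           {"A": 0.0, "B": 0.0, "C": 1.0, "D": 0.3}]  # D shows partial knowledge
print(option_weighted_score(["B", "C"], weights))  # 0.4 + 1.0 = 1.4
```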
Peer reviewed
Robin, Frédéric; van der Linden, Wim J.; Eignor, Daniel R.; Steffen, Manfred; Stocking, Martha L. – ETS Research Report Series, 2005
The relatively new shadow test approach (STA) to computerized adaptive testing (CAT) proposed by Wim van der Linden is a potentially attractive alternative to the weighted deviation algorithm (WDA) implemented at ETS. However, it has not been evaluated under testing conditions representative of current ETS testing programs. Of interest was whether…
Descriptors: Test Construction, Computer Assisted Testing, Simulation, Evaluation Methods
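In the STA, each selection first assembles a full-length "shadow test" that meets every content constraint and contains all items already administered, then gives the examinee its most informative free item. The sketch below replaces the 0-1 optimisation van der Linden's method solves with a greedy fill, so it shows only the control flow; every argument is an assumed interface, not ETS code:

```python
def next_item_shadow(pool, administered, theta, test_len, feasible, info):
    """One STA-style selection step (greedy stand-in for the real solver).

    pool and administered hold item IDs; feasible(items) checks the
    content constraints; info(item, theta) returns Fisher information
    at the current ability estimate.
    """
    shadow = list(administered)  # the shadow test must contain these
    ranked = sorted((i for i in pool if i not in shadow),
                    key=lambda i: info(i, theta), reverse=True)
    for item in ranked:  # fill to full length without breaking constraints
        if len(shadow) == test_len:
            break
        if feasible(shadow + [item]):
            shadow.append(item)
    free = [i for i in shadow if i not in administered]
    if not free:
        raise ValueError("no feasible item to administer")
    # administer the most informative item not yet given
    return max(free, key=lambda i: info(i, theta))
```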
Samejima, Fumiko – 1990
A method is proposed that improves the accuracy of estimation of the operating characteristics of discrete item responses, especially when the true operating characteristic is represented by a steep curve, and at the lower and upper ends of the ability distribution, where estimation tends to be inaccurate because of the smaller number…
Descriptors: Ability Identification, Adaptive Testing, Computer Assisted Testing, Equations (Mathematics)
Peer reviewed
Vale, C. David; Gialluca, Kathleen A. – Applied Psychological Measurement, 1988
To determine which produced the most accurate item parameter estimates, four item response theory (IRT) parameter estimation methods were evaluated: (1) heuristic estimates; (2) the ANCILLES program; (3) the LOGIST program; and (4) the ASCAL program. LOGIST and ASCAL produced estimates of superior and essentially equivalent accuracy. (SLD)
Descriptors: Comparative Analysis, Computer Assisted Testing, Computer Software, Estimation (Mathematics)
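LOGIST and ASCAL both fit the three-parameter logistic (3PL) model, so the quantity each method in the comparison is estimating parameters for is the item response function below (standard notation; no code from the programs themselves):

```python
import math

def p_3pl(theta, a, b, c, D=1.7):
    """Three-parameter logistic (3PL) item response function.

    a: discrimination, b: difficulty, c: pseudo-guessing lower
    asymptote, D = 1.7: the conventional scaling constant.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-D * a * (theta - b)))

# An average examinee (theta = 0) on a moderately hard item: about 0.44
print(round(p_3pl(0.0, a=1.0, b=0.5, c=0.2), 3))
```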
Capar, Nilufer K. – 2000
This study investigated specific conditions under which out-of-scale information improves measurement precision and the factors that influence the degree of reliability gains and the amount of bias induced in the reported scores when out-of-scale information is used. In-scale information is information that an item provides for a composite trait…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Item Response Theory
Eignor, Daniel R. – 1993
Procedures used to establish the comparability of scores derived from the College Board Admissions Testing Program (ATP) computer adaptive Scholastic Aptitude Test (SAT) prototype and the paper-and-pencil SAT are described in this report. Both the prototype, which is made up of Verbal and Mathematics computer adaptive tests (CATs), and a form of…
Descriptors: Adaptive Testing, College Entrance Examinations, Comparative Analysis, Computer Assisted Testing
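The report's procedures are not reproduced here, but a standard tool in this kind of score-comparability work is equipercentile equating: map each score on one form to the score on the other form with the same percentile rank. A bare-bones sketch under that assumption, with hypothetical score samples:

```python
import bisect

def equipercentile_equate(score, scores_x, scores_y):
    """Map a form-X score to the form-Y score holding the same
    percentile rank. No presmoothing or interpolation, so this is
    only the skeleton of operational equating work.
    """
    xs, ys = sorted(scores_x), sorted(scores_y)
    rank = bisect.bisect_right(xs, score) / len(xs)  # percentile rank on X
    return ys[min(int(rank * len(ys)), len(ys) - 1)]

# A 17 on form X sits at the 50th percentile, matching a 23 on form Y:
form_x = [10, 12, 14, 15, 17, 18, 20, 22, 24, 25]
form_y = [13, 15, 17, 19, 21, 23, 24, 26, 28, 30]
print(equipercentile_equate(17, form_x, form_y))
```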
Ban, Jae-Chun; Hanson, Bradley A.; Wang, Tianyou; Yi, Qing; Harris, Deborah J. – 2000
The purpose of this study was to compare and evaluate five online pretest item calibration/scaling methods in computerized adaptive testing (CAT): (1) the marginal maximum likelihood estimate with one-EM cycle (OEM); (2) the marginal maximum likelihood estimate with multiple EM cycles (MEM); (3) Stocking's Method A (M. Stocking, 1988); (4)…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Estimation (Mathematics)
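As a concrete reference point for what all five methods estimate, here is the simplification usually associated with Stocking's Method A: treat the CAT ability estimates as known and maximize the pretest item's likelihood directly (OEM and MEM instead iterate EM cycles over ability posteriors). The Rasch model choice and the data below are illustrative:

```python
import math

def rasch_difficulty(thetas, responses, iters=20):
    """Calibrate one pretest item's Rasch (1PL) difficulty by
    Newton-Raphson, treating the examinees' CAT ability estimates
    in thetas as known and the 0/1 responses as the data.
    """
    b = 0.0
    for _ in range(iters):
        grad = hess = 0.0
        for theta, u in zip(thetas, responses):
            p = 1.0 / (1.0 + math.exp(-(theta - b)))
            grad += p - u          # first derivative of the log-likelihood
            hess -= p * (1.0 - p)  # second derivative (always negative)
        b -= grad / hess
    return b

# Five examinees routed past the pretest item, with their theta
# estimates and scored responses:
print(round(rasch_difficulty([-1.0, -0.2, 0.3, 0.8, 1.5], [0, 1, 0, 1, 1]), 2))
```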
Finney, Sara J.; Smith, Russell W.; Wise, Steven L. – 1999
Two operational item pools were used to investigate the performance of stratified computerized adaptive tests (CATs) when items were assigned to strata based on empirical estimates of item difficulty or on human judgments of item difficulty. The first data set consisted of 54 five-option multiple-choice items from a form of the ACT mathematics…
Descriptors: Adaptive Testing, Classification, Computer Assisted Testing, High School Students
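Either difficulty source (empirical estimate or human judgment) feeds the same pool-building step: sort the items by their difficulty value and cut the pool into strata. A minimal sketch of that step with made-up difficulties; how the CAT then walks the strata during the test is not shown:

```python
def stratify_pool(difficulties, n_strata):
    """Split an item pool into difficulty-ordered strata.

    difficulties: one value per item, either an empirical estimate or
    a judged value (the two sources the study compares). Returns item
    indices grouped from the easiest stratum to the hardest.
    """
    order = sorted(range(len(difficulties)), key=difficulties.__getitem__)
    size = -(-len(order) // n_strata)  # ceiling division
    return [order[k:k + size] for k in range(0, len(order), size)]

# Six items cut into three strata of two:
print(stratify_pool([1.2, -0.4, 0.3, 2.0, -1.1, 0.8], 3))
# [[4, 1], [2, 5], [0, 3]]
```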