Showing 841 to 855 of 1,333 results
Peer reviewed
Rocklin, Thomas; O'Donnell, Angela M. – Journal of Educational Psychology, 1987
An experiment was conducted that contrasted a variant of computerized adaptive testing, self-adapted testing, with two traditional tests. Participants completed a self-report of test anxiety and were randomly assigned to take one of the three tests of verbal ability. Subjects generally chose more difficult items as the test progressed. (Author/LMO)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Difficulty Level
Giraud, Gerald; Smith, Russel – Online Submission, 2005
This study examines the effect of item response time across 30 items on ability estimates in a high stakes computer adaptive graduate admissions examination. Examinees were categorized according to 4 item response time patterns, and the categories are compared in terms of ability estimates. Significant differences between response time patterns…
Descriptors: Reaction Time, Test Items, Time Management, Adaptive Testing
Capar, Nilufer K.; Thompson, Tony; Davey, Tim – 2000
Information provided for computerized adaptive test (CAT) simulees was compared under two conditions on two moderately correlated trait composites, mathematics and reading comprehension. The first condition used information provided by in-scale items alone, while the second condition used information provided by in- and out-of-scale items together…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Item Response Theory
Meijer, Rob R. – 2001
Recent developments of person-fit analysis in computerized adaptive testing (CAT) are discussed. Methods from statistical process control are presented that have been proposed to classify an item score pattern as fitting or misfitting the underlying item response theory (IRT) model in a CAT. Most person-fit research in CAT is restricted to…
Descriptors: Adaptive Testing, Certification, Computer Assisted Testing, High Stakes Tests
Nandakumar, Ratna; Roussos, Louis – 2001
Computerized adaptive tests (CATs) pose major obstacles to the traditional assessment of differential item functioning (DIF). This paper proposes a modification of the SIBTEST DIF procedure for CATs, called CATSIB. CATSIB matches test takers on estimated ability based on unidimensional item response theory. To control for impact-induced Type I…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Identification
Shermis, Mark D.; Mzumara, Howard; Brown, Mike; Lillig, Clo – 1997
An important problem facing institutions of higher education is the number of students reporting that they are not adequately prepared for the difficulty of college-level courses. To meet this problem, a computerized adaptive testing package was developed that permitted remote placement testing of high school students via the World Wide Web. The…
Descriptors: Adaptive Testing, Adolescents, Computer Assisted Testing, High Schools
Thomas, William R. – 2003
This report, based on a survey completed by testing directors in states that are members of the Southern Regional Education Board (SREB), describes the status of online testing in SREB states. Overall, SREB states are paying limited attention to online testing. Only Virginia is moving systematically to implement online testing. By spring 2004, all…
Descriptors: Adaptive Testing, Computer Assisted Testing, Online Systems, Standardized Tests
Zhu, Renbang; Yu, Feng; Liu, Su – 2002
A computerized adaptive test (CAT) administration usually requires a large supply of items with accurately estimated psychometric properties, such as item response theory (IRT) parameter estimates, to ensure the precision of examinee ability estimation. However, an estimated IRT model of a given item in any given pool does not always correctly…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
Glas, Cees A. W.; Vos, Hans J. – 2000
This paper focuses on a version of sequential mastery testing (i.e., classifying students as a master/nonmaster or continuing testing and administering another item or testlet) in which response behavior is modeled by a multidimensional item response theory (IRT) model. First, a general theoretical framework is outlined that is based on a…
Descriptors: Adaptive Testing, Bayesian Statistics, Classification, Computer Assisted Testing
Drasgow, Fritz, Ed.; Olson-Buchanan, Julie B., Ed. – 1999
Chapters in this book present the challenges and dilemmas faced by researchers as they created new computerized assessments, focusing on issues addressed in developing, scoring, and administering the assessments. Chapters are: (1) "Beyond Bells and Whistles; An Introduction to Computerized Assessment" (Julie B. Olson-Buchanan and Fritz Drasgow);…
Descriptors: Adaptive Testing, Computer Assisted Testing, Elementary Secondary Education, Scoring
Peer reviewed
Cliff, Norman – Psychometrika, 1979
This paper traces the course of the consequences of viewing test responses as simply providing dichotomous data concerning ordinal relations. It begins by proposing that the score matrix is best considered to be items-plus-persons by items-plus-persons, and recording the wrongs as well as the rights. (Author/CTM)
Descriptors: Adaptive Testing, Mathematical Models, Matrices, Measurement
Peer reviewed
Cliff, Norman; And Others – Applied Psychological Measurement, 1979
Monte Carlo research with TAILOR, a program using implied orders as a basis for tailored testing, is reported. TAILOR typically required about half the available items to estimate, for each simulated examinee, the responses on the remainder. (Author/CTM)
Descriptors: Adaptive Testing, Computer Programs, Item Sampling, Nonparametric Statistics
Peer reviewed
Cudeck, Robert; And Others – Applied Psychological Measurement, 1979
TAILOR, a computer program which implements an approach to tailored testing, was examined by Monte Carlo methods. The evaluation showed the procedure to be highly reliable and capable of reducing the required number of test items by about one half. (Author/JKS)
Descriptors: Adaptive Testing, Computer Programs, Feasibility Studies, Item Analysis
Peer reviewed
Berger, Martijn P. F.; Veerkamp, Wim J. J. – Journal of Educational and Behavioral Statistics, 1997
Some alternative criteria for item selection in adaptive testing are proposed that take into account uncertainty in the ability estimates. A simulation study shows that the likelihood weighted information criterion is a good alternative to the maximum information criterion. Another good alternative uses a Bayesian expected a posteriori estimator.…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
Peer reviewed
Chang, Hua-Hua; Ying, Zhiliang – Applied Psychological Measurement, 1996
An item selection procedure for computerized adaptive testing based on average global information is proposed. Results from simulation studies comparing the approach with the usual maximum item information item selection indicate that the new method leads to improvement in terms of bias and mean squared error reduction under many circumstances.…
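The contrast this abstract draws can be made concrete. A minimal sketch follows, assuming the standard 2PL IRT model and a hypothetical item pool: the usual rule picks the item with maximum Fisher information at the current ability estimate, while a global-information rule of the kind proposed here instead integrates a Kullback-Leibler index over an interval around the estimate. The pool, the interval half-width `delta`, and the quadrature grid are illustrative choices, not values from the study.

```python
import math

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL IRT model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Pointwise Fisher information of a 2PL item at ability theta."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def kl_index(theta0, a, b, delta=1.0, n=200):
    """Global information in the Chang-Ying spirit: the KL divergence
    between the item's response distributions at theta and theta0,
    integrated over [theta0 - delta, theta0 + delta] (midpoint rule)."""
    p0 = p_2pl(theta0, a, b)
    q0 = 1.0 - p0
    h = 2.0 * delta / n
    total = 0.0
    for k in range(n):
        theta = theta0 - delta + (k + 0.5) * h
        p = p_2pl(theta, a, b)
        total += (p0 * math.log(p0 / p) + q0 * math.log(q0 / (1.0 - p))) * h
    return total

def select_item(theta_hat, items, criterion=fisher_info):
    """Pick the index of the item maximizing the given criterion."""
    return max(range(len(items)), key=lambda i: criterion(theta_hat, *items[i]))

# Hypothetical pool of (a, b) parameter pairs with equal discriminations.
items = [(1.0, -1.0), (1.0, 0.0), (1.0, 1.0)]
print(select_item(0.1, items))                      # -> 1 (b closest to theta)
print(select_item(0.1, items, criterion=kl_index))
```

With equal discriminations the pointwise rule simply favors the item whose difficulty is nearest the ability estimate; the integrated index instead rewards items that discriminate well across the whole neighborhood of the estimate, which matters early in a CAT when the ability estimate is still unreliable.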
Descriptors: Adaptive Testing, Computer Assisted Testing, Error of Measurement, Item Response Theory