| Publication Date | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 225 |
| Since 2022 (last 5 years) | 1358 |
| Since 2017 (last 10 years) | 2816 |
| Since 2007 (last 20 years) | 4806 |
| Descriptor | Records |
| --- | --- |
| Computer Assisted Testing | 7203 |
| Foreign Countries | 2049 |
| Test Construction | 1110 |
| Student Evaluation | 1062 |
| Evaluation Methods | 1061 |
| Test Items | 1057 |
| Adaptive Testing | 1052 |
| Educational Technology | 904 |
| Comparative Analysis | 835 |
| Scores | 830 |
| Higher Education | 823 |
| Audience | Records |
| --- | --- |
| Practitioners | 182 |
| Researchers | 146 |
| Teachers | 122 |
| Policymakers | 40 |
| Administrators | 36 |
| Students | 15 |
| Counselors | 9 |
| Parents | 4 |
| Media Staff | 3 |
| Support Staff | 3 |
| Location | Records |
| --- | --- |
| Australia | 169 |
| United Kingdom | 153 |
| Turkey | 126 |
| China | 117 |
| Germany | 108 |
| Canada | 106 |
| Spain | 94 |
| Taiwan | 89 |
| Netherlands | 73 |
| Iran | 71 |
| United States | 68 |
| What Works Clearinghouse Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 4 |
| Meets WWC Standards with or without Reservations | 4 |
| Does not meet standards | 5 |
Giraud, Gerald; Smith, Russel – Online Submission, 2005
This study examines the effect of item response time across 30 items on ability estimates in a high stakes computer adaptive graduate admissions examination. Examinees were categorized according to 4 item response time patterns, and the categories are compared in terms of ability estimates. Significant differences between response time patterns…
Descriptors: Reaction Time, Test Items, Time Management, Adaptive Testing
Capar, Nilufer K.; Thompson, Tony; Davey, Tim – 2000
Information provided for computerized adaptive test (CAT) simulees was compared under two conditions on two moderately correlated trait composites, mathematics and reading comprehension. The first condition used information provided by in-scale items alone, while the second condition used information provided by in- and out-of-scale items together…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Item Response Theory
Meijer, Rob R. – 2001
Recent developments of person-fit analysis in computerized adaptive testing (CAT) are discussed. Methods from statistical process control are presented that have been proposed to classify an item score pattern as fitting or misfitting the underlying item response theory (IRT) model in a CAT. Most person-fit research in CAT is restricted to…
Descriptors: Adaptive Testing, Certification, Computer Assisted Testing, High Stakes Tests
Nandakumar, Ratna; Roussos, Louis – 2001
Computerized adaptive tests (CATs) pose major obstacles to the traditional assessment of differential item functioning (DIF). This paper proposes a modification of the SIBTEST DIF procedure for CATs, called CATSIB. CATSIB matches test takers on estimated ability based on unidimensional item response theory. To control for impact-induced Type I…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Identification
Shermis, Mark D.; Mzumara, Howard; Brown, Mike; Lillig, Clo – 1997
An important problem facing institutions of higher education is the number of students reporting that they are not adequately prepared for the difficulty of college-level courses. To meet this problem, a computerized adaptive testing package was developed that permitted remote placement testing of high school students via the World Wide Web. The…
Descriptors: Adaptive Testing, Adolescents, Computer Assisted Testing, High Schools
Zhu, Renbang; Yu, Feng; Liu, Su – 2002
A computerized adaptive test (CAT) administration usually requires a large supply of items with accurately estimated psychometric properties, such as item response theory (IRT) parameter estimates, to ensure the precision of examinee ability estimation. However, an estimated IRT model of a given item in any given pool does not always correctly…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
Glas, Cees A. W.; Vos, Hans J. – 2000
This paper focuses on a version of sequential mastery testing (i.e., classifying students as a master/nonmaster or continuing testing and administering another item or testlet) in which response behavior is modeled by a multidimensional item response theory (IRT) model. First, a general theoretical framework is outlined that is based on a…
Descriptors: Adaptive Testing, Bayesian Statistics, Classification, Computer Assisted Testing
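The Glas and Vos paper works in a Bayesian, multidimensional IRT framework; a simpler classical device for the same master/nonmaster/continue decision is Wald's sequential probability ratio test. The sketch below is a minimal illustration under an assumed unidimensional 2PL model — the item parameters, the indifference-region bounds `theta0`/`theta1`, and the error rates are illustrative assumptions, not values from the paper:

```python
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response (assumed model)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def sprt_mastery(responses, items, theta0=-0.5, theta1=0.5,
                 alpha=0.05, beta=0.05):
    """Wald SPRT: return 'master', 'nonmaster', or 'continue'.

    responses: list of 0/1 item scores seen so far
    items:     list of (a, b) 2PL parameters, parallel to responses
    theta0/theta1: lower/upper bounds of the indifference region
    alpha/beta:    tolerated misclassification rates
    """
    upper = math.log((1 - beta) / alpha)   # cross it -> accept 'master'
    lower = math.log(beta / (1 - alpha))   # cross it -> accept 'nonmaster'
    llr = 0.0
    for x, (a, b) in zip(responses, items):
        p1, p0 = p_correct(theta1, a, b), p_correct(theta0, a, b)
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
    if llr >= upper:
        return "master"
    if llr <= lower:
        return "nonmaster"
    return "continue"
```

When neither threshold is crossed, the CAT administers another item or testlet and repeats the test, which is exactly the sequential structure the abstract describes.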
Peer reviewed: Berger, Martijn P. F.; Veerkamp, Wim J. J. – Journal of Educational and Behavioral Statistics, 1997
Some alternative criteria for item selection in adaptive testing are proposed that take into account uncertainty in the ability estimates. A simulation study shows that the likelihood weighted information criterion is a good alternative to the maximum information criterion. Another good alternative uses a Bayesian expected a posteriori estimator.…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
Peer reviewed: Chang, Hua-Hua; Ying, Zhiliang – Applied Psychological Measurement, 1996
An item selection procedure for computerized adaptive testing based on average global information is proposed. Results from simulation studies comparing the approach with the usual maximum item information item selection indicate that the new method leads to improvement in terms of bias and mean squared error reduction under many circumstances.…
Descriptors: Adaptive Testing, Computer Assisted Testing, Error of Measurement, Item Response Theory
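One reading of "average global information" is item information averaged over an interval of ability rather than evaluated at a single point estimate. The sketch below illustrates that idea under a 2PL model; the interval, grid size, and the 2PL form itself are assumptions for illustration, not the authors' exact criterion:

```python
import math

def global_info(a, b, lo=-1.0, hi=1.0, n=41):
    """Average 2PL item information over [lo, hi], by a uniform grid.

    Contrast with point information: an item stays attractive as long
    as it is informative somewhere in the plausible ability range.
    """
    step = (hi - lo) / (n - 1)
    total = 0.0
    for k in range(n):
        t = lo + k * step
        p = 1.0 / (1.0 + math.exp(-a * (t - b)))
        total += a * a * p * (1.0 - p)     # Fisher information at t
    return total / n
```

Averaging over a range hedges against early misestimates of ability, which is consistent with the bias and mean squared error improvements the abstract reports.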
Peer reviewed: Adema, Jos J. – Journal of Educational Measurement, 1990
Mixed integer linear programming models for customizing two-stage tests are presented. Model constraints are imposed with respect to test composition, administration time, inter-item dependencies, and other practical considerations. The models can be modified for use in the construction of multistage tests. (Author/TJH)
Descriptors: Adaptive Testing, Computer Assisted Testing, Equations (Mathematics), Linear Programing
Peer reviewed: Berger, Steven G.; And Others – Assessment, 1994
As part of a neuropsychological assessment, 95 adult patients completed either standard or computerized versions of the Category Test. Subjects who completed the computerized version exhibited more errors than those who completed the standard version, suggesting that it may be more difficult. (SLD)
Descriptors: Adults, Comparative Analysis, Computer Assisted Testing, Demography
Peer reviewed: Hetter, Rebecca D.; And Others – Applied Psychological Measurement, 1994
Effects on computerized adaptive test score of using a paper-and-pencil (P&P) calibration to select items and estimate scores were compared with effects of using computer calibration. Results with 2,999 Navy recruits support the use of item parameters calibrated from either P&P or computer administrations. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Estimation (Mathematics)
Peer reviewed: Page, Ellis Batten – Journal of Experimental Education, 1994
National Assessment of Educational Progress writing sample essays from 1988 and 1990 (495 and 599 essays) were subjected to computerized grading and human ratings. Cross-validation suggests that computer scoring is superior to a two-judge panel, a finding encouraging for large programs of essay evaluation. (SLD)
Descriptors: Computer Assisted Testing, Computer Software, Essays, Evaluation Methods
Peer reviewed: Jones, W. Paul – Measurement and Evaluation in Counseling and Development, 1993
Investigated model for reducing time for administration of Myers-Briggs Type Indicator (MBTI) using real-data simulation of Bayesian scaling in computerized adaptive administration. Findings from simulation study using data from 127 undergraduates are strongly supportive of use of Bayesian scaled computerized adaptive administration of MBTI.…
Descriptors: Bayesian Statistics, Classification, College Students, Computer Assisted Testing
Peer reviewed: Stocking, Martha L.; Swanson, Len – Applied Psychological Measurement, 1993
A method is presented for incorporating a large number of constraints on adaptive item selection in the construction of computerized adaptive tests. The method, which emulates practices of expert test specialists, is illustrated for verbal and quantitative measures. Its foundation is application of a weighted deviations model and algorithm. (SLD)
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Expert Systems
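The core idea of a weighted deviations approach is to score each candidate item by its information minus weighted penalties for the content-balance constraints its selection would violate, then pick the best-scoring item. The sketch below is a minimal illustration of that trade-off under an assumed 2PL model; the item-dictionary layout, penalty form, and constraint encoding are assumptions, not the Stocking-Swanson algorithm itself:

```python
import math

def item_info(theta, a, b):
    """2PL Fisher information: a^2 * p * (1 - p)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def wdm_pick(theta_hat, pool, used, counts, constraints):
    """Weighted-deviations sketch: information minus weighted
    projected overshoot of content targets.

    pool:        list of dicts {'a': ..., 'b': ..., 'content': ...}
    used:        set of indices already administered
    counts:      content area -> items given from that area so far
    constraints: content area -> (target_count, penalty_weight)
    """
    def score(i):
        item = pool[i]
        s = item_info(theta_hat, item['a'], item['b'])
        for area, (target, weight) in constraints.items():
            have = counts.get(area, 0)
            if item['content'] == area:
                have += 1                      # project this selection
            s -= weight * max(0, have - target)  # penalize overshoot
        return s
    candidates = [i for i in range(len(pool)) if i not in used]
    return max(candidates, key=score)
```

Because constraints enter as penalties rather than hard filters, an over-quota item can still be chosen when its information advantage outweighs the penalty, mirroring how expert assemblers trade off content balance against measurement precision.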


