Peer reviewed: Lunz, Mary E.; And Others – Applied Psychological Measurement, 1992
The effects of reviewing items and altering responses on the efficiency of computerized adaptive tests and on examinees' resultant ability estimates were explored for medical technology students (220 could, and 492 could not, review and alter their responses). The data do not support disallowing review. (SLD)
Descriptors: Ability, Adaptive Testing, Comparative Testing, Computer Assisted Testing
Peer reviewed: Dassa, Clement; And Others – Alberta Journal of Educational Research, 1993
Presents a conceptual framework of educational diagnosis based on formative evaluation. Describes a computer-based diagnostic system that maximizes the coherence and effectiveness of obtained information by generating a structural diagnosis of the nature of student errors and a causal diagnosis related to teaching methods. Contains 64 references.
Descriptors: Adaptive Testing, Computer Assisted Testing, Educational Diagnosis, Educational Innovation
Peer reviewed: Kingsbury, G. Gage; Houser, Ronald L. – Educational Measurement: Issues and Practice, 1993
The utility of item response theory (IRT) models in computerized adaptive tests is considered. Measurement questions that have been answered using IRT, and those that might be overlooked because of IRT, are reviewed. Areas in which fuller use of IRT could improve adaptive testing practices are identified. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Educational Assessment, Elementary Secondary Education
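The central use of IRT in adaptive testing that this entry discusses is item selection: administer the item carrying the greatest Fisher information at the current ability estimate. A minimal sketch under the three-parameter logistic (3PL) model; the item parameters below are invented for illustration and are not taken from any study listed here:

```python
import math

def p_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL model."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = p_3pl(theta, a, b, c)
    q = 1.0 - p
    return (a ** 2) * (q / p) * ((p - c) / (1.0 - c)) ** 2

def select_item(theta, pool):
    """Pick the pool item (a, b, c) with maximum information at theta."""
    return max(pool, key=lambda item: item_information(theta, *item))

# Hypothetical item pool: (discrimination a, difficulty b, guessing c).
pool = [(1.2, -0.5, 0.2), (0.8, 0.0, 0.25), (1.5, 1.0, 0.2)]
best = select_item(0.0, pool)  # most informative item for theta = 0
```

After each response, theta is re-estimated and the selection step repeats, which is what makes the test adaptive.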
Peer reviewed: Wise, Steven L.; Plake, Barbara S. – Measurement and Evaluation in Counseling and Development, 1990
Discusses the unique advantages provided by computer-based (CB) testing. Describes the various forms of CB tests used in higher education and the variety of testing applications of computers in colleges and universities. Presents psychometric issues and concerns related to CB testing along with relevant research findings. (Author/PVV)
Descriptors: Adaptive Testing, Codes of Ethics, Computer Assisted Testing, Feedback
Cole, Jason C.; Lutkus, Anthony D. – Research in the Schools, 1997
A college administered the computer-adaptive ACCUPLACER (College Board, 1995) reading placement test to 399 entering students and its paper-and-pencil version, COMPANION, to 481 students. When the age of the two groups was held constant, no differences were found between the groups. (SLD)
Descriptors: Adaptive Testing, Age Differences, College Freshmen, Comparative Analysis
Peer reviewed: Shermis, Mark D.; Mzumara, Howard R.; Bublitz, Scott T. – Journal of Educational Computing Research, 2001
This study of undergraduates examined differences between computer adaptive testing (CAT) and self-adaptive testing (SAT), including feedback conditions and gender differences. Results of the Test Anxiety Inventory, Computer Anxiety Rating Scale, and a Student Attitude Questionnaire showed measurement efficiency is differentially affected by test…
Descriptors: Adaptive Testing, Computer Anxiety, Computer Assisted Testing, Gender Issues
Peer reviewed: Merenda, Peter F. – Measurement and Evaluation in Counseling and Development, 1996
The Behavior Assessment System for Children (BASC), designed to assess children's and adolescents' emotional disorders, personality constructs, and behavior problems, is discussed, along with its technical manual and psychometric properties such as norms, reliability, and validity. Offers favorable commentary on the BASC. (KW)
Descriptors: Adaptive Testing, Behavior Problems, Children, Emotional Disturbances
Meijer, Rob R. – Journal of Educational Measurement, 2004
Two new methods have been proposed to determine unexpected sum scores on sub-tests (testlets) both for paper-and-pencil tests and computer adaptive tests. A method based on a conservative bound using the hypergeometric distribution, denoted p, was compared with a method where the probability for each score combination was calculated using a…
Descriptors: Probability, Adaptive Testing, Item Response Theory, Scores
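The exact score-combination approach that the conservative hypergeometric bound is compared against amounts to computing the distribution of the number-correct score on a testlet from model-implied response probabilities (a Poisson-binomial calculation). A sketch, assuming the per-item probabilities come from a fitted IRT model; this is an illustration of the general idea, not the paper's exact procedure:

```python
def score_distribution(probs):
    """Exact distribution of the number-correct score on a testlet,
    treating responses as independent Bernoulli trials with the given
    success probabilities (Poisson binomial), via dynamic programming."""
    dist = [1.0]  # P(score = 0) before any items
    for p in probs:
        new = [0.0] * (len(dist) + 1)
        for s, mass in enumerate(dist):
            new[s] += mass * (1.0 - p)      # item answered incorrectly
            new[s + 1] += mass * p          # item answered correctly
        dist = new
    return dist  # dist[s] = P(score = s)

def tail_probability(probs, observed):
    """P(score <= observed): a small value flags an unexpectedly
    low testlet score for this examinee."""
    dist = score_distribution(probs)
    return sum(dist[: observed + 1])
```

Flagging an examinee then reduces to comparing this tail probability against a chosen significance level.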
Li, Yuan H.; Schafer, William D. – Applied Psychological Measurement, 2005
Under a multidimensional item response theory (MIRT) computerized adaptive testing (CAT) testing scenario, a trait estimate (theta) in one dimension will provide clues for subsequently seeking a solution in other dimensions. This feature may enhance the efficiency of MIRT CAT's item selection and its scoring algorithms compared with its…
Descriptors: Adaptive Testing, Item Banks, Computation, Psychological Studies
Stock, Steven E.; Davies, Daniel K.; Wehmeyer, Michael L. – Journal of Special Education Technology, 2004
Assessment has always been an integral component of the educational process, but the importance to students of performing effectively on district and statewide tests has increased the visibility of testing and assessment for students with and without disabilities. There are several factors that limit the reliability of common testing formats for…
Descriptors: Computer Assisted Testing, Mental Retardation, Adaptive Testing, Evaluation Methods
Zwick, Rebecca; And Others – 1993
Simulated data were used to investigate the performance of modified versions of the Mantel-Haenszel and standardization methods of differential item functioning (DIF) analysis in computer-adaptive tests (CATs). Each "examinee" received 25 items out of a 75-item pool. A three-parameter logistic item response model was assumed, and…
Descriptors: Adaptive Testing, Computer Assisted Testing, Correlation, Error of Measurement
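The Mantel-Haenszel side of this DIF analysis pools 2x2 tables of correct/incorrect counts for the reference and focal groups across matched-score strata into a common odds-ratio estimate, where 1.0 indicates no DIF. A minimal sketch with invented counts (the CAT-specific modifications studied in the paper are not reproduced here):

```python
def mantel_haenszel_odds_ratio(strata):
    """Mantel-Haenszel common odds-ratio estimate across score strata.

    Each stratum is a 2x2 table (A, B, C, D):
      A = reference-group correct,  B = reference-group incorrect,
      C = focal-group correct,      D = focal-group incorrect.
    """
    numerator = denominator = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        numerator += a * d / n
        denominator += b * c / n
    return numerator / denominator

# Two hypothetical strata with identical odds in both groups: no DIF.
no_dif = [(20, 10, 10, 5), (15, 15, 9, 9)]
ratio = mantel_haenszel_odds_ratio(no_dif)  # 1.0
```

A ratio well above or below 1.0 suggests the item behaves differently for the two groups after conditioning on the matching score.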
Slater, Sharon C.; Schaeffer, Gary A. – 1996
The General Computer Adaptive Test (CAT) of the Graduate Record Examinations (GRE) includes three operational sections that are separately timed and scored. A "no score" is reported if the examinee answers fewer than 80% of the items or if the examinee does not answer all of the items and leaves the section before time expires. The 80%…
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Equal Education
Bizot, Elizabeth B.; Goldman, Steven H. – 1994
A study was conducted to evaluate the effects of choice of item response theory (IRT) model, parameter calibration group, starting ability estimate, and stopping criterion on the conversion of an 80-item vocabulary test to computer adaptive format. Three parameter calibration groups were tested: (1) a group of 1,000 high school seniors, (2) a…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
De Champlain, Andre; Gessaroli, Marc E. – 1996
The use of indices and statistics based on nonlinear factor analysis (NLFA) has become increasingly popular as a means of assessing the dimensionality of an item response matrix. Although the indices and statistics currently available to the practitioner have been shown to be useful and accurate in many testing situations, few studies have…
Descriptors: Adaptive Testing, Chi Square, Computer Assisted Testing, Factor Analysis
Plake, Barbara S.; And Others – 1994
In self-adapted testing (SAT), examinees select the difficulty level of items administered. This study investigated three variations of prior information provided when taking an SAT: (1) no information (examinees selected item difficulty levels without prior information); (2) view (examinees inspected a typical item from each difficulty level…
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Difficulty Level
