Publication Date
| Range | Records |
| --- | --- |
| In 2026 | 1 |
| Since 2025 | 27 |
| Since 2022 (last 5 years) | 133 |
| Since 2017 (last 10 years) | 265 |
| Since 2007 (last 20 years) | 494 |
Descriptor
| Term | Records |
| --- | --- |
| Computer Assisted Testing | 1112 |
| Test Construction | 1112 |
| Test Items | 386 |
| Adaptive Testing | 275 |
| Foreign Countries | 234 |
| Test Validity | 196 |
| Item Banks | 194 |
| Higher Education | 165 |
| Evaluation Methods | 147 |
| Test Format | 146 |
| Test Reliability | 142 |
Audience
| Group | Records |
| --- | --- |
| Researchers | 49 |
| Practitioners | 36 |
| Teachers | 21 |
| Administrators | 5 |
| Policymakers | 5 |
| Counselors | 2 |
| Media Staff | 1 |
| Support Staff | 1 |
Location
| Place | Records |
| --- | --- |
| Australia | 18 |
| Canada | 16 |
| Taiwan | 13 |
| Turkey | 13 |
| Spain | 12 |
| United Kingdom | 12 |
| Germany | 11 |
| Indonesia | 10 |
| Oregon | 10 |
| China | 9 |
| United States | 9 |
Laws, Policies, & Programs
| Law or Program | Records |
| --- | --- |
| Individuals with Disabilities… | 10 |
| No Child Left Behind Act 2001 | 5 |
| Elementary and Secondary… | 1 |
| Elementary and Secondary… | 1 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Does not meet standards | 1 |
Schnipke, Deborah L.; Reese, Lynda M. – 1999
Two-stage and multistage test designs provide a way of roughly adapting item difficulty to test taker ability. This study incorporated testlets (bundles of items) into two-stage and multistage designs, and compared the precision of the ability estimates derived from these designs with those derived from a standard computerized adaptive test (CAT)…
Descriptors: Adaptive Testing, College Entrance Examinations, Computer Assisted Testing, Law Schools
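The core idea behind the adaptive designs compared above is matching item difficulty to a running ability estimate. As a rough illustration (not the authors' design), here is a toy CAT simulation under the Rasch model; the item pool, the closest-difficulty selection rule, and the stochastic-approximation update are all simplifying assumptions for the sketch.

```python
import math
import random

def prob_correct(theta, b):
    """Rasch model: probability of a correct response at ability theta, difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def run_cat(true_theta, item_difficulties, n_items=10, lr=0.5):
    """Toy CAT: administer the unused item whose difficulty is closest to the
    current ability estimate (where Rasch item information peaks), then nudge
    the estimate by the residual between the observed score and the model's
    expected score."""
    theta_hat = 0.0
    unused = list(item_difficulties)
    for _ in range(n_items):
        item = min(unused, key=lambda b: abs(b - theta_hat))
        unused.remove(item)
        correct = random.random() < prob_correct(true_theta, item)
        theta_hat += lr * ((1 if correct else 0) - prob_correct(theta_hat, item))
    return theta_hat

random.seed(0)
pool = [i / 10.0 - 2.0 for i in range(41)]  # difficulties from -2.0 to +2.0
est = run_cat(true_theta=1.0, item_difficulties=pool, n_items=20)
```

A multistage variant would replace the per-item selection with routing between pre-assembled testlets, which is what the study evaluates against this item-level baseline.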
Thompson, Tony D.; Davey, Tim – 2000
This paper applies specific information item selection using a method developed by T. Davey and M. Fan (2000) to a multiple-choice passage-based reading test that is being developed for computer administration. Data used to calibrate the multidimensional item parameters for the simulation study consisted of item responses from randomly equivalent…
Descriptors: Adaptive Testing, Computer Assisted Testing, Reading Tests, Selection
Habick, Timothy – 1999
With the advent of computer-based testing (CBT) and the need to increase the number of items available in computer adaptive test pools, the idea of item variants was conceived. An item variant can be defined as an item with content based on an existing item to a greater or lesser degree. Item variants were first proposed as a way to enhance test…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Test Construction
Pellegrino, James W. – 2001
The recent National Research Council (NRC) report, "Knowing What Students Know: The Science and Design of Educational Assessment," suggests that it is time to rethink the basic assumptions underlying assessment of students and the use of measurement data to enhance teaching and learning. This essay draws on arguments developed in the NRC report to…
Descriptors: Computer Assisted Testing, Educational Technology, Elementary Secondary Education, Models
Optimal Stratification of Item Pools in a-Stratified Computerized Adaptive Testing. Research Report.
van der Linden, Wim J. – 2000
A method based on 0-1 linear programming (LP) is presented to stratify an item pool optimally for use in "alpha"-stratified adaptive testing. Because the 0-1 LP model belongs to the subclass of models with a network-flow structure, efficient solutions are possible. The method is applied to a previous item pool from the computerized…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Linear Programming
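The paper formulates stratification as a 0-1 linear program; as a much simpler point of comparison, here is the basic sort-and-slice heuristic for "alpha"-stratification (rank items by their discrimination parameter a and cut the ranking into contiguous strata). This is an illustrative sketch, not the LP method of the report.

```python
def stratify_pool(items, n_strata):
    """Sort items by discrimination (a) and slice the ranking into
    n_strata contiguous strata of nearly equal size, so early test
    stages can draw from low-a strata and later stages from high-a ones."""
    ranked = sorted(items, key=lambda it: it["a"])
    size, rem = divmod(len(ranked), n_strata)
    strata, start = [], 0
    for k in range(n_strata):
        end = start + size + (1 if k < rem else 0)
        strata.append(ranked[start:end])
        start = end
    return strata

# hypothetical pool: each item carries an id and a discrimination value
pool = [{"id": i, "a": a} for i, a in
        enumerate([0.6, 1.8, 0.9, 1.2, 0.7, 2.0, 1.1, 1.5])]
strata = stratify_pool(pool, 3)
```

The LP formulation improves on this by also balancing difficulty and content constraints across strata, which a plain sort cannot guarantee.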
Peer reviewed: Rock, Daniel L.; Nolen, Patricia A. – Perceptual and Motor Skills, 1982
Children aged 7 to 14 years were administered a computerized version of the Raven's Coloured Progressive Matrices test. Computer and traditional version performance was found to be similar in terms of total mean score, correlation with the WISC-R, Raven's subscale intercorrelations, and Raven's total mean score composition. (Author/RD)
Descriptors: Computer Assisted Testing, Correlation, Elementary Secondary Education, Psychoeducational Clinics
Peer reviewed: McKinley, Robert L.; Reckase, Mark D. – AEDS Journal, 1980
Describes tailored testing (in which a computer selects appropriate items from an item bank while an examinee is taking a test) and shows it to be superior to paper-and-pencil tests in such areas as reliability, security, and appropriateness of items. (IRT)
Descriptors: Adaptive Testing, Computer Assisted Testing, Higher Education, Program Evaluation
Peer reviewed: Davey, Tim; And Others – Journal of Educational Measurement, 1997
The development and scoring of a recently introduced computer-based writing skills test is described. The test asks the examinee to edit a writing passage presented on a computer screen. Scoring difficulties are addressed through the combined use of option weighting and the sequential probability ratio test. (SLD)
Descriptors: Computer Assisted Testing, Educational Innovation, Probability, Scoring
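The entry above combines option weighting with the sequential probability ratio test; the SPRT component in its textbook (Wald) form makes a pass/fail decision as soon as the accumulated evidence is strong enough. The sketch below is that generic form with illustrative parameter values, not the paper's combined scoring procedure.

```python
import math

def sprt_classify(responses, p_fail=0.5, p_pass=0.7, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for a pass/fail decision.
    responses: iterable of 0/1 item scores. Accumulates the log-likelihood
    ratio of 'pass' vs 'fail' and stops at the first boundary crossing;
    returns 'undecided' if the responses run out first."""
    upper = math.log((1 - beta) / alpha)   # cross upward -> decide 'pass'
    lower = math.log(beta / (1 - alpha))   # cross downward -> decide 'fail'
    llr = 0.0
    for x in responses:
        if x:
            llr += math.log(p_pass / p_fail)
        else:
            llr += math.log((1 - p_pass) / (1 - p_fail))
        if llr >= upper:
            return "pass"
        if llr <= lower:
            return "fail"
    return "undecided"
```

Option weighting would replace the 0/1 scores with partial-credit weights per selected option before they enter the likelihood ratio.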
Peer reviewed: Shermis, Mark D.; Averitt, Jason – Educational Measurement: Issues and Practice, 2002
Outlines a series of security steps that might be taken by researchers or organizations that are contemplating Web-based tests and performance assessments. Focuses on what can be done to avoid the loss, compromising, or modification of data collected by or stored through the Internet. (SLD)
Descriptors: Computer Assisted Testing, Data Collection, Performance Based Assessment, Test Construction
Peer reviewed: Goldberg, Amie L.; Pedulla, Joseph J. – Educational and Psychological Measurement, 2002
Studied the relationship between test mode (paper and pencil or computerized with and without editorial control) and computer familiarity for 222 undergraduates. Results emphasize the importance of evaluating time constraints when converting exams from paper to computer delivery. (SLD)
Descriptors: Computer Assisted Testing, Computer Literacy, Higher Education, Test Construction
Peer reviewed: Laatsch, Linda; Choca, James – Psychological Assessment, 1994
The authors propose using cluster analysis to develop a branching logic that would allow the adaptive administration of psychological instruments. The proposed methodology is described in detail and used to develop an adaptive version of the Halstead Category Test from archival data. (SLD)
Descriptors: Adaptive Testing, Cluster Analysis, Computer Assisted Testing, Psychological Testing
Peer reviewed: Kyllonen, Patrick C. – Intelligence, 1991
The experience of developing a set of comprehensive aptitude batteries for computer administration for the Air Force Human Resources Laboratory's Learning Abilities Measurement Program resulted in the formulation of nine principles for creation of a computerized test battery. These principles are discussed in the context of research on…
Descriptors: Aptitude Tests, Computer Assisted Testing, Intelligence Tests, Learning Processes
Peer reviewed: Chalhoub-Deville, Micheline; Deville, Craig – Annual Review of Applied Linguistics, 1999
Provides a broad overview of computerized testing issues with an emphasis on computer-adaptive testing (CAT). A survey of the potential benefits and drawbacks of CAT is given; the process of CAT development is described; and some L2 instruments developed to assess various language skills are summarized. (Author/VWL)
Descriptors: Computer Assisted Testing, Language Skills, Language Tests, Second Language Learning
Ashton, Helen S.; Beevers, Cliff E.; Korabinski, Athol A.; Youngson, Martin A. – British Journal of Educational Technology, 2006
In a mathematical examination on paper, partial credit is normally awarded for an answer that is not correct, but, nevertheless, contains some of the correct working. Assessment on computer normally marks an incorrect answer wrong and awards no marks. This can lead to discrepancies between marks awarded for the same examination given in the two…
Descriptors: Secondary School Mathematics, Computer Assisted Testing, Mathematics Tests, Grading
Ritter, Lois A., Ed.; Sue, Valerie M., Ed. – New Directions for Evaluation, 2007
This chapter provides an overview of sampling methods that are appropriate for conducting online surveys. The authors review some of the basic concepts relevant to online survey sampling, present some probability and nonprobability techniques for selecting a sample, and briefly discuss sample size determination and nonresponse bias. Although some…
Descriptors: Sampling, Probability, Evaluation Methods, Computer Assisted Testing
