| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 29 |
| Since 2022 (last 5 years) | 168 |
| Since 2017 (last 10 years) | 329 |
| Since 2007 (last 20 years) | 613 |
| Descriptor | Count |
| --- | --- |
| Computer Assisted Testing | 1057 |
| Test Items | 1057 |
| Adaptive Testing | 448 |
| Test Construction | 385 |
| Item Response Theory | 255 |
| Item Banks | 223 |
| Foreign Countries | 194 |
| Difficulty Level | 166 |
| Test Format | 160 |
| Item Analysis | 158 |
| Simulation | 142 |
| Audience | Count |
| --- | --- |
| Researchers | 24 |
| Practitioners | 20 |
| Teachers | 13 |
| Students | 2 |
| Administrators | 1 |
| Location | Count |
| --- | --- |
| Germany | 17 |
| Australia | 13 |
| Japan | 12 |
| Taiwan | 12 |
| Turkey | 12 |
| United Kingdom | 12 |
| China | 11 |
| Oregon | 10 |
| Canada | 9 |
| Netherlands | 9 |
| United States | 9 |
| Laws, Policies, & Programs | Count |
| --- | --- |
| Individuals with Disabilities… | 8 |
| Americans with Disabilities… | 1 |
| Head Start | 1 |
Peer reviewed. Wise, Steven L.; Finney, Sara J.; Enders, Craig K.; Freeman, Sharon A.; Severance, Donald D. – Applied Measurement in Education, 1999
Examined whether providing item review on a computerized adaptive test could be used by examinees to inflate their scores. Two studies involving 139 undergraduates suggest that examinees are not highly proficient at discriminating item difficulty. A simulation study showed the usefulness of a strategy identified by G. Kingsbury (1996) as a way to…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Higher Education
Peer reviewed. Mooney, G. A.; Bligh, J. G.; Leinster, S. J. – Medical Teacher, 1998
Presents a system of classification for describing computer-based assessment techniques based on the level of action and educational activity they offer. Illustrates 10 computer-based assessment techniques and discusses their educational value. Contains 14 references. (Author)
Descriptors: Adaptive Testing, Classification, Computer Assisted Testing, Foreign Countries
Peer reviewed. Vispoel, Walter P. – Journal of Educational Measurement, 1998
Studied effects of administration mode [computer adaptive test (CAT) versus self-adaptive test (SAT)], item-by-item answer feedback, and test anxiety on results from computerized vocabulary tests taken by 293 college students. CATs were more reliable than SATs, and administration time was less when feedback was provided. (SLD)
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Feedback
Peer reviewed. Reise, Steven P. – Applied Psychological Measurement, 2001
This book contains a series of research articles about computerized adaptive testing (CAT) written for advanced psychometricians. The book is divided into sections on: (1) item selection and examinee scoring in CAT; (2) examples of CAT applications; (3) item banks; (4) determining model fit; and (5) using testlets in CAT. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Goodness of Fit, Item Banks
Wise, Steven L.; Kong, Xiaojing – Applied Measurement in Education, 2005
When low-stakes assessments are administered, the degree to which examinees give their best effort is often unclear, complicating the validity and interpretation of the resulting test scores. This study introduces a new method, based on item response time, for measuring examinee test-taking effort on computer-based test items. This measure, termed…
Descriptors: Psychometrics, Validity, Reaction Time, Test Items
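The response-time idea in the abstract above can be sketched as a simple threshold rule: a response counts as effortful when its time exceeds a minimum plausible solution time, and an examinee's effort score is the proportion of effortful responses. The function name, per-item thresholds, and values below are illustrative assumptions, not necessarily the authors' exact formulation.

```python
def effort_index(response_times, thresholds):
    """Proportion of items answered with at least the minimum plausible
    solution time -- a rough filter for rapid-guessing behavior."""
    effortful = sum(1 for t, thr in zip(response_times, thresholds) if t >= thr)
    return effortful / len(response_times)

# Example: four items, each with a hypothetical 5-second threshold
print(effort_index([1.2, 10.4, 12.0, 7.5], [5, 5, 5, 5]))  # -> 0.75
```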
A Closer Look at Using Judgments of Item Difficulty to Change Answers on Computerized Adaptive Tests
Vispoel, Walter P.; Clough, Sara J.; Bleiler, Timothy – Journal of Educational Measurement, 2005
Recent studies have shown that restricting review and answer change opportunities on computerized adaptive tests (CATs) to items within successive blocks reduces time spent in review, satisfies most examinees' desires for review, and controls against distortion in proficiency estimates resulting from intentional incorrect answering of items prior…
Descriptors: Mathematics, Item Analysis, Adaptive Testing, Computer Assisted Testing
Ferdous, Abdullah A.; Plake, Barbara S.; Chang, Shu-Ren – Educational Assessment, 2007
The purpose of this study was to examine the effect of pretest items on response time in an operational, fixed-length, time-limited computerized adaptive test (CAT). These pretest items are embedded within the CAT, but unlike the operational items, are not tailored to the examinee's ability level. If examinees with higher ability levels need less…
Descriptors: Pretests Posttests, Reaction Time, Computer Assisted Testing, Test Items
Eggen, Theo J. H. M.; Verschoor, Angela J. – Applied Psychological Measurement, 2006
Computerized adaptive tests (CATs) are individualized tests that, from a measurement point of view, are optimal for each individual, possibly under some practical conditions. In the present study, it is shown that maximum information item selection in CATs using an item bank that is calibrated with the one- or the two-parameter logistic model…
Descriptors: Adaptive Testing, Difficulty Level, Test Items, Item Response Theory
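Maximum-information item selection under the two-parameter logistic (2PL) model, as studied above, can be sketched as follows: each item's Fisher information at the current ability estimate is a²P(θ)(1 − P(θ)), and the next item administered is the unused one maximizing it. The item parameters here are made-up illustrations, not values from the study.

```python
import math

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_next_item(theta, items, administered):
    """Pick the unadministered item with maximum information at theta."""
    candidates = [i for i in range(len(items)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, *items[i]))

# Illustrative bank of (a, b) parameter pairs; item 2 already administered
bank = [(1.0, -1.0), (1.5, 0.0), (0.8, 1.0)]
print(select_next_item(0.0, bank, administered={2}))  # -> 1
```

The highly discriminating item located at the current ability estimate (a = 1.5, b = 0.0) wins, which is exactly the behavior that makes maximum-information CATs efficient.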
Alonzo, Julie; Tindal, Gerald – Behavioral Research and Teaching, 2009
In this technical report, we describe the development and piloting of a series of mathematics progress monitoring measures intended for use with students in kindergarten. These measures, available as part of easyCBM[TM], an online progress monitoring assessment system, were developed in 2008 and administered to approximately 2800 students from…
Descriptors: Kindergarten, General Education, Response to Intervention, Access to Education
Chang, Wen-Chih; Yang, Hsuan-Che; Shih, Timothy K.; Chao, Louis R. – International Journal of Distance Education Technologies, 2009
E-learning provides a convenient and efficient way to learn. Formative assessment not only guides students through instruction and learning and diagnoses skill or knowledge gaps, but also measures progress and supports evaluation. An efficient and convenient formative assessment system is therefore a key component of e-learning. However, most e-learning…
Descriptors: Electronic Learning, Student Evaluation, Formative Evaluation, Educational Objectives
Dorans, Neil J.; Schmitt, Alicia P. – 1991
Differential item functioning (DIF) assessment attempts to identify items or item types for which subpopulations of examinees exhibit performance differentials that are not consistent with the performance differentials typically seen for those subpopulations on collections of items that purport to measure a common construct. DIF assessment…
Descriptors: Computer Assisted Testing, Constructed Response, Educational Assessment, Item Bias
van der Linden, Wim J. – 1995
Dichotomous item response theory (IRT) models can be viewed as families of stochastically ordered distributions of responses to test items. This paper explores several properties of such distributions. The focus is on the conditions under which stochastic order in families of conditional distributions is transferred to their inverse distributions,…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Foreign Countries
Bergstrom, Betty A.; Stahl, John A. – 1992
This paper reports a method for assessing the adequacy of existing item banks for computer adaptive testing. The method takes into account content specifications, test length, and stopping rules, and can be used to determine if an existing item bank is adequate to administer a computer adaptive test efficiently across differing levels of examinee…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Evaluation Methods
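One way to operationalize a bank-adequacy check like the one described above is to compute the total information the bank can deliver at a grid of ability levels and flag levels that fall short of a target; thin coverage at some θ signals the bank cannot support an efficient CAT there. The 2PL parameters, grid, and target value below are assumptions for illustration, not the method's actual criteria.

```python
import math

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def bank_information(theta, bank):
    """Total 2PL Fisher information the bank offers at a given ability level."""
    total = 0.0
    for a, b in bank:
        p = p_2pl(theta, a, b)
        total += a * a * p * (1.0 - p)
    return total

def coverage_gaps(bank, thetas, target):
    """Ability levels where the bank falls short of the target information."""
    return [t for t in thetas if bank_information(t, bank) < target]

# Illustrative bank concentrated near theta = 0
bank = [(1.2, -0.5), (1.0, 0.0), (1.3, 0.4)]
grid = [-2.0, -1.0, 0.0, 1.0, 2.0]
print(coverage_gaps(bank, grid, target=0.5))  # -> [-2.0, 2.0]
```

Because all three items sit near θ = 0, the bank is information-rich in the middle of the scale but inadequate at the extremes, which is the kind of deficiency such an adequacy check is meant to expose.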
Clariana, Roy B. – 1990
Research has shown that multiple-choice questions formed by transforming or paraphrasing a reading passage provide a measure of student comprehension. It is argued that similar transformation and paraphrasing of lesson questions is an appropriate way to form parallel multiple-choice items to be used as a posttest measure of student comprehension.…
Descriptors: Comprehension, Computer Assisted Testing, Difficulty Level, Measurement Techniques
Jelden, D. L. – 1987
A study of 696 undergraduates at the University of Northern Colorado was undertaken to determine the effects of computerized unit test item feedback on final examination scores. The study, which employed the PHOENIX computer managed instruction system, included students at all undergraduate levels enrolled in an Oceanography course. To determine…
Descriptors: College Students, Computer Assisted Instruction, Computer Assisted Testing, Feedback