Rudner, Lawrence M., Ed.; Schafer, William D., Ed. – Practical Assessment, Research and Evaluation, 2000
This document consists of articles 1 through 14 of volume 6 of "Practical Assessment, Research & Evaluation": (1) "Seven Myths about Literacy in the United States" (Jeff McQuillan); (2) "Implementing Performance Assessment in the Classroom" (Amy Brualdi); (3) "Some Evaluation Questions" (William Shadish); (4) "Item Banking" (Lawrence Rudner); (5)…
Descriptors: Educational Research, Elementary Secondary Education, Evaluation, Item Banks
Hertz, Norman R.; Chinn, Roberta N. – 2003
This study explored the effect of item exposure on two conventional examinations administered as computer-based tests. A principal hypothesis was that item exposure would have little or no effect on average difficulty of the items over the course of an administrative cycle. This hypothesis was tested by exploring conventional item statistics and…
Descriptors: Computer Assisted Testing, Item Banks, Item Response Theory, Licensing Examinations (Professions)
Peer reviewed
Armstrong, R. D.; And Others – Applied Psychological Measurement, 1996
When the network-flow algorithm (NFA) and the average growth approximation algorithm (AGAA) were used for automated test assembly with American College Test and Armed Services Vocational Aptitude Battery item banks, results indicate that reasonable error in item parameters is not harmful for test assembly using NFA or AGAA. (SLD)
Descriptors: Algorithms, Aptitude Tests, College Entrance Examinations, Computer Assisted Testing
Peer reviewed
Vazquez-Abad, Jesus; LaFleur, Marc – Computers and Education, 1990
Reviews criticisms of the use of drill and practice programs in educational computing and describes the potential of such programs in instruction. Topics discussed include guidelines for developing computer-based drill and practice; scripted training courseware; item format design; item bank design; and a performance-responsive algorithm for item…
Descriptors: Algorithms, Computer Assisted Instruction, Courseware, Drills (Practice)
Peer reviewed
Bock, R. Darrell; And Others – Journal of Educational Measurement, 1988
Differential drift of item location parameters over a 10-year period is demonstrated in data from the College Board Physics Achievement Test. Item content and secondary school curricula shifts are associated with drift. Statistical procedures for detecting item parameter drift in item pools for long-term testing programs are proposed. (TJH)
Descriptors: College Entrance Examinations, Item Analysis, Item Banks, Latent Trait Theory
Peer reviewed
Koch, William R.; Dodd, Barbara G. – Educational and Psychological Measurement, 1995
Basic procedures for performing computerized adaptive testing based on the successive intervals (SI) Rasch model were evaluated. The SI model was applied to simulated and real attitude data sets. Item pools as small as 30 items performed well, and the model appeared practical for Likert-type data. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Item Response Theory
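Several of the entries here concern computerized adaptive testing from an item bank. As a rough illustration of the core loop (maximum-information item selection followed by ability re-estimation), here is a sketch for the plain dichotomous Rasch model. Note that Koch and Dodd's study used the successive intervals Rasch model for Likert-type data, which this toy does not implement; the 30-item bank and all difficulties below are invented.

```python
import math
import random

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a Rasch item at ability theta."""
    p = rasch_p(theta, b)
    return p * (1.0 - p)

def update_theta(theta, responses, bs, iters=10):
    """Newton-Raphson maximum-likelihood update for theta."""
    for _ in range(iters):
        grad = sum(u - rasch_p(theta, b) for u, b in zip(responses, bs))
        info = sum(item_information(theta, b) for b in bs)
        if info < 1e-9:
            break
        theta += grad / info
        theta = max(-4.0, min(4.0, theta))  # keep the estimate in a sane range
    return theta

def run_cat(bank, true_theta, test_length=15, rng=None):
    """Administer a maximum-information CAT from a small item bank."""
    rng = rng or random.Random(0)
    theta = 0.0
    administered, responses = [], []
    available = dict(enumerate(bank))
    for _ in range(test_length):
        # pick the unused item with maximum information at the current theta
        idx = max(available, key=lambda i: item_information(theta, available[i]))
        b = available.pop(idx)
        u = 1 if rng.random() < rasch_p(true_theta, b) else 0
        administered.append(b)
        responses.append(u)
        theta = update_theta(theta, responses, administered)
    return theta

bank = [-3.0 + 6.0 * i / 29 for i in range(30)]  # 30-item bank, b in [-3, 3]
est = run_cat(bank, true_theta=1.0)
```

A fixed test length is only one possible stopping rule; operational CATs often stop instead when the standard error (the reciprocal square root of the accumulated information) falls below a threshold.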
Peer reviewed
Armstrong, Ronald D.; And Others – Psychometrika, 1992
A method is presented and illustrated for simultaneously generating multiple tests with similar characteristics from the item bank by using binary programming techniques. The parallel tests are created to match an existing seed test item for item and to match user-supplied taxonomic specifications. (SLD)
Descriptors: Algorithms, Arithmetic, Computer Assisted Testing, Equations (Mathematics)
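The binary-programming idea behind Armstrong et al.'s method can be shown at toy scale by exhaustive search over 0/1 item-selection vectors: choose a set of items, disjoint from the seed form, whose total difficulty best matches the seed form's total. A real application would use an integer-programming solver and would also enforce taxonomic constraints; the bank, item IDs, and difficulties below are invented.

```python
from itertools import combinations

# Hypothetical item bank: (item_id, difficulty) pairs; all values invented.
bank = [("i1", -1.2), ("i2", -0.5), ("i3", 0.0), ("i4", 0.3),
        ("i5", 0.8), ("i6", 1.1), ("i7", -0.9), ("i8", 0.6)]
seed_form = ["i2", "i4", "i6"]          # existing seed test
seed_ids = set(seed_form)
target = sum(b for i, b in bank if i in seed_ids)

def best_parallel_form(bank, seed_ids, target, length):
    """Enumerate every 0/1 selection of the given length that avoids the
    seed items, returning the one whose total difficulty is closest to
    the seed form's total (a brute-force binary program)."""
    pool = [(i, b) for i, b in bank if i not in seed_ids]
    best, best_gap = None, float("inf")
    for combo in combinations(pool, length):
        gap = abs(sum(b for _, b in combo) - target)
        if gap < best_gap:
            best, best_gap = [i for i, _ in combo], gap
    return best, best_gap

form, gap = best_parallel_form(bank, seed_ids, target, len(seed_form))
```

Enumeration is exponential in the pool size, which is exactly why the paper's network-flow and binary-programming formulations matter for realistic banks.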
Peer reviewed
Adema, Jos J. – Applied Psychological Measurement, 1992
Two methods are proposed for the construction of weakly parallel tests based on a prespecified information function. A method is then described for selecting weakly parallel tests that are optimal with respect to the Maximin criterion. Numerical examples demonstrate the practicality of the tests. (SLD)
Descriptors: Equations (Mathematics), Heuristics, Item Banks, Item Response Theory
Peer reviewed
van der Linden, Wim J.; Luecht, Richard M. – Psychometrika, 1998
Derives a set of linear conditions of item-response functions that guarantees identical observed-score distributions on two test forms. The conditions can be added as constraints to a linear programming model for test assembly. An example illustrates the use of the model for an item pool from the Law School Admissions Test (LSAT). (SLD)
Descriptors: Equated Scores, Item Banks, Item Response Theory, Linear Programming
Peer reviewed
Mislevy, Robert J. – Applied Measurement in Education, 1998
How summarizing National Assessment of Educational Progress (NAEP) results in terms of a market basket of items would affect achievement-level reporting is discussed. A market basket is a specific set of items one may administer, scores on which constitute a reporting scale. Advantages and limitations of the market-basket approach are discussed.…
Descriptors: Academic Achievement, Achievement Tests, Elementary Secondary Education, Item Banks
Peer reviewed
Wang, Shudong; Wang, Tianyou – Applied Psychological Measurement, 2001
Evaluated the relative accuracy of the weighted likelihood estimate (WLE) of T. Warm (1989) compared to the maximum likelihood estimate (MLE), expected a posteriori estimate, and maximum a posteriori estimate. Results of the Monte Carlo study, which show the relative advantages of each approach, suggest that the test termination rule has more…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
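Warm's WLE maximizes the likelihood weighted by the square root of the test information, which for the Rasch model amounts to adding a bias-correction term I'(θ)/(2I(θ)) to the usual score equation. A minimal sketch, assuming a dichotomous Rasch model and invented item difficulties (the posterior-based estimators compared in the study are omitted):

```python
import math

def p_rasch(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def mle_equation(theta, us, bs):
    """Score function: derivative of the Rasch log-likelihood."""
    return sum(u - p_rasch(theta, b) for u, b in zip(us, bs))

def wle_equation(theta, us, bs):
    """Warm's weighted-likelihood estimating equation: the score
    function plus the bias-correction term I'(theta) / (2 I(theta))."""
    ps = [p_rasch(theta, b) for b in bs]
    info = sum(p * (1 - p) for p in ps)
    dinfo = sum(p * (1 - p) * (1 - 2 * p) for p in ps)
    return mle_equation(theta, us, bs) + dinfo / (2 * info)

def solve(fn, us, bs, lo=-6.0, hi=6.0, tol=1e-8):
    """Bisection on a monotone estimating equation."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if fn(mid, us, bs) > 0:   # the equation decreases in theta
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

bs = [-1.5, -0.5, 0.0, 0.5, 1.5]       # item difficulties (invented)
us = [1, 1, 1, 0, 0]                   # observed responses
mle = solve(mle_equation, us, bs)
wle = solve(wle_equation, us, bs)
```

Unlike the MLE, the WLE also remains finite for perfect and zero scores, one practical reason it is attractive in adaptive testing.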
Peer reviewed
Reise, Steven P. – Applied Psychological Measurement, 2001
This book contains a series of research articles about computerized adaptive testing (CAT) written for advanced psychometricians. The book is divided into sections on: (1) item selection and examinee scoring in CAT; (2) examples of CAT applications; (3) item banks; (4) determining model fit; and (5) using testlets in CAT. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Goodness of Fit, Item Banks
van Abswoude, Alexandra A. H.; Vermunt, Jeroen K.; Hemker, Bas T.; van der Ark, L. Andries – Applied Psychological Measurement, 2004
Mokken scale analysis (MSA) can be used to assess and build unidimensional scales from an item pool that is sensitive to multiple dimensions. These scales satisfy a set of scaling conditions, one of which follows from the model of monotone homogeneity. An important drawback of the MSA program is that the sequential item selection and scale…
Descriptors: Measures (Individuals), Item Analysis, Item Response Theory, Item Banks
Yin, Peng-Yeng; Chang, Kuang-Cheng; Hwang, Gwo-Jen; Hwang, Gwo-Haur; Chan, Ying – Educational Technology & Society, 2006
To accurately analyze the problems of students in learning, the composed test sheets must meet multiple assessment criteria, such as the ratio of relevant concepts to be evaluated, the average discrimination, difficulty, and estimated testing time. Furthermore, to precisely evaluate the improvement of students' learning performance…
Descriptors: Student Evaluation, Performance Based Assessment, Test Construction, Computer Assisted Testing
Bergstrom, Betty A.; Stahl, John A. – 1992
This paper reports a method for assessing the adequacy of existing item banks for computer adaptive testing. The method takes into account content specifications, test length, and stopping rules, and can be used to determine if an existing item bank is adequate to administer a computer adaptive test efficiently across differing levels of examinee…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Evaluation Methods
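The kind of bank-adequacy check Bergstrom and Stahl describe can be approximated by asking, at each ability level, whether the most informative items available would yield an acceptable standard error within the allowed test length (SE = 1/sqrt(information)). A sketch under a Rasch model, with an invented 60-item bank and an illustrative SE target:

```python
import math

def item_info(theta, b):
    """Rasch item information at ability theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def bank_adequacy(bank, thetas, test_length=20, target_se=0.5):
    """For each ability level, sum the information of the test_length
    most informative items and flag whether the implied standard error
    meets the target."""
    target_info = 1.0 / target_se ** 2
    report = {}
    for theta in thetas:
        infos = sorted((item_info(theta, b) for b in bank), reverse=True)
        best = sum(infos[:test_length])
        report[theta] = (best, best >= target_info)
    return report

# Hypothetical 60-item bank with difficulties spread over [-2, 2].
bank = [-2.0 + 4.0 * i / 59 for i in range(60)]
report = bank_adequacy(bank, thetas=[-3, -1.5, 0, 1.5, 3])
```

With these invented numbers the bank supports precise measurement near the middle of the scale but not at the extremes, which is the pattern such an adequacy check is meant to reveal; a fuller version would also respect content specifications and the CAT's stopping rules, as the paper does.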
