Publication Date
| Publication Date | Records |
| In 2026 | 0 |
| Since 2025 | 8 |
| Since 2022 (last 5 years) | 44 |
| Since 2017 (last 10 years) | 113 |
| Since 2007 (last 20 years) | 302 |
Audience
| Audience | Records |
| Practitioners | 47 |
| Researchers | 34 |
| Teachers | 29 |
| Policymakers | 3 |
| Administrators | 2 |
Location
| Location | Records |
| Australia | 58 |
| Canada | 14 |
| Oregon | 11 |
| Netherlands | 10 |
| Missouri | 9 |
| Turkey | 9 |
| United Kingdom | 8 |
| United States | 8 |
| Massachusetts | 7 |
| Florida | 6 |
| Germany | 6 |
Laws, Policies, & Programs
| Laws, Policies, & Programs | Records |
| Elementary and Secondary… | 16 |
| Individuals with Disabilities… | 8 |
| Elementary and Secondary… | 3 |
| Comprehensive Education… | 2 |
| Education Consolidation… | 1 |
| Family Educational Rights and… | 1 |
| Rehabilitation Act 1973 | 1 |
McKinley, Robert L.; Reckase, Mark D. – 1983
A two-stage study was conducted to compare the ability estimates yielded by tailored testing procedures based on the one-parameter logistic (1PL) and three-parameter logistic (3PL) models. The first stage of the study employed real data, while the second stage employed simulated data. In the first stage, response data for 3,000 examinees were…
Descriptors: Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics), Item Banks
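For readers unfamiliar with the two models compared in the abstract above, a minimal sketch of their item response functions is given below in Python. The parameter names (theta for ability, b for difficulty, a for discrimination, c for the lower asymptote) follow common IRT notation and are not drawn from the paper itself.

```python
import math

def p_1pl(theta, b):
    """One-parameter logistic (Rasch-type) model: probability of a
    correct response given ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def p_3pl(theta, a, b, c):
    """Three-parameter logistic model: adds item discrimination a
    and a lower asymptote c that allows for guessing."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Example: a moderately difficult item and an average-ability examinee.
print(round(p_1pl(theta=0.0, b=0.5), 2))                # 0.38
print(round(p_3pl(theta=0.0, a=1.2, b=0.5, c=0.2), 2))  # 0.48
```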
Holley, Freda M. – 1983
The Austin Independent School District (AISD) became interested in the evaluation and testing skills of its teachers when district and school averages on competency ratings in this area were among the lowest since 1979. School districts, as well as teacher preparation programs, should devote serious attention to the improvement of these teacher skills. In…
Descriptors: Evaluation Methods, Inservice Teacher Education, Item Banks, School Districts
Wood, Lewis J.; Gillis, Rod – 1987
This paper presents the results of a questionnaire sent to 211 Measurement Services Association members. Sixty-four centers responded. The main purpose of the questionnaire was to find out what hardware and software are used by testing centers throughout the country. Results indicate that 52 institutions use mainframe computers, 50 use…
Descriptors: Computer Assisted Testing, Computer Software, Computers, Item Banks
Hathaway, Walter E. – 1986
An ideal system of the National Assessment of Educational Progress (NAEP), from the local school district perspective, must follow several principles based on the 1985 Standards for Educational and Psychological Testing: (1) Testing must be viewed by teachers and students as worthwhile. (2) Test results must be presented in a timely and useful…
Descriptors: Educational Assessment, Educational Testing, Elementary Secondary Education, Item Banks
Doron, Rina – 1984
An alternative procedure for comparing test scores is examined that does not require the use of computers and can therefore be easily employed by classroom teachers. This technique, called the Average System, is a simple way to score examinees using information obtained from a representative sample (or samples). Examinees are ranked on a…
Descriptors: Achievement Tests, Arithmetic, Comparative Analysis, Equated Scores
Martin, Randy – 1988
Reasons for administering tests fall into two categories--decision-making and promoting learning. The two bases of tests are learning objectives and the level of learning at which training is developed. Test development involves a number of steps. The best way to tie objectives to test items is through the use of a table of specifications, which…
Descriptors: Elementary Secondary Education, Item Analysis, Item Banks, Postsecondary Education
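As a concrete illustration of the table of specifications mentioned above, the sketch below crosses hypothetical learning objectives with levels of learning and records how many items to write in each cell; the objectives, levels, and counts are invented for illustration and do not come from the paper.

```python
# A table of specifications: content objectives (rows) crossed with
# levels of learning (columns), each cell holding a planned item count.
# All entries are hypothetical.
blueprint = {
    "Define key measurement terms":   {"recall": 4, "application": 0},
    "Interpret score reports":        {"recall": 2, "application": 6},
    "Construct selection-type items": {"recall": 1, "application": 7},
}

total_items = sum(sum(levels.values()) for levels in blueprint.values())
print(f"Total items planned: {total_items}")  # Total items planned: 20
```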
Tollefson, Nona; Tripp, Alice – 1983
This study compared the item difficulty and item discrimination of three multiple-choice item formats. The formats studied were: a complex alternative (none of the above) as the correct answer, a complex alternative as a foil, and the one-correct-answer format. One hundred four graduate students were randomly assigned to complete…
Descriptors: Analysis of Variance, Difficulty Level, Graduate Students, Higher Education
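The item statistics compared in the study above are standard classical test theory quantities. A minimal sketch of how item difficulty (proportion correct) and item discrimination (point-biserial correlation with total score) are typically computed is shown below, using made-up response data.

```python
import statistics  # statistics.correlation requires Python 3.10+

def item_difficulty(item_scores):
    """Classical difficulty: proportion of examinees answering correctly."""
    return sum(item_scores) / len(item_scores)

def point_biserial(item_scores, total_scores):
    """Classical discrimination: Pearson correlation between the 0/1
    item score and each examinee's total test score."""
    return statistics.correlation(item_scores, total_scores)

# Hypothetical data: one item's 0/1 scores and the matching total scores.
item = [1, 1, 0, 1, 0, 1, 1, 0]
totals = [18, 20, 9, 16, 11, 19, 15, 8]
print(item_difficulty(item))                   # 0.625
print(round(point_biserial(item, totals), 2))  # positive: high scorers tend to answer correctly
```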
Tindal, Gerald; Deno, Stanley L. – 1981
Evidence exists that reading achievement can be measured simply and validly by having students read aloud for one minute from vocabulary lists drawn from their basal reading series. Direct and frequent measurement of student performance using this procedure provides a means for continuously evaluating a student's instructional program. The present…
Descriptors: Elementary Education, Evaluation Methods, Informal Reading Inventories, Item Banks
McBride, James R. – 1979
In an adaptive test, the test administrator chooses test items sequentially during the test, in such a way as to adapt test difficulty to examinee ability as shown during testing. An effectively designed adaptive test can resolve the dilemma inherent in conventional test design. By tailoring tests to individuals, the adaptive test can…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Military Personnel
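One simple way to adapt item difficulty to examinee ability, as described above, is to choose at each step the unanswered item whose difficulty is closest to the current ability estimate. The sketch below shows only that selection rule; the item bank and ability value are purely illustrative, and operational adaptive tests usually select on item information instead.

```python
def next_item(item_bank, answered, theta_hat):
    """Pick the unanswered item whose difficulty is closest to the
    current ability estimate (a simple difficulty-matching rule)."""
    candidates = [(name, b) for name, b in item_bank.items() if name not in answered]
    return min(candidates, key=lambda item: abs(item[1] - theta_hat))

bank = {"item_a": -1.0, "item_b": -0.2, "item_c": 0.4, "item_d": 1.3}  # item difficulties
print(next_item(bank, answered={"item_b"}, theta_hat=0.1))  # ('item_c', 0.4)
```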
Forster, Fred; And Others – 1978
Research on the Rasch model of test and item analysis was applied to tests constructed from reading and mathematics item banks, addressing five practical problems in scaling items and equating test forms. The questions were: (1) Does the Rasch model yield the same scale value regardless of the student sample? (2) How many students are…
Descriptors: Achievement Tests, Difficulty Level, Elementary Secondary Education, Equated Scores
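One of the practical problems listed above, equating test forms, often reduces under the Rasch model to finding a constant shift between two forms' difficulty scales through their common items. The sketch below shows that mean-shift idea with invented difficulty estimates; it is a generic illustration, not the procedure reported in the paper.

```python
# Rasch mean-shift equating through common (anchor) items.
# All difficulty estimates are invented for illustration.
form_a_anchors = {"v1": -0.50, "v2": 0.10, "v3": 0.80}  # anchors calibrated with Form A
form_b_anchors = {"v1": -0.20, "v2": 0.45, "v3": 1.05}  # same anchors calibrated with Form B

shift = sum(form_a_anchors[k] - form_b_anchors[k] for k in form_a_anchors) / len(form_a_anchors)

# Any Form B difficulty can now be expressed on the Form A scale.
form_b_item = 0.60
print(round(form_b_item + shift, 2))  # 0.3
```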
Faggen, Jane – 1978
Formulas are presented for decision reliability and for classification validity for mastery/nonmastery decisions based on criterion referenced tests. Two item parameters are used: the probability of a master answering an item correctly, and the probability of a nonmaster answering an item incorrectly. The theory explores the relationships of…
Descriptors: Bayesian Statistics, Criterion Referenced Tests, Item Analysis, Item Banks
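The two item parameters described above can be turned into classification accuracy figures once a test length and cut score are fixed. The sketch below does this with a simple binomial model and invented values; it illustrates the general idea rather than reproducing the paper's formulas.

```python
from math import comb

def prob_at_or_above_cut(p_correct, n_items, cut):
    """Binomial probability of answering at least `cut` of `n_items`
    correctly when each item is answered correctly with p_correct."""
    return sum(comb(n_items, k) * p_correct**k * (1 - p_correct)**(n_items - k)
               for k in range(cut, n_items + 1))

p_master = 0.85     # probability a master answers an item correctly (illustrative)
q_nonmaster = 0.70  # probability a nonmaster answers an item incorrectly (illustrative)
n_items, cut = 10, 8

masters_classified_correctly = prob_at_or_above_cut(p_master, n_items, cut)
nonmasters_classified_correctly = 1 - prob_at_or_above_cut(1 - q_nonmaster, n_items, cut)
print(round(masters_classified_correctly, 2), round(nonmasters_classified_correctly, 2))
```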
Instructional Objectives Exchange, Los Angeles, CA. – 1974
Objectives, with sample test items and explanations of answers, are presented for instruction in judgment and logic in analyzing fallacies and weaknesses in arguments. This type of material is not usually taught in pre-college curricula, but it has been geared here to the secondary grades. Each fallacy is explained after the stated objective, and answers…
Descriptors: Behavioral Objectives, Cognitive Objectives, Cognitive Tests, Critical Thinking
Ree, Malcolm James – 1976
A method for constructing statistically parallel tests based on the analysis of unique item variance was developed. A test population of 907 basic airmen trainees was required to estimate the angle at which an object in a photograph was viewed, selecting from eight possibilities. A FORTRAN program known as VARSEL was used to rank all the test items…
Descriptors: Comparative Analysis, Computer Programs, Enlisted Personnel, Item Analysis
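VARSEL itself is not reproduced here, but the general task it addresses, dividing a ranked item pool into statistically parallel forms, can be sketched as below: items sorted on some statistic are dealt out in a serpentine pattern so the two forms end up with similar averages. The ranking statistic and item pool are hypothetical.

```python
def split_parallel_forms(items):
    """Distribute items, sorted on a ranking statistic, in a serpentine
    (A-B-B-A) pattern so both forms receive a similar mix of high- and
    low-ranked items. Generic illustration only."""
    ranked = sorted(items, key=lambda it: it[1], reverse=True)
    form_a, form_b = [], []
    for i, item in enumerate(ranked):
        (form_a if i % 4 in (0, 3) else form_b).append(item)
    return form_a, form_b

# Hypothetical (item_id, ranking statistic) pairs.
pool = [("q1", 0.62), ("q2", 0.48), ("q3", 0.55), ("q4", 0.41),
        ("q5", 0.59), ("q6", 0.44), ("q7", 0.52), ("q8", 0.47)]
form_a, form_b = split_parallel_forms(pool)
print([name for name, _ in form_a], [name for name, _ in form_b])
```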
Alvir, Howard P. – 1974
This document defines performance testing as the task of determining how well performance objectives are mastered by learners. Examples are provided for teachers who must conduct performance testing against industrial standards. Stress is laid on communicating and exchanging materials with colleagues. A reporting form is included which shows how…
Descriptors: Computer Oriented Programs, Course Objectives, Criterion Referenced Tests, Curriculum Design
Lippey, Gerald – 1974
Computer-assisted test construction is a simple, inexpensive, and natural complement to computer-based test scoring. Parallel tests can be made for individually paced instruction, and the computer can produce feedback to improve item quality. The adaptability of computer-assisted test construction means that its introduction will likely be accepted by…
Descriptors: Computer Assisted Instruction, Computer Oriented Programs, Computers, Individual Testing


