Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 0
Since 2016 (last 10 years) | 0
Since 2006 (last 20 years) | 1
Descriptor
Achievement Tests | 31
Item Analysis | 31
Testing Problems | 31
Test Validity | 17
Test Construction | 14
Test Items | 13
Elementary Secondary Education | 9
Standardized Tests | 8
Test Bias | 8
Latent Trait Theory | 7
Elementary Education | 6
Source
Journal of Educational… | 2
Educational Measurement:… | 1
Evaluation in Education:… | 1
Journal of Research and… | 1
Nursing Outlook | 1
ProQuest LLC | 1
School Science and Mathematics | 1
Author
Green, Donald Ross | 2
Hoover, H. D. | 2
Jolly, S. Jean | 2
Cahen, Leonard S. | 1
Choppin, Bruce | 1
Diamond, Esther E. | 1
Doolittle, Allen E. | 1
Doron, Rina | 1
Draper, John F. | 1
Durost, Walter N. | 1
Ferguson, Richard L. | 1
Education Level
Elementary Secondary Education | 1
Audience
Researchers | 5
Practitioners | 1
Location
New Hampshire | 1
Laws, Policies, & Programs
Elementary and Secondary… | 1
Emergency School Aid Act 1972 | 1
Kozloff, Allison Burstein – ProQuest LLC, 2009
Comprehensive academic achievement tests are routinely used by school psychologists in psycho-educational assessment batteries to identify learning disabled students. A variety of assessment measures are used across age groups to determine if a discrepancy exists between academic achievement and intellectual functioning; however, among the most…
Descriptors: Intelligence, Educational Assessment, Academic Achievement, Achievement Tests
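As a point of reference for the discrepancy criterion mentioned in this abstract, here is a minimal sketch of a standard-score discrepancy check; the 1.5 SD cutoff and the scale (mean 100, SD 15) are illustrative assumptions, not values taken from the study.

```python
# Minimal sketch of an ability-achievement discrepancy check.
# Assumes both measures are standard scores (mean 100, SD 15) and uses a
# hypothetical 1.5 SD cutoff; neither value comes from the cited study.

SD = 15.0
CUTOFF_SD = 1.5

def discrepancy_flag(ability_score: float, achievement_score: float) -> bool:
    """True when achievement lags measured ability by more than the cutoff."""
    return (ability_score - achievement_score) > CUTOFF_SD * SD

print(discrepancy_flag(110, 82))  # True: 28-point gap exceeds 22.5
print(discrepancy_flag(100, 95))  # False: 5-point gap
```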

Loyd, Brenda H.; Hoover, H. D. – Journal of Educational Measurement, 1980
Three levels of a mathematics computation test were equated using the Rasch model. Sixth, seventh, and eighth graders were administered different levels of the test. Lack of consistency among equatings suggested that the Rasch model did not produce a satisfactory vertical equating of this computation test. (Author/RD)
Descriptors: Ability Grouping, Achievement Tests, Elementary Education, Equated Scores
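For context on the equating result, a minimal sketch of the Rasch item response function and a simple mean-shift linking of separately calibrated difficulties; the two-level data and the linking step are illustrative assumptions, not the authors' procedure.

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """Rasch model probability of a correct response (ability theta, difficulty b)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical difficulties for items shared by two adjacent test levels,
# each level calibrated separately (values are illustrative).
level_6 = {"item_a": -0.40, "item_b": 0.10, "item_c": 0.55}
level_7 = {"item_a": -0.90, "item_b": -0.35, "item_c": 0.05}

# If the Rasch model held, the separate calibrations would differ by a constant
# shift, so linking by the mean difference puts both levels on one scale.
shift = sum(level_6[i] - level_7[i] for i in level_6) / len(level_6)
level_7_linked = {item: b + shift for item, b in level_7.items()}

print(f"linking constant: {shift:.2f}")
print(f"P(correct) at theta=0 for item_b on the linked scale: {rasch_prob(0.0, level_7_linked['item_b']):.2f}")
```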
Doolittle, Allen E. – 1985
Differential item performance (DIP) is discussed as a concept that does not necessarily imply item bias or unfairness to subgroups of examinees. With curriculum-based achievement tests, DIP is presented as a valid reflection of group differences in requisite skills and instruction. Using data from a national testing of the ACT Assessment, this…
Descriptors: Achievement Tests, High Schools, Item Analysis, Mathematics Achievement
Choppin, Bruce; And Others – 1982
A detailed description of five latent structure models of achievement measurement is presented. The first project paper, by David L. McArthur, analyzes the history of mental testing to show how conventional item analysis procedures were developed, and how dissatisfaction with them has led to fragmentation. The range of distinct conceptual and…
Descriptors: Academic Achievement, Achievement Tests, Comparative Analysis, Data Analysis

Garigliano, Leonard J. – School Science and Mathematics, 1975
Responding to publicity about declining computation scores on standardized tests, the author conducted a study comparing October with May testing and timed with untimed tests. He concluded that students today are able to compute but do so more slowly than earlier students and earn higher scores on applications and concepts. (SD)
Descriptors: Achievement Tests, Basic Skills, Elementary School Mathematics, Elementary Secondary Education
Myers, Charles T. – 1978
The viewpoint is expressed that raising test reliability, whether by selecting a more homogeneous set of items, restricting the range of item difficulty to the most efficient level, or increasing the number of items, will not add to test validity, and that there is considerable danger that efforts to increase reliability may…
Descriptors: Achievement Tests, Item Analysis, Multiple Choice Tests, Test Construction
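The reliability-validity tension Myers raises can be put in numbers. A minimal sketch using the Spearman-Brown prophecy formula and the classical ceiling that criterion validity cannot exceed the square root of reliability; the reliability value below is illustrative.

```python
import math

def spearman_brown(reliability: float, length_factor: float) -> float:
    """Projected reliability when a test is lengthened by length_factor."""
    return (length_factor * reliability) / (1.0 + (length_factor - 1.0) * reliability)

r_xx = 0.80                               # illustrative reliability of the original test
r_xx_doubled = spearman_brown(r_xx, 2.0)

# In classical test theory, criterion validity is bounded by sqrt(reliability);
# the ceiling moves only slightly, and nothing guarantees validity rises at all.
print(f"reliability after doubling length: {r_xx_doubled:.3f}")
print(f"validity ceiling before: {math.sqrt(r_xx):.3f}  after: {math.sqrt(r_xx_doubled):.3f}")
```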
Green, Donald Ross – 1976
During the past few years the problem of bias in testing has become an increasingly important issue. In most research, bias refers to the fair use of tests and has thus been defined in terms of an outside criterion measure of the performance being predicted by the test. Recently however, there has been growing interest in assessing bias when such…
Descriptors: Achievement Tests, Item Analysis, Mathematical Models, Minority Groups
Scheuneman, Janice – 1975
In order to screen out items which may be biased against some ethnic group prior to the final selection of items in test construction, a statistical technique for assessing item bias was developed. Based on a theoretical formulation of R. B. Darlington, the method compares the performance of individuals who belong to different ethnic groups, but…
Descriptors: Achievement Tests, Content Analysis, Cultural Influences, Ethnic Groups
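Scheuneman's approach compares item performance across groups matched on overall ability. A rough sketch in that spirit, stratifying on total-score band and summing a chi-square-style index over the correct-response cells; the data and strata are illustrative, and this is not the published procedure.

```python
from collections import defaultdict

# (group, total-score band, answered the studied item correctly) -- illustrative data.
responses = [
    ("A", "low", 1), ("A", "low", 0), ("A", "high", 1), ("A", "high", 1),
    ("B", "low", 0), ("B", "low", 0), ("B", "high", 1), ("B", "high", 0),
]

counts = defaultdict(lambda: [0, 0])  # (group, band) -> [n correct, n total]
for group, band, correct in responses:
    counts[(group, band)][0] += correct
    counts[(group, band)][1] += 1

# Within each ability band, compare each group's correct count with the count
# expected if both groups shared the pooled proportion correct.
index = 0.0
for band in ("low", "high"):
    c_a, n_a = counts[("A", band)]
    c_b, n_b = counts[("B", band)]
    p_pooled = (c_a + c_b) / (n_a + n_b)
    for c, n in ((c_a, n_a), (c_b, n_b)):
        expected = n * p_pooled
        if expected > 0:
            index += (c - expected) ** 2 / expected

print(f"chi-square-style bias index: {index:.2f}")
```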

Kolstad, Rosemarie K.; And Others – Journal of Research and Development in Education, 1983
A study compared college students' performance on complex multiple-choice tests with scores on multiple true-false clusters. Researchers concluded that the multiple-choice tests did not accurately measure students' knowledge and that cueing and guessing led to grade inflation. (PP)
Descriptors: Achievement Tests, Difficulty Level, Guessing (Tests), Higher Education
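The cueing effect the authors describe can be made concrete with a small example. A minimal sketch showing how partial knowledge shrinks the option set of a complex multiple-choice (Type K) item; the option layout is an assumption, not taken from the study.

```python
# A complex multiple-choice (Type K) item asks which of three statements are
# true; the options are combinations of statements. The layout below is a
# common one, used purely as an illustration.
options = {
    "A": {1},        # statement 1 only
    "B": {3},        # statement 3 only
    "C": {1, 2},     # statements 1 and 2
    "D": {2, 3},     # statements 2 and 3
}

# Blind guessing gives 1 chance in 4.
print(f"blind guess: {1 / len(options):.2f}")

# A student who knows only that statement 1 is true can discard every option
# that omits statement 1, doubling the chance of a correct guess (cueing).
consistent = [name for name, stmts in options.items() if 1 in stmts]
print(f"after cueing on statement 1: {1 / len(consistent):.2f}")
```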
Jolly, S. Jean; And Others – 1985
Scores from the Stanford Achievement Tests administered to 50,000 students in Palm Beach County, Florida, were studied in order to determine whether the speeded nature of the reading comprehension subtest was related to inconsistencies in the score profiles. Specifically, the probable effect of random guessing was examined. Reading scores were…
Descriptors: Achievement Tests, Elementary Secondary Education, Guessing (Tests), Item Analysis
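The probable effect of random guessing on unreached items in a speeded subtest can be approximated directly. A minimal sketch of the expected raw-score gain; the subtest length and option count are illustrative, not the Stanford Achievement Test values.

```python
def expected_guessing_gain(items_unreached: int, options_per_item: int) -> float:
    """Expected raw-score points added by marking unreached items at random."""
    return items_unreached / options_per_item

# Illustrative numbers only, not the actual subtest specifications.
subtest_length = 54
items_reached = 40
print(expected_guessing_gain(subtest_length - items_reached, 4))  # 3.5 raw-score points
```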
Wood, Robert – Evaluation in Education: International Progress, 1977
The author surveys literature and practice, primarily in Great Britain and the United States, about multiple-choice testing, comments on criticisms, and defends the state of the art. Various item types, item writing, test instructions and scoring formulas, item analysis, and test construction are discussed. An extensive bibliography is appended.…
Descriptors: Achievement Tests, Item Analysis, Multiple Choice Tests, Scoring Formulas
Floden, Robert E.; And Others – 1978
The authors argue that personnel who select standardized achievement tests have been led to believe that the major achievement test batteries differ very little in the topics they test, but that the content covered by these batteries in fact differs, and that such differences have consequences for instructional content. To test this…
Descriptors: Achievement Tests, Curriculum, Elementary School Mathematics, Grade 4
Green, Donald Ross; Draper, John F. – 1972
This paper considers the question of bias in group-administered academic achievement tests, bias which is inherent in the instruments themselves. A body of data on the test performance of three disadvantaged minority groups--northern urban black, southern rural black, and southwestern Mexican-American--as tryout samples in contrast to…
Descriptors: Achievement Tests, Bias, Comparative Testing, Educational Testing

Plake, Barbara S.; Hoover, H. D. – Journal of Educational Measurement, 1979
An experiment investigated the extent to which the results of out-of-level testing may be biased because the child given an out-of-level test may have had a significantly different curriculum than the children given in-level tests. Item analysis data suggested this was unlikely. (CTM)
Descriptors: Achievement Tests, Elementary Education, Elementary School Curriculum, Grade Equivalent Scores
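The item analysis check described above can be mimicked by comparing relative item difficulty for in-level and out-of-level examinees. A minimal sketch with illustrative p-values, not the authors' data.

```python
# Proportion correct (p-values) on the same items for children tested in level
# and children tested out of level; values are illustrative.
in_level = [0.85, 0.72, 0.64, 0.51, 0.38]
out_of_level = [0.80, 0.70, 0.58, 0.47, 0.33]

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Close agreement in relative item difficulty argues against a curriculum
# effect disadvantaging the out-of-level group.
print(f"item-difficulty agreement: {pearson_r(in_level, out_of_level):.3f}")
```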
Diamond, Esther E. – 1984
The problem of measuring growth across the target grade and, typically, the two adjacent grades, concerns most developers of standardized, norm-referenced achievement tests, particularly at the item selection stage. Opinion is divided on whether to retain or drop items that do not get easier from grade to grade. The controversy has focused on…
Descriptors: Achievement Gains, Achievement Tests, Age Differences, Difficulty Level
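The item-selection question Diamond raises, whether to drop items whose difficulty fails to decrease from grade to grade, can be screened mechanically. A minimal sketch that flags non-monotone items across three grades; the p-values are illustrative.

```python
# Proportion correct for each item at three adjacent grades; values are illustrative.
item_p_values = {
    "item_1": [0.42, 0.55, 0.68],   # grows easier each grade
    "item_2": [0.60, 0.58, 0.63],   # dips at the middle grade
    "item_3": [0.35, 0.35, 0.34],   # essentially flat
}

def grows_across_grades(p_values) -> bool:
    """True when the item gets easier at every successive grade."""
    return all(later > earlier for earlier, later in zip(p_values, p_values[1:]))

for item, ps in item_p_values.items():
    if not grows_across_grades(ps):
        print(f"{item}: difficulty not monotone across grades {ps}")
```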