Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 1
Since 2006 (last 20 years): 6
Descriptor
Criterion Referenced Tests: 64
Item Analysis: 64
Test Reliability: 64
Test Validity: 47
Test Construction: 46
Norm Referenced Tests: 18
Test Items: 17
Statistical Analysis: 12
Test Interpretation: 12
Achievement Tests: 9
Mathematical Models: 9
Author
Hambleton, Ronald K.: 3
Brennan, Robert L.: 2
Haladyna, Tom: 2
Popham, W. James: 2
Roid, Gale: 2
Woodson, M. I. Charles E.: 2
Baker, Eva L.: 1
Bashaw, W. L.: 1
Bernknopf, Stanley: 1
Blalock, Lydia: 1
Brekke, Milo L.: 1
Education Level
Elementary Secondary Education: 2
Higher Education: 2
Postsecondary Education: 2
Elementary Education: 1
High Schools: 1
Audience
Practitioners: 3
Researchers: 2
Students: 1
Teachers: 1
Laws, Policies, & Programs
Individuals with Disabilities…: 1
No Child Left Behind Act 2001: 1
Assessments and Surveys
Comprehensive Tests of Basic…: 1
General Educational…: 1
Kaufman Test of Educational…: 1
SRA Achievement Series: 1
Frame, Laura B.; Vidrine, Stephanie M.; Hinojosa, Ryan – Journal of Psychoeducational Assessment, 2016
The Kaufman Test of Educational Achievement, Third Edition (KTEA-3) is a revised and updated comprehensive academic achievement test (Kaufman & Kaufman, 2014). Authored by Drs. Alan and Nadeen Kaufman and published by Pearson, the KTEA-3 remains an individual achievement test normed for individuals ages 4 through 25 years, or for those in…
Descriptors: Achievement Tests, Elementary Secondary Education, Test Validity, Test Reliability
Cunningham, James W.; Mesmer, Heidi Anne – Elementary School Journal, 2014
Common Core Reading Standard 10 not only prescribes the difficulty of texts students should become able to read, but also the difficulty diet of texts schools should ask their students to read across the school year. The use of quantitative text-assessment tools in the implementation of this standard warrants an examination into the validity of…
Descriptors: Difficulty Level, Academic Standards, State Standards, Statistical Analysis
Spooren, Pieter; Brockx, Bert; Mortelmans, Dimitri – Review of Educational Research, 2013
This article provides an extensive overview of the recent literature on student evaluation of teaching (SET) in higher education. The review is based on the SET meta-validation model, drawing upon research reports published in peer-reviewed journals since 2000. Through the lens of validity, we consider both the more traditional research themes in…
Descriptors: Student Evaluation of Teacher Performance, Teacher Evaluation, Test Validity, Educational Research
Setzer, J. Carl – GED Testing Service, 2009
The GED[R] English as a Second Language (GED ESL) Test was designed to serve as an adjunct to the GED test battery when an examinee takes either the Spanish- or French-language version of the tests. The GED ESL Test is a criterion-referenced, multiple-choice instrument that assesses the functional, English reading skills of adults whose first…
Descriptors: Language Tests, High School Equivalency Programs, Psychometrics, Reading Skills
Hall, John D.; Howerton, D. Lynn; Jones, Craig H. – Research in the Schools, 2008
The No Child Left Behind Act and the accountability movement in public education caused many states to develop criterion-referenced academic achievement tests. Scores from these tests are often used to make high stakes decisions. Even so, these tests typically do not receive independent psychometric scrutiny. We evaluated the 2005 Arkansas…
Descriptors: Criterion Referenced Tests, Achievement Tests, High Stakes Tests, Public Education

Woodson, M. I. Chas. E. – Journal of Educational Measurement, 1974
Descriptors: Criterion Referenced Tests, Item Analysis, Test Construction, Test Reliability

Crehan, Kevin D. – Journal of Educational Measurement, 1974
Various item selection techniques are compared on criterion-referenced reliability and validity. Techniques compared include three nominal criterion-referenced methods, a traditional point biserial selection, teacher selection, and random selection. (Author)
Descriptors: Comparative Analysis, Criterion Referenced Tests, Item Analysis, Item Banks
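The "traditional point biserial selection" mentioned in this abstract correlates a dichotomous (0/1) item score with the total test score. A minimal sketch of that computation follows; the function name and data handling are illustrative assumptions, not taken from the article.

import numpy as np

def point_biserial(item_scores, total_scores):
    # Point-biserial correlation between a 0/1 item and total test scores.
    item_scores = np.asarray(item_scores, dtype=float)
    total_scores = np.asarray(total_scores, dtype=float)
    p = item_scores.mean()                       # proportion answering the item correctly
    q = 1.0 - p
    mean_correct = total_scores[item_scores == 1].mean()
    mean_incorrect = total_scores[item_scores == 0].mean()
    sd = total_scores.std()                      # population SD of total scores
    return (mean_correct - mean_incorrect) / sd * np.sqrt(p * q)

Under a traditional selection rule, items with higher point-biserial values would be retained.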

Haladyna, Thomas Michael – Journal of Educational Measurement, 1974
Classical test construction and analysis procedures are applicable and appropriate for use with criterion referenced tests when samples of both mastery and nonmastery examinees are employed. (Author/BB)
Descriptors: Criterion Referenced Tests, Item Analysis, Mastery Tests, Test Construction

Woodson, M. I. Charles E. – Journal of Educational Measurement, 1974
The basis for selection of the calibration sample determines the kind of scale which will be developed. A random sample from a population of individuals leads to a norm-referenced scale, and a sample representative of abilities across a range of characteristics leads to a criterion-referenced scale. (Author/BB)
Descriptors: Criterion Referenced Tests, Discriminant Analysis, Item Analysis, Test Construction
Ivens, Stephen H. – 1972
A discussion of criterion-referenced measures is presented. Two characteristics define the criterion-referenced measure: the presence of a performance criterion, and test items keyed to a set of behavioral objectives. The performance criterion, in an educational setting, is usually a relative standard of performance. There are two ways of…
Descriptors: Behavioral Objectives, Criterion Referenced Tests, Item Analysis, Performance Criteria
The Effect of Violating the Assumption of Equal Item Means in Estimating the Livingston Coefficient.
Lovett, Hubert T. – Educational and Psychological Measurement, 1978
The validity of five methods of estimating the reliability of criterion-referenced tests was evaluated across nine conditions of variability among item means. The results were analyzed by analysis of variance, the Newman-Keuls test, and a nonparametric procedure. There was a tendency for all of the methods to be conservative. (Author/JKS)
Descriptors: Analysis of Variance, Criterion Referenced Tests, Item Analysis, Nonparametric Statistics
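For background on the statistic named in the title, Livingston's criterion-referenced reliability coefficient for a test X with criterion (cut) score C is commonly written in terms of the classical reliability r_XX' and the observed-score mean and variance as

K^2(X, T_X) = \frac{r_{XX'} \sigma_X^2 + (\mu_X - C)^2}{\sigma_X^2 + (\mu_X - C)^2}

The five methods evaluated in the abstract are, presumably, different ways of estimating this quantity from sample data.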
Woodson, M. I. Charles E.
The item (difficulty and discrimination) and test (reliability and validity) statistics in classical test theory are highly dependent upon the calibration sample of individuals used. The estimates of item and test parameters in classical test theory are valid within a range of interest along the characteristic measured. Generally, this range of…
Descriptors: Criterion Referenced Tests, Item Analysis, Research Reports, Statistics
Hymel, Glenn M.; Gaines, W. George – 1978
The evaluation model for mastery testing proposed by Emrick (1971) represents an empirical approach to determining the most appropriate mastery criterion score in testing situations that involve decisions about whether students should continue progressing or be recycled within a given learning sequence.…
Descriptors: Cost Effectiveness, Criterion Referenced Tests, Cutting Scores, Item Analysis

Millman, Jason; Popham, W. James – Journal of Educational Measurement, 1974
The use of the regression equation derived from the Anglo-American sample to predict grades of Mexican-American students resulted in overprediction. An examination of the standardized regression weights revealed a significant difference in the weight given to the Scholastic Aptitude Test Mathematics Score. (Author/BB)
Descriptors: Criterion Referenced Tests, Item Analysis, Predictive Validity, Scores
Hsu, Tse-Chi – 1971
A good criterion-referenced test item is defined as one that an individual answers correctly if he has mastered the criterion behavior represented by the item and answers incorrectly if he has not. Therefore, a good discriminating item for a criterion-referenced test is one that has a larger proportion of correct…
Descriptors: Behavioral Objectives, Correlation, Criterion Referenced Tests, Individualized Instruction
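One common way to operationalize the kind of discrimination this definition implies is a difference index: the proportion of masters answering the item correctly minus the proportion of non-masters answering it correctly. A minimal sketch follows; the function name and example data are illustrative assumptions, not from the paper.

def cr_discrimination(master_responses, nonmaster_responses):
    # Difference index for a single 0/1-scored item:
    # proportion correct among masters minus proportion correct among non-masters.
    p_master = sum(master_responses) / len(master_responses)
    p_nonmaster = sum(nonmaster_responses) / len(nonmaster_responses)
    return p_master - p_nonmaster

# Example: cr_discrimination([1, 1, 1, 0], [0, 1, 0, 0]) returns 0.75 - 0.25 = 0.5,
# indicating the item separates masters from non-masters fairly well.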