Showing 1 to 15 of 21 results
Peer reviewed
Wainer, Howard – Journal of Educational Measurement, 1986
An example demonstrates that summary statistics commonly used to measure test quality can be seriously misleading, and that summary statistics for the whole test are not sufficient for judging its quality. (Author/LMO)
Descriptors: Correlation, Item Analysis, Statistical Bias, Statistical Studies
Peer reviewed
Cudeck, Robert – Journal of Educational Measurement, 1980
Methods for evaluating the consistency of responses to test items were compared. When a researcher is unwilling to make the assumptions of classical test theory, has only a small number of items, or is in a tailored testing context, Cliff's dominance indices may be useful. (Author/CTM)
Descriptors: Error Patterns, Item Analysis, Test Items, Test Reliability
Peer reviewed
Woodson, M. I. Charles E. – Journal of Educational Measurement, 1974
Descriptors: Criterion Referenced Tests, Item Analysis, Test Construction, Test Reliability
Peer reviewed
Crehan, Kevin D. – Journal of Educational Measurement, 1974
Various item selection techniques are compared on criterion-referenced reliability and validity. Techniques compared include three nominal criterion-referenced methods, a traditional point biserial selection, teacher selection, and random selection. (Author)
Descriptors: Comparative Analysis, Criterion Referenced Tests, Item Analysis, Item Banks
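For the traditional point biserial selection mentioned in the entry above, a minimal sketch of the index in Python (the function and variable names are illustrative, not taken from the study):

    import numpy as np

    def point_biserial(item_scores, total_scores):
        """Correlation between a dichotomous item (scored 0/1) and the total test score."""
        item = np.asarray(item_scores, dtype=float)
        total = np.asarray(total_scores, dtype=float)
        p = item.mean()                              # proportion answering the item correctly
        q = 1.0 - p
        mean_correct = total[item == 1].mean()       # mean total score of those answering correctly
        return (mean_correct - total.mean()) / total.std() * np.sqrt(p / q)

Items with larger values discriminate more sharply between high- and low-scoring examinees.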
Peer reviewed
Whitely, Susan E.; Dawis, Rene V. – Journal of Educational Measurement, 1974
Descriptors: Error of Measurement, Item Analysis, Matrices, Measurement Techniques
Peer reviewed
Haladyna, Thomas Michael – Journal of Educational Measurement, 1974
Classical test construction and analysis procedures are applicable and appropriate for use with criterion referenced tests when samples of both mastery and nonmastery examinees are employed. (Author/BB)
Descriptors: Criterion Referenced Tests, Item Analysis, Mastery Tests, Test Construction
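As a hedged illustration of the kind of item statistic that uses both mastery and nonmastery samples, a difference-in-difficulty discrimination index in Python (a sketch assuming 0/1 scored response matrices; not necessarily the procedure Haladyna describes):

    import numpy as np

    def mastery_discrimination(mastery_responses, nonmastery_responses):
        """Per-item difference in proportion correct between a mastery sample and
        a nonmastery sample (rows = examinees, columns = items, scored 0/1)."""
        p_mastery = np.asarray(mastery_responses, dtype=float).mean(axis=0)
        p_nonmastery = np.asarray(nonmastery_responses, dtype=float).mean(axis=0)
        return p_mastery - p_nonmastery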
Peer reviewed
Woodson, M. I. Charles E. – Journal of Educational Measurement, 1974
The basis for selection of the calibration sample determines the kind of scale which will be developed. A random sample from a population of individuals leads to a norm-referenced scale, and a sample chosen to represent a range of abilities or characteristics leads to a criterion-referenced scale. (Author/BB)
Descriptors: Criterion Referenced Tests, Discriminant Analysis, Item Analysis, Test Construction
Peer reviewed
Knapp, Thomas R. – Journal of Educational Measurement, 1977
The test-retest reliability of a single dichotomous item is discussed. Various indices are derived for summarizing the stability of a dichotomy, based on the concept of Platonic true scores. Both open-ended and multiple choice items are considered. (Author/JKS)
Descriptors: Correlation, Elementary Education, Item Analysis, Response Style (Tests)
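Two common ways to summarize the stability of a single dichotomous item across two administrations are sketched below in Python; these are standard indices (proportion agreement and the phi coefficient), not necessarily the Platonic-true-score indices Knapp derives:

    import numpy as np

    def dichotomy_stability(time1, time2):
        """Stability of one 0/1 item given two administrations to the same examinees."""
        t1 = np.asarray(time1, dtype=float)
        t2 = np.asarray(time2, dtype=float)
        agreement = np.mean(t1 == t2)               # proportion giving the same response twice
        phi = np.corrcoef(t1, t2)[0, 1]             # Pearson r on 0/1 data equals phi
        return agreement, phi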
Peer reviewed
Terwilliger, James S.; Lele, Kaustubh – Journal of Educational Measurement, 1979
Different indices for the internal consistency, reproducibility, or homogeneity of a test are based upon highly similar conceptual frameworks. Illustrations are presented to demonstrate how the maximum and minimum values of KR20 are influenced by test difficulty and the shape of the distribution of test scores. (Author/CTM)
Descriptors: Difficulty Level, Item Analysis, Mathematical Formulas, Statistical Analysis
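For reference, a minimal KR20 computation in Python (a sketch assuming a 0/1 response matrix with rows as examinees and columns as items):

    import numpy as np

    def kr20(responses):
        """Kuder-Richardson formula 20 internal-consistency estimate."""
        x = np.asarray(responses, dtype=float)
        k = x.shape[1]                              # number of items
        p = x.mean(axis=0)                          # item difficulties (proportion correct)
        q = 1.0 - p
        total_var = x.sum(axis=1).var()             # variance of total scores
        return (k / (k - 1.0)) * (1.0 - (p * q).sum() / total_var)

The dependence on the item difficulties p and on the total-score variance is what ties the attainable maximum and minimum of KR20 to test difficulty and to the shape of the score distribution.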
Peer reviewed
Board, Cynthia; Whitney, Douglas R. – Journal of Educational Measurement, 1972
For the principles studied here, poor item-writing practices serve to obscure (or attenuate) differences between good and poor students. (Authors)
Descriptors: College Students, Item Analysis, Multiple Choice Tests, Test Construction
Peer reviewed
Beuchert, A. Kent; Mendoza, Jorge L. – Journal of Educational Measurement, 1979
Ten item discrimination indices were compared across a variety of item analysis situations, based on the validities of tests constructed by using each index to select 40 items from a 100-item pool. Item score data were generated by a computer program and included a simulation of guessing. (Author/CTM)
Descriptors: Item Analysis, Simulation, Statistical Analysis, Test Construction
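A hedged sketch of how item score data with guessing might be simulated, in the spirit of the entry above (the generating model here is a simple logistic-plus-guessing assumption, not the authors' program):

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_responses(n_examinees=500, n_items=100, guess=0.2):
        """Simulate 0/1 item scores with a constant chance of a correct guess."""
        theta = rng.normal(size=(n_examinees, 1))       # examinee abilities
        b = rng.normal(size=(1, n_items))               # item difficulties
        p_know = 1.0 / (1.0 + np.exp(-(theta - b)))     # logistic probability of knowing the answer
        p = guess + (1.0 - guess) * p_know              # guessing floor
        return (rng.random((n_examinees, n_items)) < p).astype(int)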
Peer reviewed
Millman, Jason; Popham, W. James – Journal of Educational Measurement, 1974
The use of the regression equation derived from the Anglo-American sample to predict grades of Mexican-American students resulted in overprediction. An examination of the standardized regression weights revealed a significant difference in the weight given to the Scholastic Aptitude Test Mathematics Score. (Author/BB)
Descriptors: Criterion Referenced Tests, Item Analysis, Predictive Validity, Scores
Peer reviewed
Whitely, Susan E. – Journal of Educational Measurement, 1977
A debate concerning specific issues and the general usefulness of the Rasch latent trait test model is continued. Methods of estimation, necessary sample size, and the applicability of the model are discussed. (JKS)
Descriptors: Error of Measurement, Item Analysis, Mathematical Models, Measurement
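For readers unfamiliar with the model under debate in this exchange, the Rasch model expresses the probability of a correct response as a logistic function of the difference between person ability and item difficulty; a brief Python sketch:

    import numpy as np

    def rasch_probability(theta, b):
        """Rasch model: P(correct) given ability theta and item difficulty b."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))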
Peer reviewed
Rudner, Lawrence M. – Journal of Educational Measurement, 1983
Nine indices for assessing the accuracy of an individual's test score were evaluated using simulated item responses to a commercial and a classroom test. The indices appear capable of identifying relatively high proportions of examinees with spurious total scores. (Author/PN)
Descriptors: Correlation, Item Analysis, Latent Trait Theory, Measurement Techniques
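One widely used index of score accuracy (appropriateness) is the standardized log-likelihood statistic lz; a Python sketch is given below as an illustration, without implying it is among the nine indices Rudner evaluated:

    import numpy as np

    def lz_person_fit(u, p):
        """Standardized log-likelihood person-fit index for one examinee.
        u: 0/1 item responses; p: model-implied probabilities of a correct response."""
        u = np.asarray(u, dtype=float)
        p = np.asarray(p, dtype=float)
        loglik = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))
        expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
        variance = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
        return (loglik - expected) / np.sqrt(variance)

Large negative values flag response patterns that are unlikely under the model, that is, possibly spurious total scores.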
Peer reviewed
Wright, Benjamin D. – Journal of Educational Measurement, 1977
Statements made in a previous article of this journal concerning the Rasch latent trait test model are questioned. Methods of estimation, necessary sample sizes, several formulae, and the general usefulness of the Rasch model are discussed. (JKS)
Descriptors: Computers, Error of Measurement, Item Analysis, Mathematical Models
Pages: 1 | 2