Showing 1 to 15 of 58 results
Peer reviewed
Davison, Mark L.; Davenport, Ernest C., Jr.; Chang, Yu-Feng; Vue, Kory; Su, Shiyang – Journal of Educational Measurement, 2015
Criterion-related profile analysis (CPA) can be used to assess whether subscores of a test or test battery account for more criterion variance than does a single total score. Application of CPA to subscore evaluation is described, compared to alternative procedures, and illustrated using SAT data. Considerations other than validity and reliability…
Descriptors: Criterion Referenced Tests, Scores, Affirmative Action, Prediction
Peer reviewed
Puhan, Gautam – Journal of Educational Measurement, 2010
In this study I compared results of chained linear, Tucker, and Levine observed-score equatings under conditions where the new- and old-form samples were similar in ability and when they differed in ability. The length of the anchor test was also varied to examine its effect on the three equating methods. The three equating…
Descriptors: Testing, Equated Scores, Comparative Analysis, Causal Models
Peer reviewed
Moses, Tim; Holland, Paul W. – Journal of Educational Measurement, 2010
In this study, eight statistical strategies were evaluated for selecting the parameterizations of loglinear models for smoothing the bivariate test score distributions used in nonequivalent groups with anchor test (NEAT) equating. Four of the strategies were based on significance tests of chi-square statistics (Likelihood Ratio, Pearson,…
Descriptors: Equated Scores, Models, Statistical Distributions, Statistical Analysis
Peer reviewed
Kim, Sooyeon; Livingston, Samuel A. – Journal of Educational Measurement, 2010
Score equating based on small samples of examinees is often inaccurate for the examinee populations. We conducted a series of resampling studies to investigate the accuracy of five methods of equating in a common-item design. The methods were chained equipercentile equating of smoothed distributions, chained linear equating, chained mean equating,…
Descriptors: Equated Scores, Test Items, Item Sampling, Item Response Theory
Peer reviewed
Swaminathan, Hariharan; And Others – Journal of Educational Measurement, 1974
It is proposed that the reliability of criterion-referenced test scores be defined in terms of the consistency of the decision-making process across repeated administrations of the test. (Author/RC)
Descriptors: Criterion Referenced Tests, Decision Making, Statistical Analysis, Test Reliability
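The decision-consistency notion in the abstract above can be illustrated with a minimal sketch: classify each examinee as master or non-master on two administrations and take the proportion classified the same way both times. The scores and cutoff below are invented for illustration, and this is only the basic agreement proportion, not Swaminathan et al.'s full formulation.

```python
# Toy sketch: decision consistency across two test administrations.
# Scores and the cutoff are invented for illustration.

def consistency(scores1, scores2, cutoff):
    """Proportion of examinees classified the same way
    (master / non-master) on both administrations."""
    agree = sum(
        (a >= cutoff) == (b >= cutoff)
        for a, b in zip(scores1, scores2)
    )
    return agree / len(scores1)

form_a = [12, 18, 9, 15, 20, 7, 16, 11]
form_b = [14, 17, 8, 13, 19, 10, 15, 12]
print(consistency(form_a, form_b, cutoff=12))  # 7 of 8 examinees agree
```

Only the last examinee (11 vs. 12) changes classification, so the consistency here is 7/8.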
Peer reviewed
Brennan, Robert L.; Kane, Michael T. – Journal of Educational Measurement, 1977
An index for the dependability of mastery tests is described. Assumptions necessary for the index and the mathematical development of the index are provided. (Author/JKS)
Descriptors: Criterion Referenced Tests, Mastery Tests, Mathematical Models, Test Reliability
Peer reviewed
Feldt, Leonard S. – Journal of Educational Measurement, 1996
A relatively simple method is developed to obtain confidence intervals for a student's proportion of domain mastery in criterion-referenced or mastery measurement situations. The method uses the binomial distribution as a model for the student's scores under hypothetically repeated assessments, and it makes use of widely available "F"…
Descriptors: Criterion Referenced Tests, Equations (Mathematics), Models, Scores
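The interval Feldt describes is, in effect, an exact binomial confidence interval for the student's domain-mastery proportion. His paper expresses it through F-distribution percentiles; the sketch below reaches an equivalent exact (Clopper-Pearson-style) interval by bisection using only the standard library. The numbers (40 items, 32 correct) are invented for illustration.

```python
# Exact two-sided binomial confidence interval for a student's
# domain-mastery proportion, found by bisection (stdlib only).
from math import comb

def binom_tail_ge(x, n, p):
    """P(X >= x) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x, n + 1))

def clopper_pearson(x, n, alpha=0.05):
    """Exact (1 - alpha) interval for the success proportion."""
    def solve(f, target):
        # f is increasing in p; bisect on [0, 1] until f(p) = target.
        lo, hi = 0.0, 1.0
        for _ in range(100):
            mid = (lo + hi) / 2
            if f(mid) < target:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    lower = 0.0 if x == 0 else solve(
        lambda p: binom_tail_ge(x, n, p), alpha / 2)
    upper = 1.0 if x == n else solve(
        lambda p: binom_tail_ge(x + 1, n, p), 1 - alpha / 2)
    return lower, upper

print(clopper_pearson(32, 40))  # interval around the observed 0.80
```

The lower endpoint is the proportion at which observing 32 or more correct out of 40 would have probability α/2, and symmetrically for the upper endpoint.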
Peer reviewed
Huynh, Huynh – Journal of Educational Measurement, 1976
Within the beta-binomial Bayesian framework, procedures are described for the evaluation of the kappa index of reliability on the basis of one administration of a domain-referenced test. Major factors affecting this index include cutoff score, test score variability and test length. Empirical data which substantiate some theoretical trends deduced…
Descriptors: Criterion Referenced Tests, Decision Making, Mathematical Models, Probability
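The kappa index referred to above is chance-corrected decision agreement. As a toy illustration only: from a 2x2 table of mastery decisions over two administrations, kappa is the observed agreement minus the agreement expected by chance, rescaled. The counts below are invented, and Huynh's contribution is estimating this quantity from a single administration via a beta-binomial model, which this sketch does not attempt.

```python
# Toy sketch: kappa (chance-corrected agreement) for mastery decisions
# from a 2x2 table over two administrations.  Counts are invented.

def kappa_2x2(both_master, master_then_not, not_then_master, both_not):
    n = both_master + master_then_not + not_then_master + both_not
    p0 = (both_master + both_not) / n                # observed agreement
    p_master1 = (both_master + master_then_not) / n  # marginal, admin 1
    p_master2 = (both_master + not_then_master) / n  # marginal, admin 2
    pc = p_master1 * p_master2 + (1 - p_master1) * (1 - p_master2)
    return (p0 - pc) / (1 - pc)

print(kappa_2x2(50, 10, 8, 32))
```

As the abstract notes, the cutoff score, score variability, and test length all move this index by shifting the marginals and the chance-agreement term.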
Peer reviewed
Subkoviak, Michael J. – Journal of Educational Measurement, 1976
A number of different reliability coefficients have recently been proposed for tests used to differentiate between groups such as masters and nonmasters. One promising index is the proportion of students in a class who are consistently assigned to the same mastery group across two testings. The present paper proposes a single test administration…
Descriptors: Criterion Referenced Tests, Mastery Tests, Mathematical Models, Probability
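The single-administration idea can be sketched under a simple binomial model: estimate each examinee's true proportion from the one observed score, compute the chance that two parallel administrations would classify that examinee the same way, and average over the class. The scores and cutoff below are invented, and the details are simplified relative to Subkoviak's actual proposal.

```python
# Minimal sketch: estimating decision consistency from one
# administration under a per-examinee binomial model.
# Scores and cutoff are invented for illustration.
from math import comb

def pass_prob(p, n, cutoff):
    """P(X >= cutoff) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(cutoff, n + 1))

def estimated_consistency(scores, n_items, cutoff):
    total = 0.0
    for x in scores:
        p = x / n_items                     # examinee's estimated true proportion
        q = pass_prob(p, n_items, cutoff)   # chance of a "master" decision
        total += q * q + (1 - q) * (1 - q)  # same decision on both testings
    return total / len(scores)

scores = [18, 14, 9, 16, 12, 19, 7, 15]
print(estimated_consistency(scores, n_items=20, cutoff=12))
```

Each examinee's contribution, q² + (1 − q)², is at least 0.5 and is largest for examinees far from the cutoff, which is why scores near the cutoff drag the index down.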
Peer reviewed
Woodson, M. I. Chas. E. – Journal of Educational Measurement, 1974
Descriptors: Criterion Referenced Tests, Item Analysis, Test Construction, Test Reliability
Peer reviewed
Livingston, Samuel A. – Journal of Educational Measurement, 1973
This article comments on a study by Harris, who presented formulas for the variance of errors of estimation (of a true score from an observed score) and the variance of errors of prediction (of an observed score from an observed score on a parallel test). (Author/RK)
Descriptors: Criterion Referenced Tests, Measurement, Norm Referenced Tests, Test Reliability
Peer reviewed
Cox, Richard C.; Sterrett, Barbara G. – Journal of Educational Measurement, 1970
A method for obtaining criterion-referenced information from standardized tests is proposed. (TA)
Descriptors: Course Objectives, Criterion Referenced Tests, Standardized Tests, Test Interpretation
Peer reviewed
Algina, James; Noe, Michael J. – Journal of Educational Measurement, 1978
A computer simulation study was conducted to investigate Subkoviak's index of reliability for criterion-referenced tests, called the coefficient of agreement. Results indicate that the index can be adequately estimated. (JKS)
Descriptors: Criterion Referenced Tests, Mastery Tests, Measurement, Test Reliability
Peer reviewed
Subkoviak, Michael J. – Journal of Educational Measurement, 1978
Four different methods for estimating the proportions of testees properly classified as having mastered or not mastered test content are examined, using data from the Scholastic Aptitude Test. All four methods prove reasonably accurate and all show some bias under certain conditions. (JKS)
Descriptors: Bias, Criterion Referenced Tests, Mastery Tests, Measurement
Peer reviewed
Ebel, Robert L. – Journal of Educational Measurement, 1973
The author calls attention to some serious problems concerning the general goals of education and suggests that these problems are left untouched by most of the newer developments in the specification of educational objectives. (Author/RK)
Descriptors: Behavioral Objectives, Criterion Referenced Tests, Educational Objectives, Evaluation