Showing all 13 results
Peer reviewed
Brennan, Robert L.; Kane, Michael T. – Journal of Educational Measurement, 1977
An index of the dependability of mastery tests is described. The assumptions the index requires and its mathematical development are provided. (Author/JKS)
Descriptors: Criterion Referenced Tests, Mastery Tests, Mathematical Models, Test Reliability
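The index Brennan and Kane propose is commonly written, in later generalizability-theory notation (the symbols below follow that convention and are not quoted from the 1977 article), with lambda the cutoff on the proportion-correct scale, sigma^2(p) the universe-score variance, mu the mean, and sigma^2(Delta) the absolute-error variance:

$$
\Phi(\lambda) \;=\; \frac{\sigma^2(p) + (\mu - \lambda)^2}{\sigma^2(p) + (\mu - \lambda)^2 + \sigma^2(\Delta)}
$$

The index rises toward 1 as universe-score variance and the squared distance of the group mean from the cutoff grow relative to absolute error, which is what distinguishes it from a norm-referenced reliability coefficient.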
Peer reviewed
Gardner, P. L. – Journal of Educational Measurement, 1970
Descriptors: Error of Measurement, Mathematical Models, Statistical Analysis, Test Reliability
Peer reviewed
Huynh, Huynh – Journal of Educational Measurement, 1976
Within the beta-binomial Bayesian framework, procedures are described for the evaluation of the kappa index of reliability on the basis of one administration of a domain-referenced test. Major factors affecting this index include cutoff score, test score variability and test length. Empirical data which substantiate some theoretical trends deduced…
Descriptors: Criterion Referenced Tests, Decision Making, Mathematical Models, Probability
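A minimal sketch of the kind of single-administration procedure the abstract describes, under the stated beta-binomial assumptions. The function name, the moment estimators via KR-21, and the use of SciPy are illustrative choices, not a transcription of Huynh's algorithm:

```python
import numpy as np
from scipy.special import comb, betaln

def huynh_kappa(scores, n_items, cutoff):
    """Single-administration estimate of the agreement index p and kappa for a
    mastery test, assuming number-correct scores follow a beta-binomial model.

    scores  : observed number-correct scores
    n_items : test length
    cutoff  : mastery cutoff (score >= cutoff counts as mastery)
    """
    x = np.asarray(scores, dtype=float)
    mu, var = x.mean(), x.var(ddof=1)

    # KR-21 reliability; under the beta-binomial model it estimates the
    # parallel-forms correlation n / (alpha + beta + n).
    kr21 = (n_items / (n_items - 1.0)) * (1.0 - mu * (n_items - mu) / (n_items * var))
    a = mu * (1.0 / kr21 - 1.0)              # beta-distribution alpha
    b = (n_items - mu) * (1.0 / kr21 - 1.0)  # beta-distribution beta

    # Joint distribution of scores on two hypothetical parallel forms: a
    # bivariate beta-binomial obtained by integrating out the true proportion.
    s = np.arange(n_items + 1)
    joint = np.zeros((n_items + 1, n_items + 1))
    for x1 in s:
        for x2 in s:
            joint[x1, x2] = (comb(n_items, x1) * comb(n_items, x2) *
                             np.exp(betaln(a + x1 + x2, b + 2 * n_items - x1 - x2)
                                    - betaln(a, b)))

    passing = s >= cutoff
    p_agree = (joint[np.ix_(passing, passing)].sum() +
               joint[np.ix_(~passing, ~passing)].sum())
    marg_pass = joint[passing, :].sum()                 # marginal P(mastery)
    p_chance = marg_pass ** 2 + (1 - marg_pass) ** 2    # chance agreement
    kappa = (p_agree - p_chance) / (1 - p_chance)
    return p_agree, kappa
```

Because the joint distribution is that of two hypothetical parallel forms sharing the same true proportion-correct, agreement can be evaluated without a second testing; moving the cutoff toward the score mean, shortening the test, or reducing score variability generally lowers the index, in line with the factors the abstract lists.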
Peer reviewed
Subkoviak, Michael J. – Journal of Educational Measurement, 1976
A number of different reliability coefficients have recently been proposed for tests used to differentiate between groups such as masters and nonmasters. One promising index is the proportion of students in a class that are consistently assigned to the same mastery group across two testings. The present paper proposes a single test administration…
Descriptors: Criterion Referenced Tests, Mastery Tests, Mathematical Models, Probability
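The index Subkoviak starts from is straightforward when two testings are available; the sketch below (variable names and the chance-corrected kappa are illustrative additions, not the paper's single-administration estimator) shows the raw agreement proportion and its corrected form:

```python
import numpy as np

def classification_consistency(scores_1, scores_2, cutoff):
    """Proportion of examinees given the same mastery classification on two
    testings, plus the chance-corrected kappa. Illustrative sketch only."""
    m1 = np.asarray(scores_1) >= cutoff   # mastery decision, administration 1
    m2 = np.asarray(scores_2) >= cutoff   # mastery decision, administration 2

    p_agree = np.mean(m1 == m2)           # raw agreement proportion
    # Chance agreement from the marginal mastery rates of each administration.
    p_chance = m1.mean() * m2.mean() + (1 - m1.mean()) * (1 - m2.mean())
    kappa = (p_agree - p_chance) / (1 - p_chance)
    return p_agree, kappa
```

What the paper itself proposes is an estimate of this proportion from a single test administration.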
Peer reviewed
Peng, Chao-Ying J.; Subkoviak, Michael J. – Journal of Educational Measurement, 1980
Huynh (1976) suggested a method of approximating the reliability coefficient of a mastery test. The present study examines the accuracy of Huynh's approximation and also describes a computationally simpler approximation which appears to be generally more accurate than the former. (Author/RL)
Descriptors: Error of Measurement, Mastery Tests, Mathematical Models, Statistical Analysis
Peer reviewed
Whitely, Susan E. – Journal of Educational Measurement, 1977
A debate concerning specific issues and the general usefulness of the Rasch latent trait test model is continued. Methods of estimation, necessary sample size, and the applicability of the model are discussed. (JKS)
Descriptors: Error of Measurement, Item Analysis, Mathematical Models, Measurement
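For readers following this exchange over the Rasch model (see also the Wright entry below), the model at issue gives the probability of a correct response from person v on item i in terms of an ability parameter theta_v and a difficulty parameter b_i:

$$
P(X_{vi} = 1 \mid \theta_v, b_i) \;=\; \frac{\exp(\theta_v - b_i)}{1 + \exp(\theta_v - b_i)}
$$

The exchange concerns estimation methods, necessary sample sizes, and the model's applicability rather than this algebraic form itself.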
Peer reviewed
Budescu, David – Journal of Educational Measurement, 1985
An important determinant of equating process efficiency is the correlation between the anchor test and components of each form. Use of some monotonic function of this correlation as a measure of equating efficiency is suggested. A model relating anchor test length and test reliability to this measure of efficiency is presented. (Author/DWH)
Descriptors: Correlation, Equated Scores, Mathematical Models, Standardized Tests
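Budescu's model is not reproduced in the abstract; as a rough illustration of why the anchor-form correlation rises with anchor length, classical test theory bounds that correlation by the geometric mean of the two tests' reliabilities, with the anchor's reliability projected from its length by the Spearman-Brown formula. The functions below are sketches under those textbook assumptions, not the paper's measure of equating efficiency:

```python
def spearman_brown(rel_one_item, k):
    """Reliability of a k-item test from single-item reliability (Spearman-Brown)."""
    return k * rel_one_item / (1 + (k - 1) * rel_one_item)

def anchor_form_correlation(rel_one_item, k_anchor, k_form):
    """Upper bound on the anchor-form correlation when both measure the same
    trait: the geometric mean of their reliabilities. Illustrative CTT sketch,
    not Budescu's model."""
    return (spearman_brown(rel_one_item, k_anchor) *
            spearman_brown(rel_one_item, k_form)) ** 0.5
```

Lengthening the anchor raises its reliability and therefore this bound, which is the qualitative link between anchor test length, test reliability, and equating efficiency that the abstract points to.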
Peer reviewed
Huynh, Huynh; Saunders, Joseph C. – Journal of Educational Measurement, 1980
Single administration (beta-binomial) estimates for the raw agreement index p and the corrected-for-chance kappa index in mastery testing are compared with those based on two test administrations in terms of estimation bias and sampling variability. Bias is about 2.5 percent for p and 10 percent for kappa. (Author/RL)
Descriptors: Comparative Analysis, Error of Measurement, Mastery Tests, Mathematical Models
Peer reviewed
Wright, Benjamin D. – Journal of Educational Measurement, 1977
Statements made in a previous article of this journal concerning the Rasch latent trait test model are questioned. Methods of estimation, necessary sample sizes, several formulas, and the general usefulness of the Rasch model are discussed. (JKS)
Descriptors: Computers, Error of Measurement, Item Analysis, Mathematical Models
Peer reviewed
Kolen, Michael J.; Whitney, Douglas R. – Journal of Educational Measurement, 1982
The adequacy of equipercentile, linear, one-parameter (Rasch), and three-parameter logistic item-response theory procedures for equating 12 forms of five Tests of General Educational Development was compared. Results indicated that the adequacy of an equating method depends on a variety of factors, such as test characteristics, equating design, and sample…
Descriptors: Achievement Tests, Comparative Analysis, Equated Scores, Equivalency Tests
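Of the methods compared, equipercentile equating is the easiest to show in a few lines: a form X score is mapped to the form Y score with the same percentile rank. The sketch below is a bare illustration of that idea (no presmoothing, and the function names are arbitrary), not the procedure used in the study:

```python
import numpy as np

def equipercentile_equate(scores_x, scores_y, x_points=None):
    """Map raw scores on form X to the form Y scale by matching percentile
    ranks. A bare-bones illustration; operational equating adds smoothing."""
    scores_x = np.sort(np.asarray(scores_x, dtype=float))
    scores_y = np.sort(np.asarray(scores_y, dtype=float))
    if x_points is None:
        x_points = np.unique(scores_x)

    # Percentile rank of each x value in the form X distribution ...
    pr = np.searchsorted(scores_x, x_points, side="right") / len(scores_x)
    # ... mapped to the form Y score at the same percentile rank.
    return np.quantile(scores_y, pr.clip(0, 1))
```

Operational equating also smooths the score distributions and depends on the data-collection design, which is one of the factors the abstract says the methods' adequacy depends on.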
Peer reviewed
Marsh, Herbert W. – Journal of Educational Measurement, 1993
Structural equation models of the same construct measured on different occasions are evaluated in 2 studies, one involving the evaluation of 157 college instructors over 8 years and one using data for over 2,200 high school students over 4 years from the Youth in Transition Study. Results challenge overreliance on simplex models. (SLD)
Descriptors: College Faculty, Comparative Analysis, High School Students, High Schools
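The simplex models the abstract warns against over-relying on are first-order autoregressive structures for repeated measurements; in the usual quasi-simplex form (notation assumed here, not quoted from the article), each occasion's latent score depends only on the immediately preceding occasion:

$$
y_t = \eta_t + \varepsilon_t, \qquad \eta_t = \beta_t \, \eta_{t-1} + \zeta_t, \qquad t = 2, \dots, T
$$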
Peer reviewed
Sireci, Stephen G.; And Others – Journal of Educational Measurement, 1991
The calculation of the reliability of a testlet-based test is demonstrated using data from 1,812 males and 2,216 females taking the Scholastic Aptitude Test verbal section and 3,866 examinees taking another reading test. Traditional reliabilities calculated on reading comprehension tests constructed of four testlets provided substantial overestimates…
Descriptors: College Entrance Examinations, Equations (Mathematics), Estimation (Mathematics), High School Students
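The overestimation reported here arises when items that share a passage (a testlet) are treated as locally independent; a common remedy is to compute coefficient alpha on testlet scores rather than item scores. A minimal sketch, with illustrative function and variable names rather than the authors' code:

```python
import numpy as np

def coefficient_alpha(parts):
    """Cronbach's alpha for an (n_examinees x n_parts) score matrix."""
    parts = np.asarray(parts, dtype=float)
    k = parts.shape[1]
    part_vars = parts.var(axis=0, ddof=1).sum()
    total_var = parts.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - part_vars / total_var)

def testlet_alpha(item_scores, testlet_sizes):
    """Alpha with testlets (groups of items sharing a passage) as the parts,
    which respects the local dependence that item-level alpha ignores."""
    item_scores = np.asarray(item_scores, dtype=float)
    edges = np.cumsum([0] + list(testlet_sizes))
    testlet_scores = np.column_stack(
        [item_scores[:, edges[i]:edges[i + 1]].sum(axis=1)
         for i in range(len(testlet_sizes))])
    return coefficient_alpha(testlet_scores)
```

With strongly passage-dependent items the testlet-level value is typically noticeably lower than the item-level value, which is the direction of bias the abstract reports.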
Peer reviewed
Muthen, Bengt O. – Journal of Educational Measurement, 1991
Data from the Second International Mathematics Study for 3,724 eighth graders are used to demonstrate the potential of multilevel factor analysis (MFA) methodology. Issues related to between-class and within-class decomposition of achievement variance and the change in this decomposition over the course of the eighth grade are revealed through…
Descriptors: Academic Achievement, Analysis of Variance, Classes (Groups of Students), Equations (Mathematics)
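The decomposition that multilevel factor analysis starts from splits the total covariance of student responses into a between-class part and a pooled within-class part. A minimal sketch of those two sample matrices under conventional definitions (the function name and the N − G and G − 1 divisors are illustrative choices, not Muthen's estimator):

```python
import numpy as np

def between_within_cov(scores, class_ids):
    """Split an (n_students x n_vars) score matrix into pooled within-class
    and between-class sample covariance matrices. Illustrative sketch."""
    scores = np.asarray(scores, dtype=float)
    class_ids = np.asarray(class_ids)
    classes = np.unique(class_ids)
    grand_mean = scores.mean(axis=0)

    within = np.zeros((scores.shape[1], scores.shape[1]))
    between = np.zeros_like(within)
    for c in classes:
        group = scores[class_ids == c]
        d_w = group - group.mean(axis=0)          # deviations from class mean
        within += d_w.T @ d_w
        d_b = group.mean(axis=0) - grand_mean     # class mean vs. grand mean
        between += len(group) * np.outer(d_b, d_b)

    s_within = within / (len(scores) - len(classes))
    s_between = between / (len(classes) - 1)
    return s_within, s_between
```

Fitting separate factor models to the two matrices is what allows between-class and within-class achievement variance, and changes in that split over the school year, to be examined separately, as the abstract describes.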