Showing all 9 results
Peer reviewed
Morris, John D. – Educational and Psychological Measurement, 1979
A computer program is described that creates and updates a cumulative item covariance matrix upon each administration of an instrument. Use of this program would facilitate keeping a running log of reliability in situations in which the test is administered to different groups over a period of time. (Author/JKS)
Descriptors: Analysis of Covariance, Computer Programs, Correlation, Item Analysis
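The kind of cumulative log the abstract describes can be sketched as follows. This is an illustrative reconstruction, not Morris's actual program, and the class and function names are hypothetical: item cross-products are accumulated across administrations, and the covariance matrix and a reliability estimate (here Cronbach's alpha, which the abstract does not name but which is the standard coefficient computable from an item covariance matrix) are derived on demand.

```python
class CovarianceLog:
    """Accumulates item-score cross-products across administrations so
    the cumulative covariance matrix can be updated with each new group.
    (Hypothetical sketch of the approach the abstract describes.)"""

    def __init__(self, k):
        self.k = k          # number of items
        self.n = 0          # examinees seen so far
        self.sums = [0.0] * k
        self.cross = [[0.0] * k for _ in range(k)]

    def add(self, scores):
        """Fold in one examinee's k item scores."""
        self.n += 1
        for i, x in enumerate(scores):
            self.sums[i] += x
            for j, y in enumerate(scores):
                self.cross[i][j] += x * y

    def covariance(self):
        """Unbiased item covariance matrix from the running sums."""
        n = self.n
        return [[(self.cross[i][j] - self.sums[i] * self.sums[j] / n) / (n - 1)
                 for j in range(self.k)] for i in range(self.k)]


def cronbach_alpha(cov):
    """Cronbach's alpha from a k x k item covariance matrix."""
    k = len(cov)
    item_var = sum(cov[i][i] for i in range(k))
    total_var = sum(sum(row) for row in cov)
    return (k / (k - 1)) * (1 - item_var / total_var)
```

Because only sums and cross-products are stored, each new administration updates the log in O(k²) without revisiting earlier groups' raw scores.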
Peer reviewed
Huynh, Huynh – Journal of Educational Statistics, 1979
In mastery testing, the raw agreement index and the kappa index may be estimated via one test administration when the test scores follow beta-binomial distributions. This paper reports formulae, tables, and a computer program which facilitate the computation of the standard errors of the estimates. (Author/CTM)
Descriptors: Computer Programs, Cutting Scores, Decision Making, Mastery Tests
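Huynh's contribution is estimating these indices from a single administration under the beta-binomial model; the indices themselves are defined over two parallel administrations. A minimal sketch of the two-administration definitions being estimated (the function name is illustrative, and this is not Huynh's single-administration procedure):

```python
def agreement_indices(x, y, cutoff):
    """Raw agreement p0 and kappa for mastery classifications from two
    parallel test administrations x and y (lists of scores). An examinee
    is classified a master on a form when the score reaches the cutoff."""
    n = len(x)
    m1 = [s >= cutoff for s in x]
    m2 = [s >= cutoff for s in y]
    p0 = sum(a == b for a, b in zip(m1, m2)) / n       # raw agreement
    p1, p2 = sum(m1) / n, sum(m2) / n                   # marginal mastery rates
    pc = p1 * p2 + (1 - p1) * (1 - p2)                  # chance agreement
    kappa = (p0 - pc) / (1 - pc)                        # chance-corrected
    return p0, kappa
```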
Peer reviewed
Wackerly, D. D.; Robinson, D. H. – Psychometrika, 1983
A statistical method for testing the agreement between a judge's assessment of an object or subject and a known standard is developed and shown to be superior to two other methods which appear in the literature. (Author/JKS)
Descriptors: Algorithms, Computer Programs, Judges, Measurement Techniques
Peer reviewed
Cliff, Norman; And Others – Applied Psychological Measurement, 1979
Monte Carlo research with TAILOR, a program using implied orders as a basis for tailored testing, is reported. TAILOR typically required about half the available items to estimate, for each simulated examinee, the responses on the remainder. (Author/CTM)
Descriptors: Adaptive Testing, Computer Programs, Item Sampling, Nonparametric Statistics
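The core idea behind implied orders can be sketched as follows. This is a simplified illustration, not TAILOR itself: when a dominance relation between items holds, a pass on a harder item implies passes on the items it dominates, and a failure propagates the other way, so responses observed on some items determine responses on the remainder without administering them.

```python
def implied_responses(observed, implications):
    """observed: {item: 0/1} for items already administered.
    implications: set of (a, b) pairs meaning a pass on item b implies
    a pass on item a (b dominates a). Propagates what the observed
    responses imply about the unadministered items.
    (Illustrative sketch of the implied-orders idea, not TAILOR.)"""
    inferred = dict(observed)
    changed = True
    while changed:
        changed = False
        for a, b in implications:
            if inferred.get(b) == 1 and inferred.get(a) != 1:
                inferred[a] = 1       # pass on harder item implies pass
                changed = True
            if inferred.get(a) == 0 and inferred.get(b) != 0:
                inferred[b] = 0       # fail on easier item implies fail
                changed = True
    return inferred
```

With a chain of implications, a single observed response can settle every item in the chain, which is the mechanism behind estimating the remaining responses from roughly half the items.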
Peer reviewed
Hambleton, Ronald K.; And Others – Journal of Educational Measurement, 1983
A new method was developed to assist in selecting a test length by utilizing computer simulation procedures and item response theory. A demonstration of the method presents results that address the influence of item pool heterogeneity, of the pool's match to the objectives of interest, and of the method of item selection. (Author/PN)
Descriptors: Computer Programs, Criterion Referenced Tests, Item Banks, Latent Trait Theory
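The simulation approach can be sketched roughly as follows. This is an illustrative reconstruction using a Rasch (one-parameter) model, not Hambleton et al.'s actual program: generate examinees and item responses, classify each examinee by proportion correct against a cut score, and observe how classification accuracy behaves for a candidate test length.

```python
import math
import random


def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1 / (1 + math.exp(-(theta - b)))


def classification_accuracy(n_items, cutoff_theta=0.0,
                            n_examinees=2000, seed=1):
    """Simulate Rasch responses for an n_items-long test and estimate
    how often examinees land on the correct side of cutoff_theta when
    classified by proportion correct against the expected cut score.
    (Illustrative sketch; item difficulties drawn uniformly on [-2, 2].)"""
    rng = random.Random(seed)
    bs = [rng.uniform(-2, 2) for _ in range(n_items)]
    # Expected proportion correct for an examinee exactly at the cutoff.
    cut_score = sum(rasch_prob(cutoff_theta, b) for b in bs) / n_items
    correct = 0
    for _ in range(n_examinees):
        theta = rng.gauss(0, 1)
        pc = sum(rng.random() < rasch_prob(theta, b) for b in bs) / n_items
        if (pc >= cut_score) == (theta >= cutoff_theta):
            correct += 1
    return correct / n_examinees
```

Repeating this for a range of `n_items` values traces accuracy against test length, which is the basis for choosing a length that meets a target decision accuracy.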
Peer reviewed
Aiken, Lewis R. – Educational and Psychological Measurement, 1980
Procedures for computing content validity and consistency reliability coefficients, and for determining the statistical significance of these coefficients, are described. The procedures, which employ the multinomial probability distribution for small samples and normal-curve probability estimates for large samples, can be used where judgments are made on…
Descriptors: Computer Programs, Measurement Techniques, Probability, Questionnaires
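The content validity coefficient associated with this paper is commonly written V = S / (n(c - 1)), where S sums each rater's rating minus the lowest scale point, n is the number of raters, and c is the number of rating categories. A minimal sketch, assuming that formula is the one intended (the truncated abstract does not state it):

```python
def aiken_v(ratings, lo, hi):
    """Aiken's V for one item rated by n raters on a lo..hi scale.
    Ranges from 0 (all raters at the lowest point) to 1 (all at the
    highest); values near 1 indicate strong endorsement of the item."""
    n = len(ratings)
    c = hi - lo + 1                      # number of rating categories
    s = sum(r - lo for r in ratings)     # total elevation above the floor
    return s / (n * (c - 1))
```

The significance procedures the abstract mentions would then assess whether an observed V exceeds what chance rating would produce.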
Peer reviewed
Rentz, R. Robert – Educational and Psychological Measurement, 1980
This paper elaborates on the work of Cardinet and others by clarifying some points regarding calculations, specifically with reference to existing computer programs, and by presenting illustrative examples of the calculation and interpretation of several generalizability coefficients from a complex six-facet (factor) design. (Author/RL)
Descriptors: Analysis of Variance, Computation, Computer Programs, Error of Measurement
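Rentz's examples use a six-facet design; a one-facet persons x items sketch shows the basic machinery (variance components from the two-way ANOVA mean squares, then the generalizability coefficient). This is a generic illustration of a G-study calculation, not the paper's own examples:

```python
def g_coefficient(scores):
    """scores[p][i]: persons x items matrix. One-facet crossed G study:
    estimate the person and residual variance components from the
    two-way ANOVA, then form the relative generalizability coefficient
    (E-rho-squared) for a test of the same length."""
    n_p, n_i = len(scores), len(scores[0])
    grand = sum(map(sum, scores)) / (n_p * n_i)
    p_means = [sum(row) / n_i for row in scores]
    i_means = [sum(scores[p][i] for p in range(n_p)) / n_p for i in range(n_i)]
    ss_p = n_i * sum((m - grand) ** 2 for m in p_means)
    ss_i = n_p * sum((m - grand) ** 2 for m in i_means)
    ss_tot = sum((scores[p][i] - grand) ** 2
                 for p in range(n_p) for i in range(n_i))
    ss_res = ss_tot - ss_p - ss_i
    ms_p = ss_p / (n_p - 1)
    ms_res = ss_res / ((n_p - 1) * (n_i - 1))
    var_p = (ms_p - ms_res) / n_i        # person variance component
    var_rel = ms_res / n_i               # relative error for an n_i-item test
    return var_p / (var_p + var_rel)
```

In this simplest design the coefficient reduces to coefficient alpha; the multi-facet designs the paper treats add further variance components but follow the same pattern.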
Peer reviewed
Huck, Schuyler W.; And Others – Educational and Psychological Measurement, 1981
Believing that examinee-by-item interaction should be conceptualized as true score variability rather than as a result of errors of measurement, Lu proposed a modification of Hoyt's analysis of variance reliability procedure. Via a computer simulation study, it is shown that Lu's approach does not separate interaction from error. (Author/RL)
Descriptors: Analysis of Variance, Comparative Analysis, Computer Programs, Difficulty Level
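For reference, Hoyt's procedure estimates reliability from the persons and residual mean squares of a persons x items ANOVA; it is algebraically equivalent to coefficient alpha. The point at issue in the paper is that the residual mean square lumps examinee-by-item interaction together with error, which this estimate cannot separate. A minimal sketch of the standard formula (not Lu's modification):

```python
def hoyt_reliability(ms_persons, ms_residual):
    """Hoyt's ANOVA reliability from a persons x items design. The
    residual mean square contains both interaction and error variance,
    which is why attempts to separate the two in this design fail."""
    return (ms_persons - ms_residual) / ms_persons
```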
Tinari, Frank D. – Improving College and University Teaching, 1979
Computerized analysis of multiple choice test items is explained. Examples of item analysis applications in the introductory economics course are discussed with respect to three objectives: to evaluate learning; to improve test items; and to help improve classroom instruction. Problems, costs and benefits of the procedures are identified. (JMD)
Descriptors: College Instruction, Computer Programs, Discriminant Analysis, Economics Education
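For dichotomously scored multiple choice items, the item analysis described typically reports each item's difficulty (proportion correct) and discrimination (correlation of the item with the rest of the test). A minimal sketch of those two statistics, with illustrative function names (the article's own program is not shown in the abstract):

```python
def mean(xs):
    return sum(xs) / len(xs)


def corr(xs, ys):
    """Pearson correlation between two score lists."""
    mx, my = mean(xs), mean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5


def item_analysis(resp):
    """resp[e][i] = 1/0 for examinee e on item i. Returns, per item,
    (difficulty, discrimination): proportion correct and the item's
    correlation with the total score on the remaining items."""
    n_items = len(resp[0])
    stats = []
    for i in range(n_items):
        item = [r[i] for r in resp]
        rest = [sum(r) - r[i] for r in resp]   # rest score avoids overlap
        stats.append((mean(item), corr(item, rest)))
    return stats
```

Items with very high or low difficulty, or with near-zero (or negative) discrimination, are the candidates for revision that this kind of analysis flags for instructors.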