McGary, Barbara A.; Burns, John A. – Educational and Psychological Measurement, 1974
Descriptors: Computer Programs, Scaling, Test Reliability

Erlich, Oded; Borich, Gary – Educational and Psychological Measurement, 1978
An overview of generalizability theory and a FORTRAN computer program for studying the generalizability of scores in a three-facet, four-factor design are presented, together with an illustrative example. (Author/JKS)
Descriptors: Computer Programs, Test Interpretation, Test Reliability

Burns, Edward – Educational and Psychological Measurement, 1976
A computer program written in Fortran IV is described that assesses reliability using analysis of variance. It produces a complete analysis of variance table in addition to reliability coefficients for unadjusted and adjusted data, as well as the intraclass correlation for m subjects and n items. (Author)
Descriptors: Analysis of Variance, Computer Programs, Correlation, Test Reliability
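The ANOVA approach Burns describes is Hoyt's method, in which the reliability of the n-item total equals Cronbach's alpha and the single-item reliability is an intraclass correlation. A minimal sketch in Python with numpy, assuming a complete m-by-n score matrix; the function name and the particular coefficients shown are illustrative, not the published program's exact output:

```python
import numpy as np

def anova_reliability(scores):
    """Two-way (subjects x items) ANOVA reliability from an m x n score matrix."""
    m, n = scores.shape
    grand = scores.mean()
    subj_means = scores.mean(axis=1)
    item_means = scores.mean(axis=0)

    ss_subj = n * ((subj_means - grand) ** 2).sum()
    ss_item = m * ((item_means - grand) ** 2).sum()
    ss_resid = ((scores - grand) ** 2).sum() - ss_subj - ss_item

    ms_subj = ss_subj / (m - 1)
    ms_resid = ss_resid / ((m - 1) * (n - 1))

    hoyt = 1.0 - ms_resid / ms_subj                              # n-item total; equals Cronbach's alpha
    icc = (ms_subj - ms_resid) / (ms_subj + (n - 1) * ms_resid)  # single-item intraclass correlation
    return hoyt, icc
```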

Callender, John C.; Osburn, H. G. – Educational and Psychological Measurement, 1977
A FORTRAN program for maximizing and cross-validating split-half reliability coefficients is described. Externally computed arrays of item means and covariances are used as input for each of two samples. The user may select a number of subsets from the complete set of items for analysis in a single run. (Author/JKS)
Descriptors: Computer Programs, Item Analysis, Test Reliability, Test Validity
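The quantity being maximized is the Spearman-Brown corrected split-half coefficient, computable directly from the item covariance matrix the user supplies. A sketch of that coefficient plus a crude random-search maximizer (not the authors' algorithm; the names and the search strategy are assumptions):

```python
import numpy as np

def split_half(cov, mask):
    """Spearman-Brown corrected split-half coefficient for one split.

    cov:  (k, k) item covariance matrix
    mask: boolean vector, True for items placed in the first half
    """
    cov = np.asarray(cov, float)
    a = mask.astype(float)
    b = 1.0 - a
    var_a = a @ cov @ a
    var_b = b @ cov @ b
    cov_ab = a @ cov @ b
    r = cov_ab / np.sqrt(var_a * var_b)
    return 2.0 * r / (1.0 + r)

def maximize_split_half(cov, n_tries=2000, seed=0):
    """Crude random search for the split with the largest coefficient."""
    rng = np.random.default_rng(seed)
    k = np.asarray(cov).shape[0]
    best = -np.inf
    for _ in range(n_tries):
        mask = np.zeros(k, dtype=bool)
        mask[rng.choice(k, k // 2, replace=False)] = True
        best = max(best, split_half(cov, mask))
    return best
```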

Cicchetti, Domenic V. – Educational and Psychological Measurement, 1976
A computer program which computes both the interjudge reliability of individual measurements and the extent to which the judges are biased in their ratings vis-a-vis each other is presented. The methods proposed are recommended on the basis of recent developments in statistical research. (Author/JKS)
Descriptors: Computer Programs, Individual Testing, Test Bias, Test Reliability
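The abstract does not spell out the statistics, but one simple reading is a correlation between judges for reliability plus a paired test of mean difference for bias. A hypothetical two-judge sketch, not the program's exact methods:

```python
import numpy as np

def judge_reliability_and_bias(r1, r2):
    """Reliability and bias for two judges rating the same cases.

    r1, r2: 1-D arrays of ratings from judge 1 and judge 2.
    Returns the Pearson correlation (reliability) and the paired
    t statistic for the mean difference (bias), n - 1 degrees of freedom.
    """
    r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
    n = len(r1)
    reliability = np.corrcoef(r1, r2)[0, 1]
    d = r1 - r2
    t_bias = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return reliability, t_bias
```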

Schafer, William D. – Educational and Psychological Measurement, 1972
A listing of the program and a program description may be obtained by writing the author at the Department of Measurement and Statistics, College of Education, University of Maryland. (Author/MB)
Descriptors: Computer Programs, Program Descriptions, Test Reliability, Test Validity

Porter, D. Thomas – 1977
Reliability estimation is critical to precise quantitative research, yet researchers have limited tools for assessing the reliability of evolving instruments. Consequently, cursory assessment is typical and in-depth evaluation is rare. This paper presents a rationale for and description of PIAS, a computerized instrument analysis system. PIAS…
Descriptors: Computer Programs, Item Analysis, Reliability, Statistical Analysis

Woodhouse, Brian; Jackson, Paul H. – Psychometrika, 1977
Finding and interpreting lower bounds for reliability coefficients for tests with non-homogeneous items has been a problem for psychometricians. A computer search procedure is developed for locating such a lower bound in a variety of settings. (Author/JKS)
Descriptors: Computer Programs, Mathematical Models, Measurement, Test Interpretation
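The paper's search procedure is not reproduced here; for comparison, a classical closed-form lower bound that also tolerates non-homogeneous items is Guttman's lambda-2, computable from the item covariance matrix. A sketch under that substitution:

```python
import numpy as np

def guttman_lambda2(cov):
    """Guttman's lambda-2 lower bound to reliability from an item
    covariance matrix (a classical bound, not the paper's procedure)."""
    cov = np.asarray(cov, float)
    k = cov.shape[0]
    total_var = cov.sum()                       # variance of the total score
    off = cov - np.diag(np.diag(cov))           # off-diagonal covariances only
    return (off.sum() + np.sqrt(k / (k - 1) * (off ** 2).sum())) / total_var
```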

Strasler, Gregg M.; Raeth, Peter G. – 1977
The study investigated the feasibility of adapting the coefficient kappa, introduced by Cohen (1960) and elaborated by Swaminathan, Hambleton, and Algina (1974), as an internal consistency estimate for criterion-referenced tests in a single test administration. The authors proposed the use of kappa as an internal consistency estimate by logically dividing…
Descriptors: Computer Programs, Criterion Referenced Tests, Multiple Choice Tests, Test Reliability
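The half-test kappa idea can be sketched as follows: classify each examinee as master or non-master on each logically defined half and compute Cohen's kappa on the resulting agreement table. The cutoff handling and split rule below are assumptions, not the authors' exact procedure:

```python
import numpy as np

def kappa_internal_consistency(scores, cut, split_mask):
    """Cohen's kappa between mastery decisions on two test halves.

    scores:     (examinees, items) 0/1 matrix
    cut:        proportion-correct cutoff applied to each half
    split_mask: boolean vector marking items in the first half
    """
    scores = np.asarray(scores, float)
    split_mask = np.asarray(split_mask, bool)
    half1 = scores[:, split_mask].mean(axis=1) >= cut
    half2 = scores[:, ~split_mask].mean(axis=1) >= cut

    p_obs = np.mean(half1 == half2)                   # observed decision agreement
    p1, p2 = half1.mean(), half2.mean()
    p_exp = p1 * p2 + (1 - p1) * (1 - p2)             # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)
```

For example, an odd/even item split with cut=0.8 would be passed as kappa_internal_consistency(scores, 0.8, odd_item_mask).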

Noe, Michael J.; Algina, James – 1977
Single-administration procedures for estimating the coefficient of agreement, a reliability index for criterion referenced tests, were recently developed by Subkoviak. The procedures require a distributional assumption for errors of measurement and an estimate of each examinee's true score. A computer simulation of tests composed of items that…
Descriptors: Computer Programs, Criterion Referenced Tests, Simulation, Test Reliability
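In simplified form, Subkoviak's single-administration estimate assigns each examinee a binomial probability of passing the cutoff and averages the probability of a consistent decision across two hypothetical administrations. The sketch below substitutes the observed proportion correct for the regressed true-score estimate, so it illustrates the estimator rather than reproducing the procedure the simulation evaluated:

```python
import numpy as np
from math import comb

def binom_cdf_lt(cut, n, p):
    """P(X < cut) for X ~ Binomial(n, p)."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(cut))

def agreement_coefficient(scores, cut):
    """Single-administration agreement coefficient for a raw-score cutoff.

    scores: (examinees, n items) 0/1 matrix; cut: raw-score cutoff."""
    scores = np.asarray(scores, int)
    n = scores.shape[1]
    agree = []
    for x in scores.sum(axis=1):
        p_true = x / n                          # crude stand-in for the true score
        p_fail = binom_cdf_lt(cut, n, p_true)
        agree.append(p_fail**2 + (1 - p_fail)**2)
    return float(np.mean(agree))
```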

Mays, Robert – Educational and Psychological Measurement, 1978
A FORTRAN program for clustering variables using the alpha coefficient of reliability is described. For batch operation, a rule for stopping the agglomerative procedure is available. The conversational version of the program allows the user to intervene in the process in order to test the final solution for sensitivity to changes. (Author/JKS)
Descriptors: Cluster Analysis, Computer Programs, Factor Analysis, Online Systems
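The agglomerative idea can be illustrated by greedily merging the pair of clusters whose union has the highest coefficient alpha and stopping when that alpha drops below a threshold, a stand-in for the batch-mode stopping rule; the threshold and function names below are assumptions:

```python
import numpy as np
from itertools import combinations

def alpha(cov, idx):
    """Cronbach's alpha for the variables in idx, from a covariance matrix."""
    sub = np.asarray(cov, float)[np.ix_(idx, idx)]
    k = len(idx)
    return k / (k - 1) * (1 - np.trace(sub) / sub.sum())

def alpha_cluster(cov, min_alpha=0.7):
    """Greedy agglomerative clustering on the alpha criterion (sketch only)."""
    clusters = [[i] for i in range(np.asarray(cov).shape[0])]
    while len(clusters) > 1:
        a, b, best_alpha = max(
            ((i, j, alpha(cov, clusters[i] + clusters[j]))
             for i, j in combinations(range(len(clusters)), 2)),
            key=lambda t: t[2],
        )
        if best_alpha < min_alpha:              # stopping rule
            break
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters
```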

Cicchetti, Domenic V.; And Others – Educational and Psychological Measurement, 1978
This program computes specific category agreement levels for both nominally and ordinally scaled data. For ordinally scaled data, an option is available for collapsing the original scale to a smaller number of categories, with the goal of improving the level of interrater reliability for the rating scale. (Author)
Descriptors: Attitude Measures, Computer Programs, Measurement Techniques, Rating Scales
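Specific-category agreement for two raters is commonly the proportion of agreements among the ratings in which either rater used the category; a minimal sketch (the two-rater restriction and function name are assumptions, not the program's interface):

```python
import numpy as np

def category_agreement(r1, r2, categories):
    """Per-category agreement for two raters over the same cases.

    For each category c, returns 2*n_cc / (n_c. + n_.c): agreement among
    all ratings in which either rater used c."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    out = {}
    for c in categories:
        both = np.sum((r1 == c) & (r2 == c))
        either = np.sum(r1 == c) + np.sum(r2 == c)
        out[c] = 2 * both / either if either else np.nan
    return out
```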

Martois, John S. – Educational and Psychological Measurement, 1973
Copies of this program may be obtained from the author at the University of Southern California, School of Pharmacy, University Park, Los Angeles 90007. (CB)
Descriptors: Comparative Analysis, Computer Programs, Input Output, Statistical Analysis

Carroll, C. Dennis – Educational and Psychological Measurement, 1976
A computer program for item evaluation, reliability estimation, and test scoring is described. The program contains a variable format procedure allowing flexible input of responses. Achievement tests and affective scales may be analyzed. (Author)
Descriptors: Achievement Tests, Affective Measures, Computer Programs, Item Analysis
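The core item-evaluation statistics such a program reports can be sketched as item difficulty plus the corrected item-total correlation; the particular statistics shown are assumptions about a typical item analysis, not the published program's exact output:

```python
import numpy as np

def item_analysis(scores):
    """Item difficulty and corrected item-total correlation for each item
    of a (persons, items) score matrix."""
    scores = np.asarray(scores, float)
    total = scores.sum(axis=1)
    stats = []
    for j in range(scores.shape[1]):
        rest = total - scores[:, j]                   # total score excluding item j
        r = np.corrcoef(scores[:, j], rest)[0, 1]
        stats.append((scores[:, j].mean(), r))
    return stats
```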

Hambleton, Ronald K.; Cook, Linda L. – Journal of Educational Measurement, 1977
This article presents a non-mathematical introduction to latent trait test models and some of their features. Latent trait models are compared to classical test models. Two promising applications of latent trait models and available computer programs are discussed. (Author/JKS)
Descriptors: Computer Programs, Latent Trait Theory, Measurement, Models
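The latent trait models discussed express the probability of a correct response as a function of ability; the two-parameter logistic item characteristic curve is one standard form (a generic illustration, not tied to the specific programs reviewed):

```python
import numpy as np

def icc_2pl(theta, a, b):
    """Two-parameter logistic item characteristic curve: probability of a
    correct response at ability theta, with discrimination a and difficulty b.
    The Rasch model is the special case a = 1."""
    return 1.0 / (1.0 + np.exp(-1.7 * a * (theta - b)))  # 1.7 approximates the normal ogive
```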