Publication Date
| Date range | Records |
| --- | --- |
| In 2026 | 7 |
| Since 2025 | 690 |
| Since 2022 (last 5 years) | 3191 |
| Since 2017 (last 10 years) | 7432 |
| Since 2007 (last 20 years) | 15070 |
Descriptor
| Descriptor | Records |
| --- | --- |
| Test Reliability | 15055 |
| Test Validity | 10290 |
| Reliability | 9763 |
| Foreign Countries | 7150 |
| Test Construction | 4828 |
| Validity | 4192 |
| Measures (Individuals) | 3880 |
| Factor Analysis | 3826 |
| Psychometrics | 3532 |
| Interrater Reliability | 3126 |
| Correlation | 3040 |
Audience
| Audience | Records |
| --- | --- |
| Researchers | 709 |
| Practitioners | 451 |
| Teachers | 208 |
| Administrators | 122 |
| Policymakers | 66 |
| Counselors | 42 |
| Students | 38 |
| Parents | 11 |
| Community | 7 |
| Support Staff | 6 |
| Media Staff | 5 |
Location
| Location | Records |
| --- | --- |
| Turkey | 1329 |
| Australia | 436 |
| Canada | 379 |
| China | 368 |
| United States | 271 |
| United Kingdom | 256 |
| Indonesia | 253 |
| Taiwan | 234 |
| Netherlands | 224 |
| Spain | 218 |
| California | 215 |
What Works Clearinghouse Rating
| Rating | Records |
| --- | --- |
| Meets WWC Standards without Reservations | 8 |
| Meets WWC Standards with or without Reservations | 9 |
| Does not meet standards | 6 |
Silverstein, A. B.; Fisher, Gary – Psychol Rep, 1970
The present findings provide some further support for two of Fiske's hypotheses: the adequacy of a test is a direct function of the structuring of the items and their substantive homogeneity. (Author)
Descriptors: Hypothesis Testing, Test Construction, Test Reliability, Test Validity
Peer reviewed: Bench, John; Parker, Anne – Journal of Child Psychology and Psychiatry, 1970
Descriptors: Infant Behavior, Infants, Predictive Measurement, Test Reliability
Valencia, Atilano A. – Bus Educ Forum, 1969
Reliability data based on empirical research are presented as an adjunct to national norms already available through the publisher. (CH)
Descriptors: Business Education, National Competency Tests, Test Reliability, Typewriting
Lefkowitz, David M. – J Counseling Psychol, 1970
Comparisons of classification errors, percentage of overlap, correlations of identical scales scored by both scoring procedures, intercorrelations of scales, and the ranking of scale scores within each subject showed that the two scoring systems produced different interest scores. (Author)
Descriptors: Comparative Analysis, Evaluation, Interest Inventories, Reliability
Alderman, Richard B.; Banfield, Terry J. – Res Quart AAHPER, 1969
Descriptors: Measurement Instruments, Measurement Techniques, Muscular Strength, Test Reliability
Peer reviewed: Mishra, Shitala P.; Brown, Kenneth H. – Journal of Clinical Psychology, 1983
Compared the Wechsler Adult Intelligence Scale (WAIS) and the WAIS-Revised in a sample of 88 adults. Indices of obtained correlation coefficients suggested a high degree of similarity between the two scales. Results also showed that WAIS IQs were significantly higher than corresponding IQs on the WAIS-R. (WAS)
Descriptors: Adults, Comparative Testing, Intelligence Tests, Scores
Peer reviewed: Cuenot, Randall G.; Darbes, Alex – Educational and Psychological Measurement, 1982
Thirty-one clinical psychologists scored Comprehension, Similarities, and Vocabulary subtest items common to the Wechsler Intelligence Scale for Children (WISC) and the Wechsler Intelligence Scale for Children, Revised (WISC-R). The results on interrater scoring agreement suggest that the scoring of these subtests may be less subjective than…
Descriptors: Clinical Psychology, Intelligence Tests, Psychologists, Scoring
Peer reviewed: Shapiro, Alexander – Psychometrika, 1982
The extent to which one can reduce the rank of a symmetric matrix by only changing its diagonal entries is discussed. Extension of this work to minimum trace factor analysis is presented. (Author/JKS)
Descriptors: Data Analysis, Factor Analysis, Mathematical Models, Matrices
Peer reviewed: Green, Samuel B. – Educational and Psychological Measurement, 1981
The proportion of agreement, G, and kappa indexes are shown to differ in how they correct for chance agreements between two observers. On the basis of the findings, it is suggested that no single agreement index is appropriate for all sets of data. (Author/BW)
Descriptors: Comparative Analysis, Measurement Techniques, Test Reliability, Testing Problems
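A minimal sketch of the kind of contrast Green describes, assuming the conventional definitions for two observers and a dichotomous code: the G index corrects against a fixed 50/50 chance rate, while Cohen's kappa estimates chance agreement from the observed marginals. The function name, variable names, and example counts below are illustrative assumptions, not taken from the article.

```python
# Illustrative comparison of chance-corrected agreement indices for two
# observers coding a dichotomous category. The 2x2 table counts are made up;
# the exact indices analyzed by Green (1981) may be defined differently.

def agreement_indices(a: int, b: int, c: int, d: int):
    """a, d = counts where the observers agree (both code 1, both code 0);
    b, c = counts where they disagree."""
    n = a + b + c + d
    p_o = (a + d) / n                      # raw proportion of observed agreement

    # G index: chance agreement fixed at 0.5 (categories assumed equally likely),
    # so G = (p_o - 0.5) / (1 - 0.5), i.e. 2 * p_o - 1.
    g = 2 * p_o - 1

    # Cohen's kappa: chance agreement estimated from the observed marginals.
    p1_obs1 = (a + b) / n                  # observer 1 codes "1"
    p1_obs2 = (a + c) / n                  # observer 2 codes "1"
    p_e = p1_obs1 * p1_obs2 + (1 - p1_obs1) * (1 - p1_obs2)
    kappa = (p_o - p_e) / (1 - p_e)
    return p_o, g, kappa


if __name__ == "__main__":
    # Skewed marginals: both observers code "1" most of the time.
    p_o, g, kappa = agreement_indices(a=40, b=4, c=6, d=2)
    print(f"observed agreement = {p_o:.2f}, G = {g:.2f}, kappa = {kappa:.2f}")
```

With skewed marginals the two indices diverge sharply (here G is roughly 0.62 while kappa is roughly 0.18), which illustrates why no single agreement index may be appropriate for all sets of data.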
Decker, Robert L. – Personnel Administrator, 1981
The sole objective of the employment interview should be to obtain and evaluate factual and verifiable information. The greater the discrepancy between the tasks of the job and the experience of the interviewee, the more critical will be the influence of the intuitive judgment of the interviewer. (Author/MLF)
Descriptors: Employment Interviews, Employment Practices, Employment Qualifications, Reliability
Peer reviewed: Cardinet, Jean; And Others – Journal of Educational Measurement, 1981
Since fixed and random facets may exist in objects of study as well as in conditions of observation, various modifications of the generalizability theory estimation formulas are required for different types of measurement designs. Various design modifications are proposed to improve reliability by reducing error variance. (Author/BW)
Descriptors: Analysis of Variance, Reliability, Research Design, Statistical Analysis
Peer reviewed: Bergan, John R. – Journal of Educational Statistics, 1980
The use of a quasi-equiprobability model in the measurement of observer agreement involving dichotomous coding categories is described. A measure of agreement is presented which gives the probability of agreement under the assumption that observation pairs reflecting disagreement will be equally probable. (Author/JKS)
Descriptors: Judges, Mathematical Models, Observation, Probability
Peer reviewed: Gorsuch, Richard L. – Educational and Psychological Measurement, 1980
Kaiser and Michael reported a formula for factor scores giving an internal consistency reliability and its square root, the domain validity. Using this formula is inappropriate if variables are included which have trivial weights rather than salient weights for the factor for which the score is being computed. (Author/RL)
Descriptors: Factor Analysis, Factor Structure, Scoring Formulas, Test Reliability
Peer reviewed: Norris, Marylee; And Others – Journal of Speech and Hearing Disorders, 1980
The study reported differences in agreement among four experienced listeners who analyzed the articulation skills of 97 four- and five-year-old children. Place and manner of articulation revealed differences in agreement, whereas voicing and syllabic function contributed little to either agreement or disagreement. (Author)
Descriptors: Articulation Impairments, Informal Assessment, Listening, Preschool Education
Peer reviewed: Fitzgerald, Gisela G. – Journal of Reading, 1981
Research indicates that three samples may not give a good indication of a workbook's narrative readability level. (MKM)
Descriptors: Elementary Secondary Education, Readability Formulas, Reading Research, Reliability


