Publication Date
| Period | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 74 |
| Since 2022 (last 5 years) | 509 |
| Since 2017 (last 10 years) | 1084 |
| Since 2007 (last 20 years) | 2603 |
Audience
| Audience | Results |
| --- | --- |
| Researchers | 169 |
| Practitioners | 49 |
| Teachers | 32 |
| Administrators | 8 |
| Policymakers | 8 |
| Counselors | 4 |
| Students | 4 |
| Media Staff | 1 |
Location
| Location | Results |
| --- | --- |
| Turkey | 173 |
| Australia | 81 |
| Canada | 79 |
| China | 72 |
| United States | 56 |
| Taiwan | 44 |
| Germany | 43 |
| Japan | 41 |
| United Kingdom | 39 |
| Iran | 37 |
| Indonesia | 35 |
What Works Clearinghouse Rating
| Rating | Results |
| --- | --- |
| Meets WWC Standards without Reservations | 1 |
| Meets WWC Standards with or without Reservations | 1 |
| Does not meet standards | 1 |
Peer reviewed: Bertou, Patrick; Clasen, Robert E. – Journal of Experimental Education, 1971
Descriptors: Cross Cultural Studies, Item Analysis, Personality Measures, Spanish
Lachman, Roy; Mistler, Janet L. – J Exp Psychol, 1970
Descriptors: Factor Analysis, Item Analysis, Learning Processes, Recall (Psychology)
Proger, Barton B.; And Others – J Educ Res, 1970
Descriptors: Concept Formation, Concept Teaching, Instructional Materials, Item Analysis
Masters, Laraine – Amer J Psychol, 1970
Descriptors: Evaluation Criteria, Extinction (Psychology), Feedback, Item Analysis
Haynes, Jack R. – Amer Educ Res J, 1970
Descriptors: Cluster Grouping, Cognitive Ability, Cognitive Processes, Item Analysis
Craig, W. J. – J Clin Psychol, 1970
The Nursing Observation of Behavior Scales provides 55 five-point descriptive items for the objective assessment of patients in psychiatric wards. (CK)
Descriptors: Behavioral Science Research, Item Analysis, Nurses, Psychological Testing
Finn, R. H. – Educ Psychol Meas, 1970
Descriptors: Analysis of Variance, Classification, Correlation, Data Analysis
Ebel, Robert L. – Educ Psychol Meas, 1969
Descriptors: Item Analysis, Multiple Choice Tests, Objective Tests, Test Reliability
Peer reviewed: Gilmer, Jerry S.; Feldt, Leonard S. – Psychometrika, 1983
Estimating the reliability of measures derived from separate questions on essay tests or individual judges on a rater panel is considered. Cronbach's alpha is shown to underestimate reliability in these cases. Some alternative coefficients are presented. (JKS)
Descriptors: Essay Tests, Item Analysis, Measurement Techniques, Rating Scales
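The coefficient this abstract says can underestimate reliability is standard Cronbach's alpha, which can be sketched in a few lines. The function name and scores below are hypothetical illustrations, and this is the classical formula, not the alternative coefficients the paper proposes:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for k items scored on the same n respondents.

    `items` is a list of k sequences, one per item (e.g. one per essay
    question or rater), each holding the scores of the same respondents.
    """
    k = len(items)
    item_vars = sum(pvariance(col) for col in items)      # sum of item variances
    totals = [sum(scores) for scores in zip(*items)]      # each respondent's total score
    total_var = pvariance(totals)                         # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Three hypothetical items answered by five respondents.
scores = [
    [3, 4, 2, 4, 3],
    [2, 4, 2, 3, 3],
    [3, 3, 1, 4, 2],
]
print(round(cronbach_alpha(scores), 3))  # → 0.857
```

Alpha equals reliability only when the parts are essentially tau-equivalent; when they are not (as with heterogeneous essay questions or judges), it serves as a lower bound, which is the underestimation the abstract refers to.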
Peer reviewed: van den Wollenberg, Arnold L. – Psychometrika, 1982
Presently available test statistics for the Rasch model are shown to be insensitive to violations of the assumption of test unidimensionality. Two new statistics are presented. One is similar to available statistics, but with some improvements; the other addresses the problem of insensitivity to unidimensionality. (Author/JKS)
Descriptors: Item Analysis, Latent Trait Theory, Statistics, Test Reliability
Peer reviewed: Streiner, David L.; Miller, Harold R. – Journal of Consulting and Clinical Psychology, 1979
A table is provided and described for prorating Minnesota Multiphasic Personality Inventory scales when the entire Form R has not been completed. Good concordance of profile types was found for 300 and 350 completed questions. Interpretations based on 200 items may be suspect. (Author)
Descriptors: Item Analysis, Patients, Personality Assessment, Personality Measures
Peer reviewed: Raju, Nambury S. – Psychometrika, 1979
An important relationship is given for two generalizations of coefficient alpha: (1) Rajaratnam, Cronbach, and Gleser's generalizability formula for stratified-parallel tests, and (2) Raju's coefficient beta. (Author/CTM)
Descriptors: Item Analysis, Mathematical Formulas, Test Construction, Test Items
Peer reviewed: Bradshaw, Charles W., Jr. – Educational and Psychological Measurement, 1980
Two alternative procedures to Rogers' method of using control charts to display item statistics are discussed. The data itself determines limit and centerline values, thus permitting these values to be compared to any criterion difficulty level(s) deemed appropriate for a given set of test items. (Author/RL)
Descriptors: Flow Charts, Item Analysis, Mathematical Formulas, Quality Control
Peer reviewed: Cudeck, Robert – Journal of Educational Measurement, 1980
Methods for evaluating the consistency of responses to test items were compared. When a researcher is unwilling to make the assumptions of classical test theory, has only a small number of items, or is in a tailored testing context, Cliff's dominance indices may be useful. (Author/CTM)
Descriptors: Error Patterns, Item Analysis, Test Items, Test Reliability
Peer reviewed: Lueptow, Lloyd B.; And Others – Educational and Psychological Measurement, 1976
After taking tests in introductory college courses, students were asked to rate the quality of the items. Correlations between student ratings and item-test point biserial correlations revealed little or no relationship except for a subset of students who had performed well when taking the tests. (JKS)
Descriptors: College Students, Correlation, Course Evaluation, Item Analysis
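The item-test point biserial correlation this study correlates with student ratings can be sketched directly from its standard formula. The function name and scores below are hypothetical illustrations, not data from the study:

```python
from math import sqrt
from statistics import mean, pstdev

def point_biserial(item, totals):
    """Point-biserial correlation between a 0/1-scored item and total test scores."""
    ones = [t for x, t in zip(item, totals) if x == 1]   # totals of examinees who answered correctly
    zeros = [t for x, t in zip(item, totals) if x == 0]  # totals of those who answered incorrectly
    p = len(ones) / len(item)                            # proportion answering correctly
    return (mean(ones) - mean(zeros)) / pstdev(totals) * sqrt(p * (1 - p))

# Hypothetical 0/1 responses to one item and total scores for six examinees.
item = [1, 1, 0, 1, 0, 0]
totals = [9, 8, 4, 7, 5, 3]
print(round(point_biserial(item, totals), 3))  # → 0.926
```

A high value means examinees who got the item right also scored well overall, which is why the index is used as a measure of item discrimination.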


