Publication Date
| Date Range | Records |
| In 2026 | 0 |
| Since 2025 | 74 |
| Since 2022 (last 5 years) | 509 |
| Since 2017 (last 10 years) | 1084 |
| Since 2007 (last 20 years) | 2603 |
Audience
| Audience | Records |
| Researchers | 169 |
| Practitioners | 49 |
| Teachers | 32 |
| Administrators | 8 |
| Policymakers | 8 |
| Counselors | 4 |
| Students | 4 |
| Media Staff | 1 |
Location
| Country | Records |
| Turkey | 173 |
| Australia | 81 |
| Canada | 79 |
| China | 72 |
| United States | 56 |
| Taiwan | 44 |
| Germany | 43 |
| Japan | 41 |
| United Kingdom | 39 |
| Iran | 37 |
| Indonesia | 35 |
What Works Clearinghouse Rating
| Rating | Records |
| Meets WWC Standards without Reservations | 1 |
| Meets WWC Standards with or without Reservations | 1 |
| Does not meet standards | 1 |
Andrews, Frances M.; Deihl, Ned C. – Journal of Research in Music Education, 1970
Descriptors: Concept Formation, Concept Teaching, Curriculum Design, Elementary School Students
Alumbaugh, Richard V.; And Others – Educational and Psychological Measurement, 1969
Descriptors: Alcoholism, Behavior Rating Scales, Correlation, Discriminant Analysis
Elton, Charles F.; Shevel, Linda R. – ACT Research Reports, 1969
Descriptors: Academic Ability, Academic Achievement, Extracurricular Activities, Grouping (Instructional Purposes)
Veale, James R.; Foreman, Dale I. – Journal of Educational Measurement, 1983 (peer reviewed)
Statistical procedures for measuring heterogeneity of test item distractor distributions, or cultural variation, are presented. These procedures are based on the notion that examinees' responses to the incorrect options of a multiple-choice test provide more information concerning cultural bias than their correct responses. (Author/PN)
Descriptors: Ethnic Bias, Item Analysis, Mathematical Models, Multiple Choice Tests
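The Veale and Foreman abstract above turns on comparing how different examinee groups distribute their wrong answers across an item's distractors. One standard, generic way to quantify such heterogeneity is a chi-square test of homogeneity on the distractor counts; the sketch below illustrates only that general idea, and the counts, group labels, and use of scipy are assumptions rather than the authors' published procedure.

```python
# Minimal sketch: chi-square test of homogeneity on the distractor
# (wrong-answer) choices for a single multiple-choice item.
# Counts and group labels are invented for illustration; this is a
# generic approach, not Veale and Foreman's specific procedure.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: two examinee groups; columns: how many examinees (among those
# answering incorrectly) chose each of the item's three distractors.
distractor_counts = np.array([
    [40, 25, 15],   # group A
    [18, 30, 32],   # group B
])

chi2, p_value, dof, expected = chi2_contingency(distractor_counts)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
# A small p-value suggests the groups spread their errors differently
# across the distractors, i.e. heterogeneous distractor distributions.
```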
Piotrowski, Chris – Educational and Psychological Measurement, 1983 (peer reviewed)
The present study compared the factor structures obtained when the same data were factor analyzed: (1) for the total sample (463 fifth grade students) and by sex for collapsed concepts, and (2) by individual concept for the total sample by sex, and for internal and external locus of control groups. (Author/PN)
Descriptors: Factor Analysis, Factor Structure, Individual Differences, Intermediate Grades
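Piotrowski's study above compares factor structures estimated in different subgroups. A common generic summary of how similar two factor solutions are is Tucker's congruence coefficient between corresponding loading vectors; the sketch below uses invented loadings and is an illustration of that coefficient, not a reproduction of the study's analysis.

```python
# Minimal sketch: Tucker's congruence coefficient between loadings of
# the "same" factor estimated in two subgroups. Loadings are invented;
# this is a generic illustration, not Piotrowski's analysis.
import numpy as np

def tucker_congruence(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine-type similarity between two factor-loading vectors."""
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

loadings_girls = np.array([0.62, 0.55, 0.70, 0.48, 0.66])
loadings_boys = np.array([0.58, 0.60, 0.65, 0.52, 0.61])
print(f"congruence = {tucker_congruence(loadings_girls, loadings_boys):.3f}")
# Values near 1.0 indicate essentially the same factor in both groups.
```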
Green, Kathy E. – Educational and Psychological Measurement, 1983 (peer reviewed)
This study was concerned with the reliability and validity of subjective judgments about five characteristics of multiple-choice test items from an introductory college-level astronomy test: (1) item difficulty, (2) language complexity, (3) content importance or relevance, (4) response set convergence, and (5) process complexity. (Author)
Descriptors: Achievement Tests, Astronomy, Difficulty Level, Evaluative Thinking
Holden, Ronald R.; Jackson, Douglas N. – Journal of Consulting and Clinical Psychology, 1979 (peer reviewed)
Presented a distinction between concepts of face validity and item subtlety. Trait categories were differentially accessible to individual judges. Higher criterion validities were associated with less subtle and more face-valid items. Results support a rational approach to test construction and emphasize the use of relevant test item content.…
Descriptors: College Students, Concept Formation, Evaluators, Factor Structure
Burton, Nancy W. – Journal of Educational Measurement, 1980 (peer reviewed)
Analysis of variance methods were used to investigate the reliability of scores on open ended items in the National Assessment of Educational Progress. The study was designed to determine their stability over seven different scorers and time of scoring during a three-month interval. (Author/CTM)
Aspect of National Assessment (NAEP) dealt with in…
Descriptors: Career Development, Educational Assessment, Elementary Secondary Education, Item Analysis
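The analysis-of-variance approach to scorer reliability mentioned in Burton's abstract is usually reported as an intraclass correlation computed from the variance components of an examinee-by-scorer design. The sketch below computes a single-rater ICC(2,1) for a small invented score matrix; it illustrates the general ANOVA technique, not the NAEP analysis itself.

```python
# Minimal sketch: two-way ANOVA variance components and a single-rater
# intraclass correlation, ICC(2,1), for an examinee-by-scorer matrix.
# The score matrix is invented; this is a generic illustration of the
# ANOVA approach, not Burton's NAEP analysis.
import numpy as np

# rows = examinees, columns = scorers (open-ended item scores 0-4)
scores = np.array([
    [3, 3, 2, 3],
    [1, 2, 1, 1],
    [4, 4, 3, 4],
    [2, 2, 2, 3],
    [0, 1, 0, 1],
], dtype=float)

n, k = scores.shape
grand = scores.mean()
ss_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2)   # examinees
ss_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2)   # scorers
ss_total = np.sum((scores - grand) ** 2)
ss_err = ss_total - ss_rows - ss_cols

ms_rows = ss_rows / (n - 1)
ms_cols = ss_cols / (k - 1)
ms_err = ss_err / ((n - 1) * (k - 1))

# Shrout & Fleiss ICC(2,1): two-way random effects, single rater.
icc_2_1 = (ms_rows - ms_err) / (
    ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
)
print(f"ICC(2,1) = {icc_2_1:.3f}")
```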
Gross, Ruth; And Others – Journal of Consulting and Clinical Psychology, 1979 (peer reviewed)
The Bem Sex-Role Inventory showed more promise for independent measurement of constructs related to masculine-feminine identity, but lacked purity. Low scorers on Bem masculinity were penalized because of items related to maturity. (Author/BEF)
Descriptors: Adults, Behavior Rating Scales, Comparative Analysis, Factor Structure
Bagley, Christopher; Mallick, Kanka – Educational Review, 1978 (peer reviewed)
The purpose of this study is to check on the internal reliability of the Piers-Harris Self-Concept Scale with a British population aged 9 to 12 and, by means of a principal components analysis, to construct a short form of the scale valid for both sexes. (Author)
Descriptors: Elementary Education, Elementary School Students, Factor Analysis, Item Analysis
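The two ingredients named in the Bagley and Mallick abstract, internal reliability and a components-based short form, can both be illustrated with textbook formulas: coefficient alpha computed from item and total-score variances, and retention of the items loading most strongly on the first principal component. The sketch below uses simulated data and an arbitrary cutoff; it is not the authors' analysis of the Piers-Harris scale.

```python
# Minimal sketch: coefficient alpha plus a first-principal-component
# criterion for picking short-form items. Data are simulated; the
# cutoff and names are illustrative, not Bagley and Mallick's analysis.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of 0/1 or Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 1))                          # common factor
data = ((latent + rng.normal(size=(300, 12))) > 0).astype(float)  # 12 binary items

print(f"alpha (full scale) = {cronbach_alpha(data):.3f}")

# First principal component of the item correlation matrix.
corr = np.corrcoef(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
first_pc = eigvecs[:, -1]                   # eigenvector of largest eigenvalue
keep = np.argsort(np.abs(first_pc))[-6:]    # 6 strongest-loading items
print(f"short-form items: {sorted(keep.tolist())}")
print(f"alpha (short form) = {cronbach_alpha(data[:, keep]):.3f}")
```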
Frisbie, David A.; Brandenburg, Dale C. – Journal of Educational Measurement, 1979 (peer reviewed)
Content-parallel questionnaire items in which response schemes varied in one of two ways--scale alternatives were all defined or only endpoints were defined, and alternatives were numbered or lettered--were investigated on a large sample of college freshmen. (Author/JKS)
Descriptors: Higher Education, Item Analysis, Questionnaires, Rating Scales
Gynther, Malcolm D.; Witt, Philip H. – Journal of Clinical Psychology, 1976 (peer reviewed)
Compares highly educated professional blacks with blacks whose education and occupational skills are more representative of blacks-in-general to see whether the former have distinctive personality characteristics compatible with marginality. (Author/RK)
Descriptors: Black Teachers, Hypothesis Testing, Individual Characteristics, Item Analysis
Moore, Thomas L.; McLean, James E. – Measurement and Evaluation in Guidance, 1977
Based upon the item analysis, reliability, and factorial validity results of this study, there is reason to question the applicability of the Career Maturity Inventory (CMI) Attitude Scale to a college population. In particular, students with self-expressed and measured levels of vocational maturity significantly differ from the norming…
Descriptors: Career Choice, Career Development, College Students, Interest Inventories
Sax, Gilbert – Educational and Psychological Measurement, 1996 (peer reviewed)
Based on various Latin square and incomplete Latin square arrangements, the Fields test formats provide a novel way of presenting tests to students on machine-scorable answer sheets that can be item analyzed. Items can be constructed to help students acquire knowledge or to measure the attainment of course objectives. (SLD)
Descriptors: Answer Sheets, Item Analysis, Measures (Individuals), Scoring
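"Item analyzed" in the Sax abstract typically means, at a minimum, computing an item difficulty (proportion correct) and a discrimination index such as the corrected point-biserial correlation between each item and the rest of the test. The sketch below runs that classical item analysis on simulated scored responses; the data and scoring scheme are assumptions and are not tied to the Fields formats specifically.

```python
# Minimal sketch: classical item analysis -- difficulty (p-value) and
# corrected point-biserial discrimination for each item. The scored
# response matrix is simulated, not taken from any Fields-format test.
import numpy as np

rng = np.random.default_rng(1)
ability = rng.normal(size=200)
# 10 items of varying difficulty, scored 1 = correct, 0 = incorrect.
difficulty = np.linspace(-1.5, 1.5, 10)
scored = (ability[:, None] + rng.normal(size=(200, 10)) > difficulty).astype(float)

total = scored.sum(axis=1)
for j in range(scored.shape[1]):
    p = scored[:, j].mean()                       # difficulty (proportion correct)
    rest = total - scored[:, j]                   # total score excluding this item
    r_pb = np.corrcoef(scored[:, j], rest)[0, 1]  # corrected point-biserial
    print(f"item {j + 1:2d}: p = {p:.2f}, r_pb = {r_pb:.2f}")
```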
Prien, Borge – Studies in Educational Evaluation, 1989 (peer reviewed)
Under certain conditions it may be possible to determine the difficulty of previously untested test items. Although no recipe can be provided, reflections on this topic are presented, drawing on concepts of item banking. A functional constructive method is suggested as having the most potential. (SLD)
Descriptors: Difficulty Level, Educational Assessment, Foreign Countries, Item Analysis


