Showing 3,601 to 3,615 of 5,170 results
Elton, Charles F.; Shevel, Linda R. – ACT Res Rep, 1969
Descriptors: Academic Ability, Academic Achievement, Extracurricular Activities, Grouping (Instructional Purposes)
Peer reviewed
Veale, James R.; Foreman, Dale I. – Journal of Educational Measurement, 1983
Statistical procedures for measuring heterogeneity of test item distractor distributions, or cultural variation, are presented. These procedures are based on the notion that examinees' responses to the incorrect options of a multiple-choice test provide more information concerning cultural bias than their correct responses. (Author/PN)
Descriptors: Ethnic Bias, Item Analysis, Mathematical Models, Multiple Choice Tests
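The core idea, that wrong-answer choices carry the bias signal, can be made concrete with a small sketch. The authors' actual statistics are not reproduced here; a plain chi-square test of homogeneity on the distractor counts of two groups, restricted to examinees who missed the item, stands in for them, and all counts are invented.

```python
# Sketch of the idea behind distractor-distribution analysis: among examinees
# who answered an item INCORRECTLY, compare how the distractor choices are
# distributed across two cultural groups. The specific statistics in Veale &
# Foreman (1983) are not reproduced here; a plain chi-square test of
# homogeneity stands in for them. Counts below are invented for illustration.
from scipy.stats import chi2_contingency

# Rows: group A, group B; columns: counts choosing distractors B, C, D
# (the keyed answer A is excluded -- only wrong responses are analyzed).
distractor_counts = [
    [40, 25, 10],   # group A
    [15, 30, 35],   # group B
]

chi2, p, dof, expected = chi2_contingency(distractor_counts)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
# A significant chi2 suggests the groups are drawn to different wrong
# answers, which is the kind of heterogeneity the authors treat as a
# signal of potential cultural bias.
```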
Peer reviewed
Piotrowski, Chris – Educational and Psychological Measurement, 1983
The present study compared the factor structures obtained when the same data were factor analyzed: (1) for the total sample (463 fifth grade students) and by sex for collapsed concepts, and (2) by individual concept for the total sample by sex, and for internal and external locus of control groups. (Author/PN)
Descriptors: Factor Analysis, Factor Structure, Individual Differences, Intermediate Grades
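The abstract does not say how the subgroup factor structures were compared. One conventional index for that kind of comparison is Tucker's congruence coefficient; the sketch below uses it purely as an illustration, with invented loadings, and should not be read as the study's actual method.

```python
# One common way to compare factor structures across subgroups (e.g., by sex)
# is Tucker's congruence coefficient between corresponding loading vectors.
# The abstract does not state that Piotrowski used this index; it is shown
# only to make "comparing factor structures" concrete. Loadings are invented.
import numpy as np

def tucker_phi(a, b):
    """Congruence between two factor-loading vectors (1.0 = identical shape)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

loadings_boys  = [0.71, 0.65, 0.58, 0.12, 0.09]
loadings_girls = [0.68, 0.70, 0.55, 0.20, 0.05]
print(f"phi = {tucker_phi(loadings_boys, loadings_girls):.3f}")
# Values near 1.0 are usually read as the "same" factor in both groups.
```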
Peer reviewed
Green, Kathy E. – Educational and Psychological Measurement, 1983
This study was concerned with the reliability and validity of subjective judgments about five characteristics of multiple-choice test items from an introductory college-level astronomy test: (1) item difficulty, (2) language complexity, (3) content importance or relevance, (4) response set convergence, and (5) process complexity. (Author)
Descriptors: Achievement Tests, Astronomy, Difficulty Level, Evaluative Thinking
Peer reviewed
Holden, Ronald R.; Jackson, Douglas N. – Journal of Consulting and Clinical Psychology, 1979
Presented a distinction between concepts of face validity and item subtlety. Trait categories were differentially accessible to individual judges. Higher criterion validities were associated with less subtle and more face-valid items. Results support a rational approach to test construction and emphasize the use of relevant test item content.…
Descriptors: College Students, Concept Formation, Evaluators, Factor Structure
Peer reviewed
Burton, Nancy W. – Journal of Educational Measurement, 1980
Analysis of variance methods were used to investigate the reliability of scores on open-ended items in the National Assessment of Educational Progress. The study was designed to determine their stability over seven different scorers and time of scoring during a three-month interval. (Author/CTM) Aspect of National Assessment (NAEP) dealt with in…
Descriptors: Career Development, Educational Assessment, Elementary Secondary Education, Item Analysis
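For readers unfamiliar with the ANOVA approach to scorer reliability, a minimal sketch follows. It computes a standard two-way intraclass correlation, ICC(2,1), from a responses-by-scorers score table; NAEP's actual design (seven scorers, scoring occasions spread over three months) was more elaborate, and the data here are invented.

```python
# Sketch of the ANOVA approach to scorer reliability: responses scored by
# several scorers are laid out as a (responses x scorers) table, mean squares
# are taken from a two-way ANOVA, and an intraclass correlation is formed.
# This is the generic ICC(2,1) recipe, not NAEP's exact design; data invented.
import numpy as np

scores = np.array([          # rows = responses, cols = scorers
    [4, 4, 5], [2, 3, 2], [5, 5, 5], [3, 2, 3], [4, 3, 4],
], dtype=float)

n, k = scores.shape
grand = scores.mean()
ms_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2) / (n - 1)
ms_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2) / (k - 1)
resid = scores - scores.mean(axis=1, keepdims=True) \
               - scores.mean(axis=0, keepdims=True) + grand
ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))

icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                            + k * (ms_cols - ms_err) / n)
print(f"ICC(2,1) = {icc:.3f}")   # agreement of a single scorer
```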
Peer reviewed
Gross, Ruth; And Others – Journal of Consulting and Clinical Psychology, 1979
The Bem Sex-Role Inventory showed more promise for independent measurement of constructs related to masculine-feminine identity, but lacked purity. Low scorers on Bem masculinity were penalized because of items related to maturity. (Author/BEF)
Descriptors: Adults, Behavior Rating Scales, Comparative Analysis, Factor Structure
Peer reviewed
Bagley, Christopher; Mallick, Kanka – Educational Review, 1978
The purpose of this study is to check on the internal reliability of the Piers-Harris Self-Concept Scale with a British population aged 9 to 12 and, by means of a principal components analysis, to construct a short form of the scale valid for both sexes. (Author)
Descriptors: Elementary Education, Elementary School Students, Factor Analysis, Item Analysis
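A generic version of the short-form construction step can be sketched: extract the principal components of the item correlation matrix and retain the items loading most strongly on the first component. This is an assumption-laden stand-in for the authors' procedure, run on placeholder data.

```python
# Sketch of using principal components to shorten a scale: items loading
# most strongly on the first component of the item correlation matrix are
# retained. This is a generic recipe, not the authors' exact procedure;
# the response matrix is random placeholder data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 20)).astype(float)  # 200 children x 20 items

R = np.corrcoef(X, rowvar=False)            # item intercorrelations
eigvals, eigvecs = np.linalg.eigh(R)        # eigenvalues in ascending order
first_pc = eigvecs[:, -1]                   # loadings on the largest component
keep = np.argsort(np.abs(first_pc))[-10:]   # the 10 best-loading items
print("short-form items:", sorted(keep.tolist()))
```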
Peer reviewed
Frisbie, David A.; Brandenburg, Dale C. – Journal of Educational Measurement, 1979
Content-parallel questionnaire items in which response schemes varied in one of two ways--scale alternatives were all defined or only endpoints were defined, and alternatives were numbered or lettered--were investigated on a large sample of college freshmen. (Author/JKS)
Descriptors: Higher Education, Item Analysis, Questionnaires, Rating Scales
Peer reviewed
Gynther, Malcolm D.; Witt, Philip H. – Journal of Clinical Psychology, 1976
Compares highly educated professional blacks with blacks whose education and occupational skills are more representative of blacks in general, to see whether the former have distinctive personality characteristics compatible with marginality. (Author/RK)
Descriptors: Black Teachers, Hypothesis Testing, Individual Characteristics, Item Analysis
Moore, Thomas L.; McLean, James E. – Measurement and Evaluation in Guidance, 1977
Based upon the item analysis, reliability, and factorial validity results of this study, there is reason to question the applicability of the Career Maturity Inventory (CMI) Attitude Scale to a college population. In particular, students with self-expressed and measured levels of vocational maturity significantly differ from the norming…
Descriptors: Career Choice, Career Development, College Students, Interest Inventories
Peer reviewed
Sax, Gilbert – Educational and Psychological Measurement, 1996
Using various Latin square and incomplete Latin square designs, the Fields test formats provide a novel way of presenting tests to students on machine-scorable answer sheets that can be item-analyzed. Items can be constructed to help students acquire knowledge or to measure the attainment of course objectives. (SLD)
Descriptors: Answer Sheets, Item Analysis, Measures (Individuals), Scoring
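The balancing property that makes such formats item-analyzable is easy to illustrate. The sketch below builds a simple cyclic Latin square in which every item block occupies every serial position exactly once across forms; the actual Fields formats, including the incomplete variants, are richer than this.

```python
# Sketch of the Latin-square idea behind the Fields formats: each test form
# presents the same item blocks in a different order, so that every block
# appears in every position exactly once across forms. A simple cyclic
# Latin square illustrates this; the actual Fields formats are richer.
def cyclic_latin_square(n):
    """Row i is the block sequence for form i (blocks labeled 0..n-1)."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

for form, order in enumerate(cyclic_latin_square(4)):
    print(f"form {form}: block order {order}")
# Because positions are balanced across forms, item statistics can be
# examined free of serial-position effects.
```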
Peer reviewed
Prien, Borge – Studies in Educational Evaluation, 1989
Under certain conditions it may be possible to determine the difficulty of previously untested test items. Although no recipe can be provided, reflections on this topic are presented, drawing on concepts of item banking. A functional constructive method is suggested as having the most potential. (SLD)
Descriptors: Difficulty Level, Educational Assessment, Foreign Countries, Item Analysis
Peer reviewed
Green, Bert F.; And Others – Journal of Educational Measurement, 1989
A method of analyzing test item responses is advocated to examine differential item functioning through distractor choices of those answering an item incorrectly. The analysis uses log-linear models of a three-way contingency table, and is illustrated in an analysis of the verbal portion of the Scholastic Aptitude Test. (TJH)
Descriptors: College Entrance Examinations, Distractors (Tests), Evaluation Methods, High School Students
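The article fits log-linear models to a group-by-ability-by-distractor contingency table. As a simplified stand-in (not the authors' model), the sketch below tests the group-by-distractor association separately within invented ability strata and pools the chi-square statistics.

```python
# The analysis conditions distractor choice on both group membership and an
# ability stratum (a three-way table). As a simplified stand-in for the
# log-linear modeling in the article, each ability stratum is tested for
# group x distractor association and the chi-square statistics are pooled.
# Counts are invented for illustration.
from scipy.stats import chi2, chi2_contingency

# strata[level] = rows: groups; cols: distractor choices (wrong answers only)
strata = [
    [[30, 20, 10], [12, 25, 28]],   # low-ability examinees
    [[22, 15,  8], [10, 18, 20]],   # middle
    [[12,  9,  4], [ 6, 10, 11]],   # high
]

total_chi2 = total_df = 0
for table in strata:
    stat, _, dof, _ = chi2_contingency(table)
    total_chi2, total_df = total_chi2 + stat, total_df + dof

p = chi2.sf(total_chi2, total_df)
print(f"pooled chi2 = {total_chi2:.2f}, df = {total_df}, p = {p:.4f}")
# A significant pooled statistic suggests that examinees of comparable
# ability favor different wrong answers across groups -- the distractor-level
# differential item functioning the authors examine.
```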
Peer reviewed
Henning, Grant – Language Testing, 1988
Violations of item unidimensionality on language tests produced distorted estimates of person ability, and violations of person unidimensionality produced distorted estimates of item difficulty. The Bejar Method was sensitive to such distortions. (Author)
Descriptors: Construct Validity, Content Validity, Difficulty Level, Item Analysis