Peer reviewed
Whitely, Susan E. – Educational and Psychological Measurement, 1977
The verbal analogy item as a measure of intelligence is investigated. Using latent partition analysis, this study attempts to identify a semantic structure of relationships that individuals use to comprehend completed analogies. The implications for test construction and test validity are discussed. (Author/JKS)
Descriptors: Cognitive Processes, Higher Education, Intelligence, Intelligence Tests
Peer reviewed
Scherich, Henry H.; Hanna, Gerald S. – Educational and Psychological Measurement, 1977
The reading comprehension items for a revision of the Nelson Reading Skills Test were administered to several hundred fourth and sixth-grade pupils in order to determine the passage dependency of each item. The passage dependency index was used to locate weak items. (Author/JKS)
Descriptors: Context Clues, Elementary School Students, Intermediate Grades, Item Analysis
Peer reviewed
Tamir, Pinchas – Journal of Experimental Education, 1978
This study examines the claim that cognitive preferences in science are no more than expressions of levels of cognitive operation as described by Bloom's Taxonomy. 667 twelfth grade students of chemistry and 989 twelfth grade biology students took a cognitive preference and an achievement test in their respective disciplines. The limitations of…
Descriptors: Achievement Tests, Cognitive Ability, Cognitive Processes, Cognitive Style
Peer reviewed
Green, Samuel B.; Halpin, Gerald – Research in Higher Education, 1977
Students rated the quality of the items on a classroom test taken previously, and psychometric indices were calculated. Results showed that student ratings were related to item difficulty and that better students rated items as less ambiguous. Ambiguity ratings were more highly related to the item-test correlations for better students. (Author/LBH)
Descriptors: Course Evaluation, High Achievement, Higher Education, Item Analysis
Peer reviewed
McMorris, Robert F.; And Others – Journal of Educational Measurement, 1987
Consistency of gain from changing test answers was tested for students instructed about answer-changing research results, and composition of the gain was analyzed by examining the students' reasons for changing. Mean gain remained positive and consistent with gain for previously studied uninstructed groups; amount of change was also stable.…
Descriptors: Difficulty Level, Graduate Students, Higher Education, Instruction
Peer reviewed
Benson, Jeri – Educational and Psychological Measurement, 1987
This study demonstrated the use of confirmatory factor analysis to investigate the existence of item bias in an affective scale measuring self-concept. Data were collected from three ethnic groups in grade eight. A differential response pattern found across the three groups indicated potential bias in score interpretation. (Author/LMO)
Descriptors: Factor Analysis, Junior High Schools, Racial Differences, Scaling
Sarvela, Paul D.; Noonan, John V. – Educational Technology, 1988
Describes measurement problems associated with computer based testing (CBT) programs when they are part of a computer assisted instruction curriculum. Topics discussed include CBT standards; selection of item types; the contamination of items that arise from test design strategies; and the non-equivalence of comparison groups in item analyses. (8…
Descriptors: Computer Assisted Instruction, Computer Assisted Testing, Item Analysis, Psychometrics
Peer reviewed
Gitomer, Drew H.; And Others – Journal of Educational Psychology, 1987
Processing of verbal analogies was evaluated by recording eye fixation patterns during solution of problems that represented a broad range of difficulty. Findings on easier problems replicated previous work. On difficult items, high verbal ability individuals adapted processing strategies to a greater extent than did low ability students.…
Descriptors: Analogy, Difficulty Level, Eye Fixations, Higher Education
Peer reviewed
Ramirez, Paul Michael – Reading Teacher, 1987
Reviews the criterion referenced test VICTA and concludes that it should be used with caution because of, among other things, the limited sample of questions per critical thinking skill, and the lack of a well integrated instructional program based on the test. (JC)
Descriptors: Cognitive Development, Criterion Referenced Tests, Critical Thinking, Elementary Secondary Education
Peer reviewed
Pattison, Philippa; Grieve, Norma – Journal of Educational Psychology, 1984
A battery of spatial, linguistic, and mathematical tests was administered to tenth- and twelfth-grade students to examine the relation between sex differences on particular spatial tests and sex differences on particular mathematical problems. The sex difference magnitude was not diminished by taking spatial and linguistic scores into account.…
Descriptors: Achievement Tests, High Schools, Language Skills, Mathematics Achievement
Peer reviewed
Hanna, Gerald S.; Bennett, Judith A. – Educational and Psychological Measurement, 1984
The presently viewed role and utility of measures of instructional sensitivity are summarized. A case is made that the rationale for the assessment of instructional sensitivity can be applied to all achievement tests and should not be restricted to criterion-referenced mastery tests. (Author/BW)
Descriptors: Achievement Tests, Context Effect, Criterion Referenced Tests, Mastery Tests
Peer reviewed
Plake, Barbara S.; Huntley, Renee M. – Educational and Psychological Measurement, 1984
Two studies examined the effect of making the correct answer of a multiple choice test item grammatically consistent with the item. American College Testing Assessment experimental items were constructed to investigate grammatical compliance for plural-singular and vowel-consonant agreement. Results suggest…
Descriptors: Grammar, Higher Education, Item Analysis, Multiple Choice Tests
Peer reviewed
Griffiths, H. B.; McLone, R. R. – Educational Studies in Mathematics, 1984
Discusses what areas of ability are being assessed in timed, closed-book examinations and to what extent the procedure might foster possible educational objectives. Also describes a list of 10 "qualities" by which a mathematics question can be graded to see what skills are needed for its solution. (Author/JN)
Descriptors: College Mathematics, Evaluation Criteria, Evaluation Methods, Grading
Bailey, Alison L.; Stevens, Robin; Butler, Frances A.; Huang, Becky; Miyoshi, Judy N. – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2005
The work we report focuses on utilizing linguistic profiles of mathematics, science and social studies textbook selections for the creation of reading test specifications. Once we determined that a text and associated tasks fit within the parameters established in Butler et al. (2004), they underwent both internal and external review by language…
Descriptors: Test Items, Profiles, Linguistics, Reading Tests
Arizona Department of Education, 2006
Arizona's Instrument to Measure Standards Dual Purpose Assessment (AIMS DPA) is a combination of two separate tests. One test is AIMS, which measures how well the student knows the reading, writing, and mathematics content that all Arizona students at the third grade level are expected to know and be able to do. AIMS includes multiple-choice…
Descriptors: Grade 3, Test Items, Scoring, Academic Standards