Showing 1,591 to 1,605 of 3,093 results
Peer reviewed
Courtenay, Bradley C.; Weidman, Craig – Gerontologist, 1985
Undergraduates (N=141) completed different versions of Palmore's Facts on Aging (FAQ) quizzes to test effects of "don't know" (DK) answers. Findings suggest the DK option yields more accurate knowledge scores, eliminates guessing, enhances the use of FAQ as a research instrument and for pre/post evaluation of training in aging.…
Descriptors: Aging (Individuals), College Students, Educational Gerontology, Guessing (Tests)
Peer reviewed
Chavez-Oller, Mary Anne; And Others – Language Learning, 1985
Considers whether scores on cloze items are generally sensitive to amounts of context in excess of 10 words on either side of them and, if not, when they are sensitive to long-range constraints. Concludes that some are sensitive to constraints that reach beyond 50 words on either side of a blank. (SED)
Descriptors: Cloze Procedure, Context Clues, Language Research, Language Tests
Peer reviewed
Pratt, C.; Hacker, R. G. – Educational and Psychological Measurement, 1984
A unidimensional latent trait model was used to test a single-factor hypothesis of the Lawson Classroom Test of Formal Reasoning. The test failed to provide a valid measure of formal reasoning. This was a result of test format which neglected aspects of formal reasoning emphasized by Inhelder and Piaget. (Author/DWH)
Descriptors: Cognitive Processes, Group Testing, Higher Education, Latent Trait Theory
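The Pratt and Hacker entry above reports fitting a unidimensional latent trait model to test a single-factor hypothesis for the Lawson Classroom Test of Formal Reasoning. The abstract does not specify the model or estimation method, so the following is only a minimal sketch of the simplest unidimensional model of this kind, the one-parameter (Rasch) item response function, with hypothetical ability and difficulty values.

```python
import numpy as np

def rasch_probability(theta, b):
    """One-parameter (Rasch) item response function.

    Probability that a person with ability `theta` answers an item of
    difficulty `b` correctly: P = exp(theta - b) / (1 + exp(theta - b)).
    """
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Illustrative values only (not taken from the study):
abilities = np.array([-1.0, 0.0, 1.5])   # hypothetical person abilities
difficulty = 0.5                          # hypothetical item difficulty
print(rasch_probability(abilities, difficulty))
```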
Peer reviewed
Spurgin, C. B. – Physics Education, 1985
Discusses issues related to examination questions which begin by asking students to "Describe an experiment to..." Indicates that this strategy is useful when focusing on important quantities/phenomena or "celebrated" experiments and that examining boards should not request students to describe experiments which verify or…
Descriptors: Physics, Science Education, Science Experiments, Science Tests
Peer reviewed
Bieliauskas, Vytautas J.; Farragher, John – Journal of Clinical Psychology, 1983
Administered the House-Tree-Person test to male college students (N=24) to examine the effects of varying the size of the drawing form on the scores. Results suggested that use of the drawing sheet did not have a significant influence upon the quantitative aspects of the drawing. (LLL)
Descriptors: College Students, Higher Education, Intelligence Tests, Males
Peer reviewed
Katz, Barry M.; McSweeney, Maryellen – Journal of Experimental Education, 1984
This paper developed and illustrated a technique to analyze categorical data when subjects can appear in any number of categories for multigroup designs. Post hoc procedures to be used in conjunction with the presented statistical test are also developed. The technique is a large sample technique whose small sample properties are as yet unknown.…
Descriptors: Data Analysis, Hypothesis Testing, Mathematical Models, Research Methodology
Peer reviewed
Sanjivamurthy, P.T.; Kumar, V.K. – Contemporary Educational Psychology, 1983
After six weeks of testing college algebra students (n=84) either on recall or recognition tests, the test modes were changed without warning. Results showed that performance suffered when the test mode was changed for students anticipating a recognition test. Students anticipating a recall test did equally well in both test modes. (Author/PN)
Descriptors: Algebra, Higher Education, Long Term Memory, Recall (Psychology)
Peer reviewed
Kiewra, Kenneth A. – Contemporary Educational Psychology, 1983
No differences in immediate recognition performance were found for 30 undergraduate students who reorganized notes into an instructor-generated matrix versus subjects who reviewed in their typical manner. Reorganization during review resulted in relatively higher achievement on a free recall test, while unstructured review produced higher…
Descriptors: Cues, Encoding (Psychology), Higher Education, Notetaking
Swygert, Kimberly A. – 2003
In this study, data from an operational computerized adaptive test (CAT) were examined in order to gather information concerning item response times in a CAT environment. The CAT under study included multiple-choice items measuring verbal, quantitative, and analytical reasoning. The analyses included the fitting of regression models describing the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Response Theory, Participant Characteristics
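The Swygert entry above mentions fitting regression models to describe item response times in a computerized adaptive test. The truncated abstract does not specify the predictors or model form, so the sketch below is only an illustration of a common approach: regressing log response time on item characteristics, here assuming hypothetical predictors (item position in the CAT and item word count) and simulated data.

```python
import numpy as np

# Hypothetical data: log response time regressed on item position and
# item word count via ordinary least squares.
rng = np.random.default_rng(0)
n_items = 200
position = rng.integers(1, 31, n_items)       # item position in the CAT
word_count = rng.integers(10, 120, n_items)   # item length in words
log_rt = 2.0 + 0.01 * position + 0.005 * word_count + rng.normal(0, 0.3, n_items)

X = np.column_stack([np.ones(n_items), position, word_count])
beta, *_ = np.linalg.lstsq(X, log_rt, rcond=None)
print("intercept, position, word_count coefficients:", beta)
```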
DeVito, Pasquale J., Ed.; Koenig, Judith A., Ed. – 2001
A committee of the National Research Council studied the desirability, feasibility, and potential impact of two reporting practices for National Assessment of Educational Progress (NAEP) results: district-level reporting and market-basket reporting. NAEP's sponsors believe that reporting district-level NAEP results would support state and local…
Descriptors: Elementary Secondary Education, Research Methodology, Research Reports, School Districts
Tobias, Sheila; Raphael, Jacqueline – 1997
This volume, part two of "The Hidden Curriculum," is premised on the belief that testing practices influence educational procedures and learning outcomes. Graduate level science educators shared their assessment techniques in terms of the following categories: (1) exam design; (2) exam format; (3) exam environment; and (4) grading practices.…
Descriptors: College Science, Educational Change, Evaluation, Higher Education
Woldbeck, Tanya – 1998
This paper summarizes some of the basic concepts in test equating. Various types of equating methods, as well as data collection designs, are outlined, with attempts to provide insight into preferred methods and techniques. Test equating describes a group of methods that enable test constructors and users to compare scores from two different forms…
Descriptors: Comparative Analysis, Data Collection, Difficulty Level, Equated Scores
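The Woldbeck summary above surveys test-equating methods and data-collection designs without detailing any one of them. One of the simplest members of that family is linear equating under a random-groups design, in which a form-X score x maps to l(x) = mean_Y + (sd_Y / sd_X) * (x - mean_X). The sketch below shows only that textbook formula, with hypothetical score data; it is not drawn from the paper itself.

```python
import numpy as np

def linear_equate(x, scores_x, scores_y):
    """Linear equating of a form-X score onto the form-Y scale.

    l(x) = mean_Y + (sd_Y / sd_X) * (x - mean_X)
    Assumes a random-groups design with comparable examinee groups.
    """
    mx, my = np.mean(scores_x), np.mean(scores_y)
    sx, sy = np.std(scores_x, ddof=1), np.std(scores_y, ddof=1)
    return my + (sy / sx) * (x - mx)

# Hypothetical score distributions for two test forms:
form_x = np.array([18, 22, 25, 27, 30, 33, 35])
form_y = np.array([20, 24, 26, 29, 31, 34, 38])
print(linear_equate(27, form_x, form_y))
```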
Schulz, E. Matthew; Wang, Lin – 2001
In this study, items were drawn from a full-length test of 30 items in order to construct shorter tests for the purpose of making accurate pass/fail classifications with regard to a specific criterion point on the latent ability metric. A three-parameter Item Response Theory (IRT) framework was used. The criterion point on the latent ability…
Descriptors: Ability, Classification, Item Response Theory, Pass Fail Grading
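The Schulz and Wang entry describes selecting items from a 30-item test to support pass/fail classification at a criterion point on the latent ability metric within a three-parameter IRT framework. As a hedged illustration only (the parameter values and selection rule below are hypothetical, not taken from the study), the three-parameter logistic (3PL) item response function and its Fisher information at a cut score show why items are commonly chosen to maximize information near the classification point.

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """Three-parameter logistic (3PL) item response function."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def info_3pl(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = p_3pl(theta, a, b, c)
    return (a ** 2) * ((1.0 - p) / p) * ((p - c) / (1.0 - c)) ** 2

# Hypothetical item bank and cut score (illustrative values only):
a = np.array([1.2, 0.8, 1.5, 1.0])     # discrimination
b = np.array([-0.5, 0.0, 0.4, 1.1])    # difficulty
c = np.array([0.20, 0.25, 0.15, 0.20]) # pseudo-guessing
theta_cut = 0.3                        # criterion point on the ability metric

info = info_3pl(theta_cut, a, b, c)
print("items ranked by information at the cut score:", np.argsort(info)[::-1])
```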
Kame'enui, Edward; Simmons, Deborah; Cornachione, Cheri – 2001
This guide is designed to provide teachers and reading tutors with an easy-to-use and practical guide to selecting and using reading assessment tools that (1) provides descriptions of reading assessments for English and Spanish speaking students that can be used to diagnose and identify their reading skills and abilities; (2) helps teachers find…
Descriptors: Elementary Education, English, Reading Tests, Spanish Speaking
Cole, Rebecca Pollard; MacIsaac, Dan; Cole, David M. – 2001
The purpose of this study (1,313 college student participants) was to examine the differences in paper-based and Web-based administrations of a commonly used assessment instrument, the Force Concept Inventory (FCI) (D. Hestenes, M. Wells, and G. Swackhamer, 1992). Results demonstrated no appreciable difference on FCI scores or FCI items based on…
Descriptors: College Students, Computer Assisted Testing, Higher Education, Physics