Osborne, Jason W.; Waters, Elaine – 2002
This Digest presents a discussion of the assumptions of multiple regression that is tailored to the practicing researcher. The focus is on the assumptions of multiple regression that are not robust to violation, and that researchers can deal with if violated. Assumptions of normality, linearity, reliability of measurement, and homoscedasticity are…
Descriptors: Error of Measurement, Nonparametric Statistics, Regression (Statistics), Reliability
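The assumption checks this Digest emphasizes, normality of residuals and homoscedasticity, can be illustrated with a short diagnostic sketch. The simulated data, sample size, and the particular tests used (Shapiro-Wilk, Breusch-Pagan) are assumptions made for illustration, not the Digest's own procedure.

# Illustrative check of two multiple regression assumptions on simulated data.
import numpy as np
from scipy import stats
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 + 1.5 * x1 - 0.8 * x2 + rng.normal(scale=1.0, size=n)

X = sm.add_constant(np.column_stack([x1, x2]))
model = sm.OLS(y, X).fit()
residuals = model.resid

# Normality of residuals: Shapiro-Wilk (small p suggests non-normality).
w_stat, w_p = stats.shapiro(residuals)
print(f"Shapiro-Wilk: W = {w_stat:.3f}, p = {w_p:.3f}")

# Homoscedasticity: Breusch-Pagan (small p suggests heteroscedasticity).
bp_stat, bp_p, _, _ = het_breuschpagan(residuals, X)
print(f"Breusch-Pagan: LM = {bp_stat:.3f}, p = {bp_p:.3f}")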
Dayton, C. Mitchell – 2002
This Digest, intended as an instructional aid for beginning research students and a refresher for researchers in the field, identifies key factors that play a critical role in determining the credibility that should be given to a specific research study. The needs for empirical research, randomization and control, and significance testing are…
Descriptors: Credibility, Data Analysis, Reliability, Research
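Two of the factors this Digest names, random assignment and significance testing, can be sketched briefly. The group sizes, score distributions, and the assumed 5-point treatment effect below are invented for the example; the t test is simply one common significance test, not the Digest's prescription.

# Minimal sketch: random assignment to conditions, then a significance test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 60
# Random assignment: shuffle participant indices and split into two groups.
ids = rng.permutation(n)
treatment_ids, control_ids = ids[:n // 2], ids[n // 2:]

# Simulated outcome scores; the 5-point treatment effect is an assumption.
scores = rng.normal(loc=70, scale=10, size=n)
scores[treatment_ids] += 5

# Independent-samples t test on the two groups.
t_stat, p_value = stats.ttest_ind(scores[treatment_ids], scores[control_ids])
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")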
Childs, Ruth A.; Jaciw, Andrew P. – 2003
Matrix sampling of test items, the division of a set of items into different versions of a test form, is used by several large-scale testing programs. This Digest discusses nine categories of costs associated with matrix sampling. These categories are: (1) development costs; (2) materials costs; (3) administration costs; (4) educational costs; (5)…
Descriptors: Costs, Matrices, Reliability, Sampling
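The item-allocation step that matrix sampling rests on, dividing one item pool across several shorter forms, can be sketched in a few lines. The pool size, number of forms, and simple round-robin allocation below are illustrative assumptions, not any particular testing program's design.

# Rough sketch of dividing an item pool into multiple test forms.
import random

def matrix_sample(items, n_forms, seed=0):
    """Randomly divide an item pool into n_forms roughly equal test forms."""
    rng = random.Random(seed)
    pool = list(items)
    rng.shuffle(pool)
    # Deal items round-robin so each item appears on exactly one form.
    return [pool[i::n_forms] for i in range(n_forms)]

item_pool = [f"item_{i:02d}" for i in range(1, 31)]
for i, form in enumerate(matrix_sample(item_pool, n_forms=3), start=1):
    print(f"Form {i}: {len(form)} items -> {form[:4]} ...")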
Smith, Carl B., Ed. – 2003
This topical bibliography and commentary addresses methods of alternative, or informal, assessment that have proven effective, and how these methods have been, or can be, established as reliable alternatives to standardized testing. It discusses authentic assessment, forms of alternative assessment (unstructured written activities, unstructured…
Descriptors: Alternative Assessment, Elementary Secondary Education, Evaluation Methods, Reliability
Brualdi, Amy – 1999
Test validity refers to the degree to which the inferences based on test scores are meaningful, useful, and appropriate. Thus, test validity is a characteristic of a test when it is administered to a particular population. This article introduces the modern concepts of validity advanced by S. Messick (1989, 1996). Traditionally, the means of…
Descriptors: Criteria, Data Interpretation, Elementary Secondary Education, Reliability
Rudner, Lawrence M.; Schafer, William D. – 2001
This digest discusses sources of error in testing, several approaches to estimating reliability, and several ways to increase test reliability. Reliability has been defined in different ways by different authors, but the best way to look at reliability may be the extent to which measurements resulting from a test are characteristics of those being…
Descriptors: Educational Testing, Error of Measurement, Reliability, Scores
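One widely used reliability estimate of the kind this digest surveys, coefficient (Cronbach's) alpha, can be computed directly from an examinee-by-item score matrix. The simulated item responses and the eight-item test length below are assumptions for illustration only.

# Internal-consistency reliability (coefficient alpha) on simulated item data.
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = examinees, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
true_ability = rng.normal(size=(100, 1))
items = true_ability + rng.normal(scale=0.8, size=(100, 8))  # 8 correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")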
ERIC Clearinghouse on Higher Education, Washington, DC. – 2002
This Critical Issue Bibliography (CRIB) Sheet presents resources on college rankings publications, criticisms of rankings methodology, effects of the rankings on the public, and alternatives to the major rankings guides. The annotated bibliography lists 5 Internet resources and 17 other resources, all of which are in the ERIC database. (SLD)
Descriptors: Annotated Bibliographies, Colleges, Evaluation Methods, Higher Education
Coburn, Louisa – 1984
Research on student evaluation of college teachers' performance is briefly summarized. Lawrence M. Aleamoni offers four arguments in favor of student ratings: (1) students are the main source of information about the educational environment; (2) students are the most logical evaluators of student satisfaction and effectiveness of course elements;…
Descriptors: College Faculty, Evaluation Problems, Evaluation Utilization, Higher Education
Johns, Jerry; VanLeirsburg, Peggy – 1989
This annotated bibliography of materials in the ERIC database contains 30 annotations (dating from 1974 to 1989) on informal reading inventories (IRIs). The citations were selected to help professionals understand the history of, the uses of, and the issues surrounding IRIs. The major sections of the bibliography are: Overview, General Uses,…
Descriptors: Annotated Bibliographies, Elementary Secondary Education, Informal Reading Inventories, Reading Diagnosis
Lomawaima, K. Tsianina; McCarty, Teresa L. – 2002
The constructs used to evaluate research quality--valid, objective, reliable, generalizable, randomized, accurate, authentic--are not value-free. They all require human judgment, which is affected inevitably by cultural norms and values. In the case of research involving American Indians and Alaska Natives, assessments of research quality must be…
Descriptors: Action Research, American Indian Education, Educational Research, Indigenous Knowledge
Brown, Bettina Lankard – 1997
Portfolio assessment is an alternative form of assessment that is particularly attractive to adult, career, and vocational educators because it includes the assessment of active learning and performance rather than the mere recall of memorized facts. Portfolio assessment serves the interests of business and industry by forging a connection between…
Descriptors: Adult Education, Career Education, Educational Trends, Evaluation Methods
Overall, Jesse U., IV; Marsh, Herbert W. – AAHE Bulletin, 1982
Recent research (1978-1982) on student evaluations of teaching is reviewed, including: influence of background variables pertaining to the student, the teacher, and the learning environment; the dimensions of the teaching being evaluated; the validity of students' evaluations; the "Doctor Fox" effect and its implications for validity; the…
Descriptors: College Faculty, Educational Research, Evaluation Criteria, Evaluation Methods
Rudner, Lawrence M., Ed.; Schafer, William D., Ed. – Practical Assessment, Research & Evaluation, 2001
This document consists of papers published in the electronic journal "Practical Assessment, Research & Evaluation" during 2000-2001: (1) "Advantages of Hierarchical Linear Modeling" (Jason W. Osborne); (2) "Prediction in Multiple Regression" (Jason W. Osborne); (3) "Scoring Rubrics: What, When, and How?"…
Descriptors: Educational Assessment, Educational Research, Elementary Secondary Education, Evaluation Methods
Haskell, Robert E. – 1998
Despite a history of conflicting research on its reliability and validity, student evaluation of faculty (SEF) has typically not been viewed as an infringement on academic freedom; it has generally been taken for granted that SEF is appropriate and necessary. However, informal and reasoned analyses of the issue indicate that because SEF is used…
Descriptors: Academic Freedom, Evaluation Problems, Faculty College Relationship, Faculty Evaluation