Dayton, C. Mitchell – 2002
This Digest, intended as an instructional aid for beginning research students and a refresher for researchers in the field, identifies key factors that play a critical role in determining the credibility that should be given to a specific research study. The needs for empirical research, randomization and control, and significance testing are…
Descriptors: Credibility, Data Analysis, Reliability, Research
Childs, Ruth A.; Jaciw, Andrew P. – 2003
Matrix sampling of test items, the division of a set of items into different versions of a test form, is used by several large-scale testing programs. This Digest discusses nine categories of costs associated with matrix sampling. These categories are: (1) development costs; (2) materials costs; (3) administration costs; (4) educational costs; (5)…
Descriptors: Costs, Matrices, Reliability, Sampling
Brualdi, Amy – 1999
Test validity refers to the degree to which the inferences based on test scores are meaningful, useful, and appropriate. Thus, test validity is a characteristic of a test when it is administered to a particular population. This article introduces the modern concepts of validity advanced by S. Messick (1989, 1996). Traditionally, the means of…
Descriptors: Criteria, Data Interpretation, Elementary Secondary Education, Reliability
Coburn, Louisa – 1984
Research on student evaluation of college teachers' performance is briefly summarized. Lawrence M. Aleamoni offers four arguments in favor of student ratings: (1) students are the main source of information about the educational environment; (2) students are the most logical evaluators of student satisfaction and effectiveness of course elements;…
Descriptors: College Faculty, Evaluation Problems, Evaluation Utilization, Higher Education
Johns, Jerry; VanLeirsburg, Peggy – 1989
This annotated bibliography of materials in the ERIC database contains 30 annotations (dating from 1974 to 1989) on informal reading inventories (IRIs). The citations were selected to help professionals understand the history of, the uses of, and the issues surrounding IRIs. The major sections of the bibliography are: Overview, General Uses,…
Descriptors: Annotated Bibliographies, Elementary Secondary Education, Informal Reading Inventories, Reading Diagnosis
Lomawaima, K. Tsianina; McCarty, Teresa L. – 2002
The constructs used to evaluate research quality--valid, objective, reliable, generalizable, randomized, accurate, authentic--are not value-free. They all require human judgment, which is affected inevitably by cultural norms and values. In the case of research involving American Indians and Alaska Natives, assessments of research quality must be…
Descriptors: Action Research, American Indian Education, Educational Research, Indigenous Knowledge
Scriven, Michael – 1995
Student ratings of instruction are widely used as a basis for personnel decisions and faculty development recommendations. This digest discusses concerns about the validity of student ratings and presents a case for their use in teacher evaluation. There are several strong arguments for using student ratings to evaluate teachers. Students are in a…
Descriptors: College Faculty, College Students, Data Collection, Decision Making
Yuker, Harold E. – 1984
Kinds of faculty workload data that can be obtained from college and faculty reports are examined, along with potential problems in workload studies. A main research concern is deciding which faculty activities should be considered as workload. Types of data that are sometimes used in colleges' faculty workload formulas concern student credit…
Descriptors: College Faculty, Faculty Workload, Higher Education, Institutional Research
Helberg, Clay – 1996
Abuses and misuses of statistics are frequent. This digest attempts to warn against these in three broad classes of pitfalls: sources of bias, errors of methodology, and misinterpretation of results. Sources of bias are conditions or circumstances that affect the external validity of statistical results. In order for a researcher to make…
Descriptors: Causal Models, Comparative Analysis, Data Analysis, Error of Measurement
Rudner, Lawrence M. – 1992
Several common sources of error in assessments that depend on the use of judges are identified, and ways to reduce the impact of rating errors are examined. Numerous threats to the validity of scores based on ratings exist. These threats include: (1) the halo effect; (2) stereotyping; (3) perception differences; (4) leniency/stringency error; and…
Descriptors: Alternative Assessment, Error of Measurement, Evaluation Methods, Evaluators
Benton, Sidney E. – 1982
Studies on the criterion validity of student evaluation-of-instruction instruments are analyzed, and recommendations are offered for future research into student evaluation of instruction. The main problem, and probably the reason for the lack of validity studies, is that it is difficult to agree on what the criteria of effective teaching should…
Descriptors: Academic Achievement, College Instruction, Criterion Referenced Tests, Educational Research
Overall, Jesse U., IV; Marsh, Herbert W. – AAHE Bulletin, 1982
Recent research (1978-1982) on student evaluations of teaching is reviewed, including: influence of background variables pertaining to the student, the teacher, and the learning environment; the dimensions of the teaching being evaluated; the validity of students' evaluations; the "Doctor Fox" effect and its implications for validity; the…
Descriptors: College Faculty, Educational Research, Evaluation Criteria, Evaluation Methods
Rudner, Lawrence M., Ed.; Schafer, William D., Ed. – Practical Assessment, Research & Evaluation, 2001
This document consists of papers published in the electronic journal "Practical Assessment, Research & Evaluation" during 2000-2001: (1) "Advantages of Hierarchical Linear Modeling" (Jason W. Osborne); (2) "Prediction in Multiple Regression" (Jason W. Osborne); (3) "Scoring Rubrics: What, When, and How?"…
Descriptors: Educational Assessment, Educational Research, Elementary Secondary Education, Evaluation Methods
Haskell, Robert E. – 1998
Despite a history of conflicting research on its reliability and validity, student evaluation of faculty (SEF) has typically not been viewed as an infringement on academic freedom; it has generally been taken for granted that SEF is appropriate and necessary. However, informal and reasoned analyses of the issue indicate that because SEF is used…
Descriptors: Academic Freedom, Evaluation Problems, Faculty College Relationship, Faculty Evaluation
Hutchinson, Nancy L. – 1995
As career development becomes established in Canadian secondary schools, the pressure increases to use performance assessments to demonstrate both the effectiveness of programs and the soundness of instructional decisions. This digest examines issues surrounding performance assessments of career development programs. Performance assessments can be…
Descriptors: Adolescents, Ancillary School Services, Canadian Studies, Career Counseling