Osborne, Jason W.; Waters, Elaine – 2002
This Digest presents a discussion of the assumptions of multiple regression that is tailored to the practicing researcher. The focus is on the assumptions of multiple regression that are not robust to violation, and that researchers can deal with if violated. Assumptions of normality, linearity, reliability of measurement, and homoscedasticity are…
Descriptors: Error of Measurement, Nonparametric Statistics, Regression (Statistics), Reliability
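The digest above concerns assumptions of multiple regression such as normality, linearity, reliability of measurement, and homoscedasticity. As a hedged illustration of two of those checks (not drawn from the digest itself), the Python sketch below fits an ordinary least squares model to synthetic data and applies a Shapiro-Wilk test to the residuals for normality and a Breusch-Pagan test for homoscedasticity; all data and coefficients are invented for the example.

```python
# Minimal sketch: checking two multiple-regression assumptions
# (normality of residuals and homoscedasticity) on synthetic data.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan
from scipy import stats

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                    # two hypothetical predictors
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=1.0, size=200)

model = sm.OLS(y, sm.add_constant(X)).fit()      # ordinary least squares fit
residuals = model.resid

# Normality of residuals: Shapiro-Wilk test (large p suggests no violation)
w_stat, p_norm = stats.shapiro(residuals)

# Homoscedasticity: Breusch-Pagan test against the design matrix
bp_stat, p_hetero, _, _ = het_breuschpagan(residuals, model.model.exog)

print(f"Shapiro-Wilk p = {p_norm:.3f}, Breusch-Pagan p = {p_hetero:.3f}")
```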
Dayton, C. Mitchell – 2002
This Digest, intended as an instructional aid for beginning research students and a refresher for researchers in the field, identifies key factors that play a critical role in determining the credibility that should be given to a specific research study. The needs for empirical research, randomization and control, and significance testing are…
Descriptors: Credibility, Data Analysis, Reliability, Research
Childs, Ruth A.; Jaciw, Andrew P. – 2003
Matrix sampling of test items, the division of a set of items into different versions of a test form, is used by several large-scale testing programs. This Digest discusses nine categories of costs associated with matrix sampling. These categories are: (1) development costs; (2) materials costs; (3) administration costs; (4) educational costs; (5)…
Descriptors: Costs, Matrices, Reliability, Sampling
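The abstract above defines matrix sampling as dividing a set of items into different versions of a test form; the digest itself focuses on the associated costs. As a rough, purely illustrative sketch of that division step (not taken from the digest), the Python snippet below splits a hypothetical 30-item pool into three shorter forms.

```python
# Minimal sketch: matrix sampling as the abstract defines it -- dividing a
# pool of items into several shorter test forms (hypothetical item IDs).
import random

items = [f"item_{i:02d}" for i in range(1, 31)]   # invented 30-item pool
n_forms = 3

random.seed(0)
random.shuffle(items)
forms = [items[i::n_forms] for i in range(n_forms)]  # each form gets 10 items

for f, form in enumerate(forms, start=1):
    print(f"Form {f}: {sorted(form)}")
```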
Brualdi, Amy – 1999
Test validity refers to the degree to which the inferences based on test scores are meaningful, useful, and appropriate. Thus, test validity is a characteristic of a test when it is administered to a particular population. This article introduces the modern concepts of validity advanced by S. Messick (1989, 1996). Traditionally, the means of…
Descriptors: Criteria, Data Interpretation, Elementary Secondary Education, Reliability
Rudner, Lawrence M.; Schafer, William D. – 2001
This digest discusses sources of error in testing, several approaches to estimating reliability, and several ways to increase test reliability. Reliability has been defined in different ways by different authors, but the best way to look at reliability may be the extent to which measurements resulting from a test are characteristics of those being…
Descriptors: Educational Testing, Error of Measurement, Reliability, Scores
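The digest above surveys several approaches to estimating reliability without singling out one formula. As one hedged illustration, the Python sketch below computes Cronbach's alpha, a standard internal-consistency estimate, from an invented examinee-by-item score matrix; the data and item count are assumptions made for the example.

```python
# Minimal sketch: one common reliability estimate, Cronbach's alpha,
# computed from an examinee-by-item score matrix (illustrative data).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = examinees, columns = test items."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(size=(100, 1))                      # hypothetical examinee ability
items = ability + rng.normal(scale=0.7, size=(100, 8))   # 8 correlated item scores
print(f"alpha = {cronbach_alpha(items):.2f}")
```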
Thompson, Richard T.; Johnson, Dora E. – 1988
Efforts to expand the generic language proficiency guidelines of the American Council on the Teaching of Foreign Languages (ACTFL) to the less commonly taught languages (LCTLs) began when developers realized that the ACTFL guidelines were too Eurocentric; the guidelines included grammatical categories specific to Western European languages and…
Descriptors: Cultural Context, Interrater Reliability, Language Proficiency, Language Tests
Coburn, Louisa – 1984
Research on student evaluation of college teachers' performance is briefly summarized. Lawrence M. Aleamoni offers four arguments in favor of student ratings: (1) students are the main source of information about the educational environment; (2) students are the most logical evaluators of student satisfaction and effectiveness of course elements;…
Descriptors: College Faculty, Evaluation Problems, Evaluation Utilization, Higher Education
Elliott, Stephen N. – 1995
This digest offers principles of performance assessment as an alternative to norm-referenced tests. The definition of performance assessment developed by the U.S. Congress's Office of Technology Assessment is given, common features are listed, and the terms "performance" and "authentic" are defined. Suggested guidelines for…
Descriptors: Definitions, Elementary Secondary Education, Evaluation Methods, Guidelines
Lomawaima, K. Tsianina; McCarty, Teresa L. – 2002
The constructs used to evaluate research quality--valid, objective, reliable, generalizable, randomized, accurate, authentic--are not value-free. They all require human judgment, which is inevitably affected by cultural norms and values. In the case of research involving American Indians and Alaska Natives, assessments of research quality must be…
Descriptors: Action Research, American Indian Education, Educational Research, Indigenous Knowledge
Rudner, Lawrence M. – 1994
The "Standards for Educational and Psychological Testing" of the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education are intended to provide a comprehensive basis for evaluating tests. This digest identifies key standards applicable to most test…
Descriptors: Ability, Academic Achievement, Evaluation Methods, Norms
Thompson, Bruce – 1995
The research literature provides important guidance to counselors working to keep abreast of the latest thinking regarding best practices and recently developed counseling tools. The purpose of this digest is to highlight a few errors that seem to recur within the literature, and to provide some helpful references that further explore these…
Descriptors: Counseling, Educational Researchers, Evaluation Methods, Evaluation Problems
Rudner, Lawrence M. – 1996
In educational research and evaluation, a sample of subjects usually receives some type of programmatic treatment. Outcome scores for these subjects are then compared with outcome scores of a control or comparison group. M. Lewis and H. McGurk (1972) have pointed out that there are some implicit assumptions when this approach is applied to…
Descriptors: Child Development, Cognitive Development, Early Childhood Education, Educational Research
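The abstract above describes the common design in which outcome scores for a treated sample are compared with those of a control or comparison group. As a minimal, purely illustrative sketch of that comparison (not the analysis Lewis and McGurk discuss), the Python snippet below runs an independent-samples t-test on invented score data.

```python
# Minimal sketch: comparing outcome scores for a treatment group with a
# control group via an independent-samples t-test (illustrative data only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
treatment = rng.normal(loc=52, scale=10, size=60)  # hypothetical post-test scores
control = rng.normal(loc=48, scale=10, size=60)

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```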
Perrone, Vito – 1991
This ERIC Digest was adapted from the Association for Childhood Education International's (ACEI) 1991 position paper on standardized testing. Since the publication of "A Nation at Risk" in 1983, standardized testing programs have expanded greatly. Tests may be of pencil-and-paper or performance-oriented varieties. The purposes of tests…
Descriptors: Academic Achievement, Accountability, Elementary Education, Elementary School Students
Crafts, Jennifer – 1991
A biographical inventory is a selection device used as an alternative or supplement to cognitive testing because this measurement method predicts aspects of job performance that are not predicted by cognitive measures. Some of the issues and concerns about using biographical inventories are discussed. The use of biographical inventories (biodata) is…
Descriptors: Biographical Inventories, Cognitive Tests, Data Collection, Individual Characteristics
Mead, Nancy A.; Rubin, Donald L. – 1985
Intended for administrators and policymakers as well as teachers, this digest explores methods of listening and speaking skills assessment. The digest first provides a rationale for teaching and assessing listening and speaking skills. It then examines definitions of oral communication and listening, noting (1) the trend toward defining oral…
Descriptors: Communication Skills, Elementary Secondary Education, Listening Comprehension Tests, Listening Skills