Showing 1 to 15 of 33 results
Russell, C.; Ward, C.; Harms, A.; St. Martin, K.; Cusumano, D.; Fixsen, D.; Levy, R.; LeVesseur, C. – National Implementation Research Network, 2016
The purpose of the District Capacity Assessment (DCA) Technical Manual is to provide background information on the technical adequacy of the DCA (Ward et al., 2015). The current version draws on a rich history of previous work assessing district capacity. Notably, it includes significant modifications from…
Descriptors: School Districts, Program Implementation, Educational Innovation, Educational Change
Perie, Marianne, Ed. – Brookes Publishing Company, 2010
For lower-achieving students with disabilities, effective and appropriate alternate assessment based on modified achievement standards (AA-MAS) can open the door to greater expectations and opportunities. State policymakers have the option of providing certain students who have disabilities with AA-MAS aligned with grade-level content--and now…
Descriptors: Alternative Assessment, Low Achievement, Disabilities, Academic Standards
Gott, Richard; Duggan, Sandra – 2003
The basic understanding which underlies scientific evidence--ideas such as the structure of experiments, causality, repeatability, validity and reliability--is not straightforward. But these ideas are needed to judge evidence in school science, in physics or chemistry or biology or psychology, in undergraduate science, and in understanding…
Descriptors: Evaluation, Evaluation Criteria, Evaluation Methods, Investigations
Peer reviewed
Greene, David; David, Jane L. – Evaluation and Program Planning: An International Journal, 1984
The main features of a multiple site, structured case study design are presented. The nature of explanatory patterns, how case study investigators pursue and recognize valid patterns, and how an analyst can apply the same logic to cross-site analysis are discussed. (Author/BW)
Descriptors: Case Studies, Evaluation Methods, Generalization, Research Design
Peer reviewed
Altschuld, J. W.; Hines, C. V. – Educational Evaluation and Policy Analysis, 1982
External and implementation factors, their potential threats to field test validity, and the effects of the practical realities of the field-testing process on results are discussed. External factors include site selection, negotiations, contracts, site monitoring, training, and type of product. Implementation factors include motivation, dependence on local…
Descriptors: Evaluation Methods, Field Tests, Program Implementation, Research Methodology
Templin, Patricia A. – 1981
This handbook is intended to help educational evaluators use still photography in designing, conducting, and reporting evaluations of educational programs. It describes techniques for using a visual documentary approach to program evaluation that features data collected with a camera. The emphasis is on the aspects of educational evaluation…
Descriptors: Data Collection, Elementary Secondary Education, Evaluation Methods, Photography
Peer reviewed
Schoenfeldt, Lyle F.; Jansen, Karen J. – Journal of Creative Behavior, 1997
Discusses the definition of creativity and the broad methodological issues associated with organizational creativity. Reviews the most relevant theoretical models for studying creativity in organizations. Specific methodological requirements for studying creativity in organizations are then discussed, with emphasis on issues of validity and…
Descriptors: Creativity, Evaluation Methods, Institutional Characteristics, Models
Lindvall, C. Mauritz – 1979
Evaluation studies on educational questions attempt to provide answers in the form of conclusions or inferences derived from the information collected. Valid inferences result from careful research design and may be causal, descriptive, value-oriented, or probabilistic. Basic steps in designing an evaluation study are…
Descriptors: Educational Assessment, Educational Research, Evaluation, Evaluation Methods
Peer reviewed
Huerta-Macias, Ana – TESOL Journal, 1995
Discusses the use of alternative assessment procedures in English-as-a-Second-Language classrooms, focusing on three issues: (1) definitions of alternative assessment; (2) issues related to validity, reliability, and objectivity that are often raised as objections to alternative assessment; and (3) the power of alternative assessment to provide…
Descriptors: Alternative Assessment, Definitions, English (Second Language), Evaluation Methods
Dawson, Judith A. – 1982
This paper is based on the premise that relatively little is known about how to improve validity in qualitative research and less is known about how to estimate validity in studies conducted by others. The purpose of the study was to describe the conceptualization of validity in qualitative inquiry to determine how it was used by the author of a…
Descriptors: Data Analysis, Data Collection, Educational Research, Evaluation Methods
Berube, Jean E.; Mark, Jorie Lester, Ed. – 1981
Evaluation design is discussed in terms of conditions that an adult education intervention (product, practice) must meet to get Joint Dissemination and Review Panel (JDRP) approval. (Effectiveness, the sole criterion for JDRP approval, must be established by evaluation data adequate to tie the project and desired impact together in a…
Descriptors: Adult Education, Demonstration Programs, Educational Improvement, Evaluation Criteria
Henerson, Marlene E.; Morris, Lynn Lyons; Fitz-Gibbon, Carol Taylor – 1987
The "CSE Program Evaluation Kit" is a series of nine books intended to assist people conducting program evaluations. This volume, sixth in the kit, is designed to help an evaluator select or design credible instruments to measure attitudes. The book discusses problems involved in measuring attitudes, including people's sensitivity about this kind…
Descriptors: Attitude Change, Attitude Measures, Evaluation Methods, Interviews
Archer, Robert P.; Krishnamurthy, Radhika – 2002
This book is designed to provide fundamental information concerning the procedures necessary to administer, score, interpret, and report findings from the Minnesota Multiphasic Personality Inventory-Adolescent[TM] (MMPI-A), the most widely used objective personality assessment instrument for adolescents. The chapters are: (1) "Overview";…
Descriptors: Adolescents, Diagnostic Tests, Evaluation Methods, Personality Assessment
Peer reviewed
Ryser, Gail R. – Journal of Secondary Gifted Education, 1994
The meanings of reliability and validity as they apply to standardized measures are used as a framework for applying the concepts of reliability and validity to authentic assessments. This article sees reliability as scorability and stability, whereas validity is seen as students' ability to use knowledge authentically in the field. (DB)
Descriptors: Elementary Secondary Education, Evaluation Methods, Performance Based Assessment, Reliability
Fink, Arlene – 1995
The nine-volume Survey Kit is designed to help readers prepare and conduct surveys and become better users of survey results. All the books in the series contain instructional objectives, exercises and answers, examples of surveys in use, illustrations of survey questions, guidelines for action, checklists of "dos and don'ts," and…
Descriptors: Costs, Data Collection, Educational Research, Evaluation Methods