Showing 1 to 15 of 166 results
Paul J. Dizona – ProQuest LLC, 2022
Missing data are a common challenge for researchers in almost any field. In particular, human participants do not always respond or return for assessments, leaving the researcher to rely on missing-data methods. The most common methods (i.e., Multiple Imputation and Full Information Maximum Likelihood) assume that the…
Descriptors: Pretests Posttests, Research Design, Research Problems, Dropouts
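The abstract above names Multiple Imputation as one of the standard missing-data methods. As a minimal, hedged sketch of the idea (entirely synthetic data; the linear model, effect sizes, and missingness rate are all invented for illustration), each missing value is filled several times with a stochastic regression draw, and the analysis estimates are then pooled across the completed datasets (the point estimate under Rubin's rules is the mean of the per-dataset estimates):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a complete dataset: y depends linearly on x (true slope = 2.0).
n = 500
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)

# Make roughly 30% of y missing completely at random.
miss = rng.random(n) < 0.3
y_obs = y.copy()
y_obs[miss] = np.nan

def impute_once(x, y_obs, rng):
    """One stochastic regression imputation: fit y ~ x on observed cases,
    then fill missing y with predictions plus residual noise."""
    obs = ~np.isnan(y_obs)
    X = np.column_stack([np.ones(obs.sum()), x[obs]])
    beta, res, *_ = np.linalg.lstsq(X, y_obs[obs], rcond=None)
    sigma = np.sqrt(res[0] / (obs.sum() - 2))
    y_imp = y_obs.copy()
    pred = beta[0] + beta[1] * x[~obs]
    y_imp[~obs] = pred + rng.normal(scale=sigma, size=(~obs).sum())
    return y_imp

# Create m imputed datasets and pool the slope estimates.
m = 20
slopes = []
for _ in range(m):
    y_imp = impute_once(x, y_obs, rng)
    X = np.column_stack([np.ones(n), x])
    b, *_ = np.linalg.lstsq(X, y_imp, rcond=None)
    slopes.append(b[1])

pooled_slope = float(np.mean(slopes))
print(pooled_slope)  # close to the true slope of 2.0
```

The noise added to each imputation is what distinguishes multiple imputation from single deterministic imputation: it propagates the uncertainty about the missing values into the pooled estimate.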
Peer reviewed
Weidlich, Joshua; Gašević, Dragan; Drachsler, Hendrik – Journal of Learning Analytics, 2022
As a research field geared toward understanding and improving learning, Learning Analytics (LA) must be able to provide empirical support for causal claims. However, as a highly applied field, tightly controlled randomized experiments are not always feasible or desirable. Instead, researchers often rely on observational data, based on which they…
Descriptors: Causal Models, Inferences, Learning Analytics, Comparative Analysis
Hughes, Katherine L.; Miller, Trey; Reese, Kelly – Grantee Submission, 2021
This report from the Career and Technical Education (CTE) Research Network Lead team provides final results from an evaluability assessment of CTE programs that feasibly could be evaluated using a rigorous experimental design. Evaluability assessments (also called feasibility studies) are used in education and other fields, such as international…
Descriptors: Program Evaluation, Vocational Education, Evaluation Methods, Educational Research
Peer reviewed
Wing, Coady; Bello-Gomez, Ricardo A. – American Journal of Evaluation, 2018
Treatment effect estimates from a "regression discontinuity design" (RDD) have high internal validity. However, the arguments that support the design apply to a subpopulation that is narrower and usually different from the population of substantive interest in evaluation research. The disconnect between RDD population and the…
Descriptors: Regression (Statistics), Research Design, Validity, Evaluation Methods
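The Wing and Bello-Gomez abstract concerns the regression discontinuity design, in which treatment is assigned by whether a running variable crosses a cutoff and the effect is identified as the jump in the outcome at that cutoff. A minimal sketch of a sharp RDD estimate (synthetic data; the cutoff, bandwidth, and effect size are invented for illustration) fits separate local linear regressions on each side of the cutoff and differences the intercepts:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a sharp RDD: units with running variable r >= 0 are treated,
# and treatment shifts the outcome by a true effect of 1.5 at the cutoff.
n = 2000
r = rng.uniform(-1, 1, size=n)
treated = (r >= 0).astype(float)
y = 0.5 * r + 1.5 * treated + rng.normal(scale=0.5, size=n)

def rdd_estimate(r, y, cutoff=0.0, bandwidth=0.5):
    """Local linear RDD: fit a line on each side of the cutoff within the
    bandwidth and take the difference of the intercepts at the cutoff."""
    def fit_intercept(mask):
        X = np.column_stack([np.ones(mask.sum()), r[mask] - cutoff])
        beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        return beta[0]
    left = (r < cutoff) & (r >= cutoff - bandwidth)
    right = (r >= cutoff) & (r <= cutoff + bandwidth)
    return float(fit_intercept(right) - fit_intercept(left))

effect = rdd_estimate(r, y)
print(effect)  # close to the true jump of 1.5
```

This also makes the abstract's point concrete: the estimate is local to units near the cutoff, which is the narrower subpopulation the authors contrast with the population of substantive interest.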
Peer reviewed
Berge, Maria; Ingerman, Åke – Research in Science & Technological Education, 2017
Background: In science education today, there is an emerging focus on what is happening in situ, making use of an array of analytical traditions. Common practice is to use one specific analytical framing within a research project, but there are projects that make use of multiple analytical framings to further the understanding of the same data,…
Descriptors: Group Discussion, Research Methodology, Educational Research, Science Education
Peer reviewed
Lemons, Mary A. – Journal of Learning in Higher Education, 2014
Assessment of learning has become very important for government, universities and accrediting agencies. In this article, two variables are examined, leadership and teamwork, in the context of a survey used by one mid-south university for assessment purposes. This survey demonstrates the problems that arise when the sequential steps of the research…
Descriptors: Research Design, Student Surveys, Student Evaluation, Outcome Measures
Peer reviewed
Alresheed, Fahad; Hott, Brittany L.; Bano, Carmen – Journal of Special Education Apprenticeship, 2013
Historically, the synthesis of single-subject design research has relied on visual inspection to determine the significance of results. However, current research supports different techniques that can facilitate the interpretation of these intervention outcomes. These methods can provide more reliable data than visual inspection employed in isolation. This…
Descriptors: Synthesis, Research Methodology, Research Design, Intervention
Reardon, Sean F. – Society for Research on Educational Effectiveness, 2010
Instrumental variable estimators hold the promise of enabling researchers to estimate the effects of educational treatments that are not (or cannot be) randomly assigned but that may be affected by randomly assigned interventions. Examples of the use of instrumental variables in such cases are increasingly common in educational and social science…
Descriptors: Social Science Research, Least Squares Statistics, Computation, Correlation
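The Reardon abstract concerns instrumental variable estimators, which recover a treatment effect when take-up of the treatment is confounded but a randomly assigned instrument shifts take-up without otherwise affecting the outcome. A minimal two-stage least squares sketch (synthetic data; the instrument, confounder, and coefficients are invented for illustration) regresses treatment on the instrument, then the outcome on the fitted treatment:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate an endogenous treatment: an unobserved confounder u drives both
# treatment take-up d and outcome y, so naive OLS of y on d is biased.
# A randomly assigned instrument z shifts d but affects y only through d.
n = 5000
u = rng.normal(size=n)
z = rng.binomial(1, 0.5, size=n).astype(float)
d = 0.8 * z + 0.5 * u + rng.normal(scale=0.5, size=n)
y = 1.0 * d + 1.0 * u + rng.normal(scale=0.5, size=n)  # true effect = 1.0

def ols_slope(x, y):
    """Slope from an OLS regression of y on x with an intercept."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(beta[1])

# Stage 1: regress d on z and form fitted values d_hat.
Z = np.column_stack([np.ones(n), z])
d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]

# Stage 2: regress y on d_hat; the slope is the 2SLS estimate.
iv_estimate = ols_slope(d_hat, y)
naive = ols_slope(d, y)  # biased upward by the confounder u

print(naive, iv_estimate)  # naive well above 1.0; IV close to 1.0
```

The contrast between the two slopes is the point of the design: the naive regression absorbs the confounder, while the instrument isolates the variation in take-up that was randomly assigned.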
Peer reviewed
Glasgow, Russell E. – Research on Social Work Practice, 2009
This article summarizes critical evaluation needs, challenges, and lessons learned in translational research. Evaluation can play a key role in enhancing successful application of research-based programs and tools as well as informing program refinement and future research. Discussion centers on what is unique about evaluating programs and…
Descriptors: Evaluation Needs, Evaluation Methods, Measurement Objectives, Measurement Techniques
Peer reviewed
Shapiro, Jonathan – Educational Evaluation and Policy Analysis, 1982
In an effort to maintain internal validity while maximizing the generalizability of evaluation results, the theory-testing methodology that links program activities to outcomes (program theory) is proposed. To evaluate a program by explicating and testing the program theory is to perform evaluation as theory testing. (PN)
Descriptors: Evaluation Methods, Quasiexperimental Design, Research Design, Research Problems
Crooks, Terence J. – 1982
A conceptualization of generalization and its place in educational research is proposed. Problems with existing approaches to generalization include limited concepts, misleading or simplistic results, perception variance or change, and the discouragement of sensitivity due to convenience. Researchers should distinguish between conclusions which…
Descriptors: Educational Research, Evaluation Methods, Generalization, Research Design
Lovelace, Terry – 1982
Age is often ranked as a significant variable in research studies concerned with the measurement of intellectual and linguistic capacities, including reading. In fact, age is often cited as the cause of a decline in performance on such tasks. However, determining the social, psycho-social, and physiological status of subjects and applying…
Descriptors: Data Collection, Evaluation Methods, Gerontology, Older Adults
Peer reviewed
Jacobs, David – Journal of Humanistic Psychology, 1981
Manipulated two aspects of the general empathy training situation in an attempt to produce higher levels of "empathic responding" following a brief training program. Results showed the experimental manipulations produced more empathic responses in the group asked to display competence than in the group asked to learn "empathic responding."…
Descriptors: Communication Skills, Empathy, Evaluation Methods, Research Design
Peer reviewed
Heilman, John G. – Evaluation Review, 1980
The choice between experimental research and process-oriented research as the only valid paradigm of evaluation research is rejected. It is argued that there is a middle ground. Suggestions are made for mixing the two approaches to suit particular research settings. (Author/GK)
Descriptors: Evaluation Methods, Evaluative Thinking, Models, Program Evaluation
Peer reviewed
Leukefeld, Carl G.; Bukoski, William J. – Journal of Drug Education, 1991
Notes inconsistencies in drug abuse prevention research findings related to such issues as study design and methodology. Presents consensus recommendations made by drug abuse prevention researchers and practitioners who met at National Institute on Drug Abuse in 1989. Includes specific recommendations directed to modifying prevention approaches;…
Descriptors: Drug Abuse, Evaluation Methods, Evaluation Problems, Prevention