Showing 1 to 15 of 85 results
Elizabeth Talbott; Andres De Los Reyes; Devin M. Kearns; Jeannette Mancilla-Martinez; Mo Wang – Exceptional Children, 2023
Evidence-based assessment (EBA) requires that investigators employ scientific theories and research findings to guide decisions about what domains to measure, how and when to measure them, and how to make decisions and interpret results. To implement EBA, investigators need high-quality assessment tools along with evidence-based processes. We…
Descriptors: Evidence Based Practice, Evaluation Methods, Special Education, Educational Research
Peer reviewed
Direct link
Solmeyer, Anna R.; Constance, Nicole – American Journal of Evaluation, 2015
Traditionally, evaluation has primarily tried to answer the question "Does a program, service, or policy work?" Recently, more attention has been given to questions about variation in program effects and the mechanisms through which program effects occur. Addressing these kinds of questions requires moving beyond assessing average program…
Descriptors: Program Effectiveness, Program Evaluation, Program Content, Measurement Techniques
Peer reviewed
Direct link
Evans, C.; Kandiko Howson, C.; Forsythe, A. – Higher Education Pedagogies, 2018
Internationally, political appetite is gathering momentum for educational measurement capable of capturing a metric of value for money and effectiveness. While most would agree with the need to assess costs relative to quality to help support better governmental policy decisions about public spending, poorly understood measurement comes with unintended…
Descriptors: Higher Education, Achievement Gains, Political Issues, Quality Assurance
Peer reviewed
Direct link
St. Clair, Travis; Cook, Thomas D.; Hallberg, Kelly – American Journal of Evaluation, 2014
Although evaluators often use an interrupted time series (ITS) design to test hypotheses about program effects, there are few empirical tests of the design's validity. We take a randomized experiment on an educational topic and compare its effects to those from a comparative ITS (CITS) design that uses the same treatment group as the experiment…
Descriptors: Time, Evaluation Methods, Measurement Techniques, Research Design
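As a rough illustration of the CITS logic, the sketch below fits a segmented regression to synthetic series for a treatment and a comparison group; the variable names and data-generating process are invented for illustration, not taken from the study.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
periods = np.arange(-10, 10)              # 10 pre- and 10 post-intervention points
rows = []
for group, effect in [("treatment", 3.0), ("comparison", 0.0)]:
    post = (periods >= 0).astype(int)
    y = 50 + 0.5 * periods + effect * post + rng.normal(0, 1, periods.size)
    rows.append(pd.DataFrame({"y": y, "time": periods, "post": post,
                              "treated": int(group == "treatment")}))
df = pd.concat(rows, ignore_index=True)

# The treated:post interaction estimates the level shift in the treated
# series net of any shared discontinuity captured by the comparison series.
model = smf.ols("y ~ time + post + treated + treated:post", data=df).fit()
print(model.params["treated:post"])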
Peer reviewed
Direct link
Lin, Yi-Chun; Hsieh, Ya-Hui; Hou, Huei-Tse – Journal of Educational Computing Research, 2015
This article describes the development of a usability evaluation method for educational systems and applications, called self-report-based sequential analysis. The method extends current practice by integrating the advantages of self-report in survey…
Descriptors: Foreign Countries, Elementary School Students, Educational Technology, Sequential Approach
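For orientation, a lag-1 sequential analysis counts transitions between coded events and tests each transition against chance. A minimal sketch with invented event codes, using the Allison and Liker (1982) adjusted residual; this is not the authors' instrument.

import numpy as np

codes = ["read", "search", "answer"]               # invented behavior codes
seq = ["read", "search", "read", "answer", "read", "search", "search",
       "read", "answer", "read", "search", "read"]

idx = {c: i for i, c in enumerate(codes)}
counts = np.zeros((len(codes), len(codes)))
for a, b in zip(seq, seq[1:]):                     # tally lag-1 transitions
    counts[idx[a], idx[b]] += 1

n = counts.sum()
p_row = counts.sum(axis=1) / n                     # P(antecedent)
p_col = counts.sum(axis=0) / n                     # P(target)
p_cond = counts / counts.sum(axis=1, keepdims=True)  # P(target | antecedent)

# Adjusted residual: z > 1.96 flags a transition occurring above chance.
z = (p_cond - p_col) / np.sqrt(
    p_col * (1 - p_col) * (1 - p_row[:, None]) / (n * p_row[:, None]))
print(np.round(z, 2))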
Peer reviewed
Direct link
Braverman, Marc T. – American Journal of Evaluation, 2013
Sound evaluation planning requires numerous decisions about how constructs in a program theory will be translated into measures and instruments that produce evaluation data. This article, the first in a dialogue exchange, examines how decisions about measurement are (and should be) made, especially in the context of small-scale local program…
Descriptors: Evaluation Methods, Methods Research, Research Methodology, Research Design
Peer reviewed
Direct link
Losinski, Mickey; Maag, John W.; Katsiyannis, Antonis; Ennis, Robin Parks – Exceptional Children, 2014
Interventions based on the results of functional behavioral assessment (FBA) have been the topic of extensive research and, in certain cases, mandated for students with disabilities under the Individuals With Disabilities Education Act. There exist a wide variety of methods for conducting such assessments, with little consensus in the field. The…
Descriptors: Intervention, Predictor Variables, Program Effectiveness, Educational Quality
Peer reviewed
Direct link
Kratochwill, Thomas R.; Levin, Joel R. – Psychological Methods, 2010
In recent years, single-case designs have increasingly been used to establish an empirical basis for evidence-based interventions and techniques in a variety of disciplines, including psychology and education. Although traditional single-case designs have typically not met the criteria for a randomized controlled trial relative to conventional…
Descriptors: Research Design, Intervention, Evidence, Educational Research
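One common way to introduce randomization into a single-case AB design is to randomize the intervention start point and evaluate the observed effect against the resulting randomization distribution. A minimal sketch with invented data:

import numpy as np

y = np.array([3, 4, 3, 5, 4, 8, 9, 7, 8, 9, 8], dtype=float)
actual_start = 5                  # intervention actually began at session 5
eligible = range(3, 9)            # start points the design allowed

def effect(start):                # B-phase mean minus A-phase mean
    return y[start:].mean() - y[:start].mean()

obs = effect(actual_start)
null = np.array([effect(s) for s in eligible])
p = np.mean(null >= obs)          # one-tailed randomization p-value
print(obs, p)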
Peer reviewed
Direct link
Feingold, Alan – Psychological Methods, 2009
The use of growth-modeling analysis (GMA), including hierarchical linear models, latent growth models, and generalized estimating equations, to evaluate interventions in psychology, psychiatry, and prevention science has grown rapidly over the last decade. However, an effect size associated with the difference between the trajectories of the…
Descriptors: Control Groups, Effect Size, Raw Scores, Models
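The kind of effect size at issue has a standard growth-model analogue of Cohen's d: the group difference in slopes multiplied by study duration, scaled by a raw-score standard deviation. A sketch with invented numbers, not the article's data:

slope_diff = 0.35   # estimated treatment-control difference in rate of change per wave
duration = 4        # waves from baseline to end of study
sd_raw = 2.8        # raw-score SD (e.g., pooled baseline SD)

d = slope_diff * duration / sd_raw   # model-based analogue of Cohen's d
print(round(d, 2))                   # 0.5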
Peer reviewed
Direct link
Killeen, Peter R. – Psychological Methods, 2010
Lecoutre, Lecoutre, and Poitevineau (2010) have provided sophisticated grounding for p_rep. Computing it precisely appears, fortunately, to be no more difficult than doing so approximately. Their analysis will help move predictive inference into the mainstream. Iverson, Wagenmakers, and Lee (2010) have also validated…
Descriptors: Replication (Evaluation), Measurement Techniques, Research Design, Research Methodology
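For orientation, Killeen's approximation computes p_rep as Phi(d / sqrt(2 * var(d))), the predictive probability that an equally powered replication yields a same-sign effect. A sketch using the usual large-sample variance of Cohen's d and invented inputs:

from scipy.stats import norm

def p_rep(d, n1, n2):
    # large-sample sampling variance of Cohen's d for two groups
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return norm.cdf(d / (2 * var_d) ** 0.5)

print(round(p_rep(0.5, 20, 20), 3))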
Peer reviewed
Direct link
Overall, John E.; Tonidandel, Scott – Multivariate Behavioral Research, 2010
A previous Monte Carlo study examined the relative powers of several simple and more complex procedures for testing the significance of the difference in mean rates of change in a controlled, longitudinal, treatment evaluation study. Results revealed that the relative powers depended on the correlation structure of the simulated repeated measurements…
Descriptors: Monte Carlo Methods, Statistical Significance, Correlation, Depression (Psychology)
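A toy Monte Carlo in the same spirit, comparing the power of an endpoint t-test against a t-test on per-subject OLS slopes under AR(1)-correlated repeated measures; every simulation setting here is an arbitrary illustrative choice, not one from the study.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, waves, rho, slope_diff, reps = 30, 5, 0.5, 0.3, 1000
t = np.arange(waves)
cov = rho ** np.abs(np.subtract.outer(t, t))   # AR(1) covariance of errors

def simulate_group(slope):
    errors = rng.multivariate_normal(np.zeros(waves), cov, size=n)
    return slope * t + errors

hits_end = hits_slope = 0
for _ in range(reps):
    g1, g2 = simulate_group(slope_diff), simulate_group(0.0)
    # Test 1: compare groups on final-wave scores only
    hits_end += stats.ttest_ind(g1[:, -1], g2[:, -1]).pvalue < 0.05
    # Test 2: compare groups on per-subject OLS slopes
    s1, s2 = np.polyfit(t, g1.T, 1)[0], np.polyfit(t, g2.T, 1)[0]
    hits_slope += stats.ttest_ind(s1, s2).pvalue < 0.05

print(hits_end / reps, hits_slope / reps)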
Peer reviewed
Direct link
Lecoutre, Bruno; Lecoutre, Marie-Paule; Poitevineau, Jacques – Psychological Methods, 2010
P. R. Killeen's (2005a) probability of replication (p_rep) of an experimental result is the fiducial Bayesian predictive probability of finding a same-sign effect in a replication of an experiment. p_rep is now routinely reported in "Psychological Science" and has also begun to appear in…
Descriptors: Research Methodology, Guidelines, Probability, Computation
Peer reviewed
Direct link
Serlin, Ronald C. – Psychological Methods, 2010
The sense that replicability is an important aspect of empirical science led Killeen (2005a) to define p_rep, the probability that a replication will result in an outcome in the same direction as that found in a current experiment. Since then, several authors have praised and criticized p_rep, culminating…
Descriptors: Epistemology, Effect Size, Replication (Evaluation), Measurement Techniques
Barnett, Kent; Mattox, John R., II – Journal of Asynchronous Learning Networks, 2010
When measuring outcomes in corporate training, the authors argue, it is essential to introduce a comprehensive plan, especially when resources are limited and company needs are vast. The authors home in on five critical components for shaping a measurement plan to determine the success and ROI of training. The plan's components should…
Descriptors: Industrial Training, Evaluation Methods, Educational Strategies, Measurement Objectives
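The ROI arithmetic such a plan ultimately feeds is simple; a sketch with invented figures:

program_benefits = 250_000   # monetized outcomes attributed to the training
program_costs = 100_000      # design, delivery, and participant time

roi_pct = (program_benefits - program_costs) / program_costs * 100
print(f"ROI = {roi_pct:.0f}%")   # ROI = 150%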
Peer reviewed
Direct link
Volkwein, J. Fredericks – New Directions for Institutional Research, 2010
In this chapter, the author proposes a model for assessing institutional effectiveness. The Volkwein model for assessing institutional effectiveness consists of five parts that summarize the steps for assessing institutions, programs, faculty, and students. The first step in the model distinguishes the dual purposes of institutional effectiveness:…
Descriptors: Institutional Evaluation, Models, Evaluation Methods, Evaluation Criteria