Showing 1 to 15 of 76 results
Peer reviewed
Huey T. Chen; Liliana Morosanu; Victor H. Chen – Asia Pacific Journal of Education, 2024
The Campbellian validity typology has been used as a foundation for outcome evaluation and for developing evidence-based interventions for decades. As such, randomized controlled trials were preferred for outcome evaluation. However, some evaluators disagree with the validity typology's argument that randomized controlled trials are the best design…
Descriptors: Evaluation Methods, Systems Approach, Intervention, Evidence Based Practice
Elizabeth Talbott; Andres De Los Reyes; Devin M. Kearns; Jeannette Mancilla-Martinez; Mo Wang – Exceptional Children, 2023
Evidence-based assessment (EBA) requires that investigators employ scientific theories and research findings to guide decisions about what domains to measure, how and when to measure them, and how to make decisions and interpret results. To implement EBA, investigators need high-quality assessment tools along with evidence-based processes. We…
Descriptors: Evidence Based Practice, Evaluation Methods, Special Education, Educational Research
Peer reviewed
Weidlich, Joshua; Gaševic, Dragan; Drachsler, Hendrik – Journal of Learning Analytics, 2022
As a research field geared toward understanding and improving learning, Learning Analytics (LA) must be able to provide empirical support for causal claims. However, as a highly applied field, tightly controlled randomized experiments are not always feasible or desirable. Instead, researchers often rely on observational data, based on which they…
Descriptors: Causal Models, Inferences, Learning Analytics, Comparative Analysis
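The entry above turns on drawing causal claims from observational data. As a minimal, hedged sketch of one common strategy such work relies on, the Python snippet below (simulated data and hypothetical variable names, not drawn from the article) shows how regression adjustment for a measured confounder can move a naive difference in means closer to the true effect, under the strong assumption that no unmeasured confounding remains:

# Minimal sketch: covariate adjustment for a measured confounder in
# observational data (hypothetical variables; not from the article above).
import numpy as np

rng = np.random.default_rng(0)
n = 2000
prior_ability = rng.normal(size=n)                                    # confounder
used_tool = (prior_ability + rng.normal(size=n) > 0).astype(float)    # "treatment"
outcome = 2.0 * used_tool + 3.0 * prior_ability + rng.normal(size=n)  # true effect = 2.0

# Naive difference in means is biased because tool users differ in prior ability.
naive = outcome[used_tool == 1].mean() - outcome[used_tool == 0].mean()

# OLS with the confounder included recovers an estimate near 2.0,
# assuming (strongly) that no unmeasured confounders remain.
X = np.column_stack([np.ones(n), used_tool, prior_ability])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"naive difference: {naive:.2f}, adjusted estimate: {beta[1]:.2f}")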
Zimmerman, Kathleen N.; Ledford, Jennifer R.; Severini, Katherine E.; Pustejovsky, James E.; Barton, Erin E.; Lloyd, Blair P. – Grantee Submission, 2018
Tools for evaluating the quality and rigor of single case research designs (SCD) are often used when conducting SCD syntheses. Preferred components include evaluations of design features related to the internal validity of SCD to obtain quality and/or rigor ratings. Three tools for evaluating the quality and rigor of SCD (Council for Exceptional…
Descriptors: Research Design, Evaluation Methods, Synthesis, Validity
Peer reviewed
Wing, Coady; Bello-Gomez, Ricardo A. – American Journal of Evaluation, 2018
Treatment effect estimates from a "regression discontinuity design" (RDD) have high internal validity. However, the arguments that support the design apply to a subpopulation that is narrower and usually different from the population of substantive interest in evaluation research. The disconnect between RDD population and the…
Descriptors: Regression (Statistics), Research Design, Validity, Evaluation Methods
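The abstract above notes that RDD estimates are internally valid but apply to a narrow subpopulation. As a minimal sketch (simulated data, hand-picked bandwidth, not the authors' procedure), a sharp RDD effect can be estimated by fitting local linear regressions on each side of the cutoff and taking the difference of their fitted values at the cutoff:

# Minimal sketch: sharp regression discontinuity estimate on simulated data.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
score = rng.uniform(-1, 1, n)                # running variable, cutoff at 0
treated = (score >= 0).astype(float)         # sharp assignment rule
y = 0.5 * score + 1.0 * treated + rng.normal(scale=0.3, size=n)   # true jump = 1.0

h = 0.2                                      # bandwidth, chosen by hand here
left = (score < 0) & (score > -h)
right = (score >= 0) & (score < h)

def fit_line(x, y):
    """Return (intercept, slope) of an OLS line fit."""
    slope, intercept = np.polyfit(x, y, 1)   # highest-degree coefficient first
    return intercept, slope

a_left, _ = fit_line(score[left], y[left])
a_right, _ = fit_line(score[right], y[right])
print(f"estimated jump at the cutoff: {a_right - a_left:.2f}")

Because only observations inside the bandwidth inform the estimate, the effect pertains to units near the cutoff, which is precisely the external-validity concern the abstract raises.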
Peer reviewed
Hitchcock, John H.; Johanson, George A. – Research in the Schools, 2015
Understanding the reason(s) for Differential Item Functioning (DIF) in the context of measurement is difficult. Although identifying potential DIF items is typically a statistical endeavor, understanding the reasons for DIF (and item repair or replacement) might require investigations that can be informed by qualitative work. Such work is…
Descriptors: Mixed Methods Research, Test Items, Item Analysis, Measurement
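Since the abstract above treats DIF detection as a statistical endeavor, here is a minimal sketch of one standard screen: logistic regression of item responses on a matching score, group membership, and their interaction (simulated data and made-up variable names, not the authors' analysis). A small p value from the likelihood-ratio test flags the item for the kind of qualitative follow-up the entry describes:

# Minimal sketch: logistic-regression DIF screen for one item, simulated data.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(2)
n = 3000
group = rng.integers(0, 2, n)                    # 0 = reference, 1 = focal
theta = rng.normal(size=n)                       # latent ability
total = theta + rng.normal(scale=0.5, size=n)    # observed matching score
# Simulate an item with uniform DIF against the focal group.
logit = 1.2 * theta - 0.6 * group
item = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Compact model: matching score only.  Augmented model: adds group and
# group-by-score interaction (uniform and nonuniform DIF terms).
X0 = sm.add_constant(np.column_stack([total]))
X1 = sm.add_constant(np.column_stack([total, group, group * total]))
m0 = sm.Logit(item, X0).fit(disp=0)
m1 = sm.Logit(item, X1).fit(disp=0)

lr = 2 * (m1.llf - m0.llf)                       # likelihood-ratio statistic
p = stats.chi2.sf(lr, df=2)
print(f"LR chi-square = {lr:.2f}, p = {p:.4f}")  # small p flags potential DIF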
Peer reviewed
St.Clair, Travis; Cook, Thomas D.; Hallberg, Kelly – American Journal of Evaluation, 2014
Although evaluators often use an interrupted time series (ITS) design to test hypotheses about program effects, there are few empirical tests of the design's validity. We take a randomized experiment on an educational topic and compare its effects to those from a comparative ITS (CITS) design that uses the same treatment group as the experiment…
Descriptors: Time, Evaluation Methods, Measurement Techniques, Research Design
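For readers unfamiliar with the design named above, a minimal sketch of a single interrupted time series analyzed by segmented regression (simulated data, not the study's) is shown below; the comparative ITS design in the abstract adds an untreated comparison series and differences out its level and slope changes:

# Minimal sketch: segmented regression for one interrupted time series.
# Model: y_t = b0 + b1*t + b2*post_t + b3*(t - t0)*post_t + error
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(60, dtype=float)   # 60 time points
t0 = 30                          # interruption (program start)
post = (t >= t0).astype(float)
y = 5 + 0.1 * t + 2.0 * post + 0.05 * (t - t0) * post + rng.normal(scale=0.5, size=60)

X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"level change at interruption: {beta[2]:.2f}")     # generated as 2.0
print(f"slope change after interruption: {beta[3]:.2f}")  # generated as 0.05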
Peer reviewed
Robertson, Clare; Ramsay, Craig; Gurung, Tara; Mowatt, Graham; Pickard, Robert; Sharma, Pawana – Research Synthesis Methods, 2014
We describe our experience of using a modified version of the Cochrane risk of bias (RoB) tool for randomised and non-randomised comparative studies. Objectives: (1) To assess time to complete RoB assessment; (2) To assess inter-rater agreement; and (3) To explore the association between RoB and treatment effect size. Methods: Cochrane risk of…
Descriptors: Risk, Randomized Controlled Trials, Research Design, Comparative Analysis
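Objective (2) above concerns inter-rater agreement, which for categorical risk-of-bias judgments is commonly summarized with Cohen's kappa; a minimal sketch with made-up ratings (not the study's data):

# Minimal sketch: Cohen's kappa for two raters' risk-of-bias categories
# (made-up ratings for illustration only).
from collections import Counter

CATEGORIES = ["low", "unclear", "high"]
rater_a = ["low", "low", "high", "unclear", "low", "high", "unclear", "low"]
rater_b = ["low", "unclear", "high", "unclear", "low", "low", "unclear", "low"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected agreement by chance, from each rater's marginal proportions.
pa, pb = Counter(rater_a), Counter(rater_b)
expected = sum((pa[c] / n) * (pb[c] / n) for c in CATEGORIES)

kappa = (observed - expected) / (1 - expected)
print(f"observed agreement = {observed:.2f}, kappa = {kappa:.2f}")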
Peer reviewed
Marcus, Sue M.; Stuart, Elizabeth A.; Wang, Pei; Shadish, William R.; Steiner, Peter M. – Psychological Methods, 2012
Although randomized studies have high internal validity, generalizability of the estimated causal effect from randomized clinical trials to real-world clinical or educational practice may be limited. We consider the implication of randomized assignment to treatment, as compared with choice of preferred treatment as it occurs in real-world…
Descriptors: Educational Practices, Program Effectiveness, Validity, Causal Models
Peer reviewed
Skinner, Christopher H.; McCleary, Daniel F.; Skolits, Gary L.; Poncy, Brian C.; Cates, Gary L. – Psychology in the Schools, 2013
The success of Response-to-Intervention (RTI) and similar models of service delivery is dependent on educators being able to apply effective and efficient remedial procedures. In the process of implementing problem-solving RTI models, school psychologists have an opportunity to contribute to and enhance the quality of our remedial-procedure…
Descriptors: Response to Intervention, Models, Problem Solving, School Psychologists
Minelli, Rachel M. – ProQuest LLC, 2012
This dissertation reports the results of three studies and a pilot study. The first study was a Monte Carlo validation study that examined the accuracy of a new visual inspection method, the semi-interquartile range method. Results of the study indicated that this method had lower levels of power than a previously validated method, the…
Descriptors: Educational Assessment, Student Evaluation, Evaluation Methods, Preservice Teachers
Peer reviewed
Kratochwill, Thomas R.; Levin, Joel R. – Psychological Methods, 2010
In recent years, single-case designs have increasingly been used to establish an empirical basis for evidence-based interventions and techniques in a variety of disciplines, including psychology and education. Although traditional single-case designs have typically not met the criteria for a randomized controlled trial relative to conventional…
Descriptors: Research Design, Intervention, Evidence, Educational Research
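One way randomization enters single-case research of the kind discussed above is by randomly selecting the intervention start point in an AB design and then analyzing the data with a randomization test; the sketch below (simulated data, not the authors' procedure) illustrates the logic:

# Minimal sketch: randomization test for a single-case AB design in which
# the intervention start point was selected at random (simulated data).
import numpy as np

rng = np.random.default_rng(4)
T = 20
possible_starts = range(5, 16)     # start points eligible under the design
actual_start = 10                  # the start point that was randomly drawn
y = rng.normal(size=T) + np.where(np.arange(T) >= actual_start, 1.5, 0.0)

def mean_shift(series, start):
    """Test statistic: mean of the B phase minus mean of the A phase."""
    return series[start:].mean() - series[:start].mean()

observed = mean_shift(y, actual_start)
# Reference distribution: the statistic recomputed at every start point
# that could have been drawn under the randomization scheme.
reference = [mean_shift(y, s) for s in possible_starts]
p = np.mean([stat >= observed for stat in reference])
print(f"observed shift = {observed:.2f}, randomization p = {p:.2f}")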
Peer reviewed
Snell, Joel C.; Marsh, Mitchell – Journal of Instructional Psychology, 2011
The authors have, over the years, tried to revise meta-analysis because its basic premise is to add apples and oranges together and analyze them. In other words, various data on the same subject are chosen using different samples, research strategies, and number properties. The findings are then homogenized and a statistical analysis is used (Snell, J.…
Descriptors: Research Methodology, Statistical Analysis, Teacher Attitudes, Meta Analysis
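The "apples and oranges" complaint above is aimed at the pooling step of meta-analysis; a minimal sketch of fixed-effect inverse-variance pooling with made-up effect sizes shows the mechanics being criticized, along with Cochran's Q as a routine heterogeneity check:

# Minimal sketch: fixed-effect inverse-variance pooling of standardized
# effect sizes from several studies (made-up numbers for illustration).
import math

effects   = [0.30, 0.55, 0.10, 0.42]   # standardized mean differences
variances = [0.02, 0.05, 0.01, 0.04]   # their sampling variances

weights = [1 / v for v in variances]
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))

# Cochran's Q: a common check on whether the studies are too heterogeneous
# ("apples and oranges") for a single pooled estimate to be meaningful.
q = sum(w * (d - pooled) ** 2 for w, d in zip(weights, effects))
print(f"pooled effect = {pooled:.2f} (SE {se_pooled:.2f}), Q = {q:.2f}")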
Peer reviewed
Nyhus, Erika; Barcelo, Francisco – Brain and Cognition, 2009
For over four decades the Wisconsin Card Sorting Test (WCST) has been one of the most distinctive tests of prefrontal function. Clinical research and recent brain imaging have brought into question the validity and specificity of this test as a marker of frontal dysfunction. Clinical studies with neurological patients have confirmed that, in its…
Descriptors: Research Design, Construct Validity, Validity, Neurology
Peer reviewed
Killeen, Peter R. – Psychological Methods, 2010
Lecoutre, Lecoutre, and Poitevineau (2010) have provided sophisticated grounding for "p_rep." Computing it precisely appears, fortunately, no more difficult than doing so approximately. Their analysis will help move predictive inference into the mainstream. Iverson, Wagenmakers, and Lee (2010) have also validated…
Descriptors: Replication (Evaluation), Measurement Techniques, Research Design, Research Methodology
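For context on the quantity debated above: p_rep is usually reported as a transform of the observed p value. Below is a minimal sketch of the commonly cited normal-theory form, p_rep = Phi(z / sqrt(2)) with z = Phi^{-1}(1 - p), alongside the closed-form approximation 1 / (1 + (p / (1 - p))^(2/3)); the one-tailed convention used here is an assumption of this sketch, not something taken from the exchange above.

# Minimal sketch: Killeen's p_rep from a p value (one-tailed convention assumed).
from math import sqrt
from statistics import NormalDist

def p_rep_normal(p: float) -> float:
    """Normal-theory form: Phi(Phi^{-1}(1 - p) / sqrt(2))."""
    z = NormalDist().inv_cdf(1 - p)
    return NormalDist().cdf(z / sqrt(2))

def p_rep_approx(p: float) -> float:
    """Closed-form approximation: 1 / (1 + (p / (1 - p)) ** (2 / 3))."""
    return 1 / (1 + (p / (1 - p)) ** (2 / 3))

for p in (0.05, 0.01):
    print(f"p = {p}: normal form {p_rep_normal(p):.3f}, approximation {p_rep_approx(p):.3f}")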