Publication Date
In 2025 (1)
Since 2024 (5)
Since 2021, last 5 years (11)
Since 2016, last 10 years (24)
Since 2006, last 20 years (44)
Descriptor
Causal Models (58)
Validity (58)
Inferences (17)
Research Methodology (16)
Research Design (14)
Statistical Analysis (13)
Evaluation Methods (12)
Comparative Analysis (10)
Reliability (10)
Educational Research (9)
Predictor Variables (8)
Education Level
Higher Education (5)
Postsecondary Education (4)
Elementary Secondary Education (1)
Secondary Education (1)
Audience
Researchers (7)
Practitioners (2)
Teachers (2)
Students (1)
Location
Australia (1)
Canada (1)
Kenya (1)
Norway (1)
Qatar (1)
Rhode Island (1)
United Kingdom (1)
Assessments and Surveys
Maslach Burnout Inventory (1)
Peabody Picture Vocabulary… (1)
Wechsler Adult Intelligence… (1)
Kylie Anglin; Qing Liu; Vivian C. Wong – Asia Pacific Education Review, 2024
Given that decision-makers often prioritize causal research that identifies the impact of treatments on the people they serve, a key question in education research is, "Does it work?" Today, however, researchers are paying increasing attention to successive questions that are equally important from a practical standpoint--not only does it…
Descriptors: Educational Research, Program Evaluation, Validity, Classification
David Rutkowski; Leslie Rutkowski; Greg Thompson; Yusuf Canbolat – Large-scale Assessments in Education, 2024
This paper scrutinizes the increasing trend of using international large-scale assessment (ILSA) data for causal inferences in educational research, arguing that such inferences are often tenuous. We explore the complexities of causality within ILSAs, highlighting the methodological constraints that challenge the validity of causal claims derived…
Descriptors: International Assessment, Data Use, Causal Models, Educational Research
Wendy Chan – Asia Pacific Education Review, 2024
As evidence from evaluation and experimental studies continues to influence decision making and policymaking, applied researchers and practitioners require tools to derive valid and credible inferences. Over the past several decades, research in causal inference has progressed with the development and application of propensity scores. Since their…
Descriptors: Probability, Scores, Causal Models, Statistical Inference
Lund, Thorleif – Scandinavian Journal of Educational Research, 2021
The purpose of this paper is to propose a revision of the well-known Campbellian system for causal research. The revised system, termed the COPS model, applies to both applied and basic research. Five validities are included, two of which are adopted from the Campbellian system, and the validities are partly hierarchically ordered.…
Descriptors: Research, Validity, Causal Models, Measurement
Garret J. Hall; Sophia Putzeys; Thomas R. Kratochwill; Joel R. Levin – Educational Psychology Review, 2024
Single-case experimental designs (SCEDs) have a long history in clinical and educational disciplines. One underdeveloped area in advancing SCED design and analysis is understanding the process of how internal validity threats and operational concerns are avoided or mitigated. Two strategies to ameliorate such issues in SCED involve replication and…
Descriptors: Research Design, Graphs, Case Studies, Validity
Ashley L. Watts; Ashley L. Greene; Wes Bonifay; Eiko L. Fried – Grantee Submission, 2023
The p-factor is a construct that is thought to explain and maybe even cause variation in all forms of psychopathology. Since its 'discovery' in 2012, hundreds of studies have been dedicated to the extraction and validation of statistical instantiations of the p-factor, called general factors of psychopathology. In this Perspective, we outline five…
Descriptors: Causal Models, Psychopathology, Goodness of Fit, Validity
Reichardt, Charles S. – American Journal of Evaluation, 2022
Evaluators are often called upon to assess the effects of programs. To assess a program effect, evaluators need a clear understanding of how a program effect is defined. Arguably, the most widely used definition of a program effect is the counterfactual one. According to the counterfactual definition, a program effect is the difference between…
Descriptors: Program Evaluation, Definitions, Causal Models, Evaluation Methods
Mahmoud M. S. Abdallah – Online Submission, 2025
This guide offers a comprehensive handbook to scientific research methodology and experimental design, specifically for novice MA and PhD researchers in Education and Language Learning (TESOL/TEFL). It establishes scientific research as a systematic, objective inquiry focused on identifying cause-and-effect relationships through empirical data.…
Descriptors: Scientific Research, Research Methodology, Research Design, Second Language Learning
Fansher, Madison; Adkins, Tyler J.; Shah, Priti – Grantee Submission, 2022
Media articles often communicate the latest scientific findings, and readers must evaluate the evidence and consider its potential implications. Prior work has found that the inclusion of graphs makes messages about scientific data more persuasive (Tal & Wansink, 2016). One explanation for this finding is that such visualizations evoke the…
Descriptors: Graphs, Correlation, Visual Aids, News Reporting
Kylie Anglin – Society for Research on Educational Effectiveness, 2022
Background: For decades, education researchers have relied on the work of Campbell, Cook, and Shadish to help guide their thinking about valid impact estimates in the social sciences (Campbell & Stanley, 1963; Shadish et al., 2002). The foundation of this work is the "validity typology" and its associated "threats to…
Descriptors: Artificial Intelligence, Educational Technology, Technology Uses in Education, Validity
Weidlich, Joshua; Gašević, Dragan; Drachsler, Hendrik – Journal of Learning Analytics, 2022
As a research field geared toward understanding and improving learning, Learning Analytics (LA) must be able to provide empirical support for causal claims. However, as a highly applied field, tightly controlled randomized experiments are not always feasible nor desirable. Instead, researchers often rely on observational data, based on which they…
Descriptors: Causal Models, Inferences, Learning Analytics, Comparative Analysis
Kane, Mike – Measurement: Interdisciplinary Research and Perspectives, 2017
In the article "Rethinking Traditional Methods of Survey Validation," Andrew Maul describes a minimalist validation methodology for survey instruments, which he suggests is widely used in some areas of psychology, and then critiques this methodology empirically and conceptually. He provides a reductio ad absurdum argument by showing that…
Descriptors: Surveys, Validity, Psychological Characteristics, Methods
Cadogan, John W.; Lee, Nick – Measurement: Interdisciplinary Research and Perspectives, 2016
In this commentary from Issue 14, n3, authors John Cadogan and Nick Lee applaud the paper by Aguirre-Urreta, Rönkkö, and Marakas "Measurement: Interdisciplinary Research and Perspectives", 14(3), 75-97 (2016), since their explanations and simulations work toward demystifying causal indicator models, which are often used by scholars…
Descriptors: Causal Models, Measurement, Validity, Statistical Analysis
Joyce, Kathryn E. – Educational Research and Evaluation, 2019
Within evidence-based education, results from randomised controlled trials (RCTs), and meta-analyses of them, are taken as reliable evidence for effectiveness -- they speak to "what works". Extending RCT results requires establishing that study samples and settings are representative of the intended target. Although widely recognised as…
Descriptors: Evidence Based Practice, Educational Research, Instructional Effectiveness, Randomized Controlled Trials
Ding, Peng; Dasgupta, Tirthankar – Grantee Submission, 2017
Fisher randomization tests for Neyman's null hypothesis of no average treatment effects are considered in a finite population setting associated with completely randomized experiments with more than two treatments. The consequences of using the F statistic to conduct such a test are examined both theoretically and computationally, and it is argued…
Descriptors: Statistical Analysis, Statistical Inference, Causal Models, Error Patterns