Publication Date
In 2025 | 0 |
Since 2024 | 1 |
Since 2021 (last 5 years) | 4 |
Since 2016 (last 10 years) | 34 |
Since 2006 (last 20 years) | 113 |
Descriptor
Evaluation Methods | 195 |
Statistical Analysis | 195 |
Research Design | 115 |
Research Methodology | 67 |
Program Evaluation | 44 |
Foreign Countries | 35 |
Comparative Analysis | 32 |
Quasiexperimental Design | 32 |
Models | 31 |
Program Effectiveness | 27 |
Educational Research | 26 |
Author
Burstein, Leigh | 2 |
Delaney, Anne Marie | 2 |
Dorans, Neil J. | 2 |
Epstein, Dale | 2 |
Li, Weilin | 2 |
Lowe, Claire | 2 |
Onghena, Patrick | 2 |
Parker, Richard I. | 2 |
Reese, Debbie Denise | 2 |
Schochet, Peter Z. | 2 |
Steiner, Peter M. | 2 |
Location
Iran | 4 |
Australia | 3 |
Ohio | 3 |
Turkey | 3 |
California | 2 |
Florida | 2 |
Germany | 2 |
Israel | 2 |
Italy | 2 |
Pennsylvania (Philadelphia) | 2 |
Taiwan | 2 |
Laws, Policies, & Programs
Elementary and Secondary… | 3 |
Civil Rights Act 1964 | 1 |
Indian Education Act 1972… | 1 |
No Child Left Behind Act 2001 | 1 |
Assessments and Surveys
National Assessment of… | 2 |
Foreign Language Classroom… | 1 |
Personal Orientation Inventory | 1 |
Test of Science Related… | 1 |
Lanqin Zheng; Zichen Huang; Yang Liu – Journal of Learning for Development, 2024
In recent years, the growing incidence of blended and online learning has highlighted instructional design concerns, especially STEM instructional design. Existing studies have often adopted observations, questionnaires, or interviews to evaluate STEM instructional design plans. However, there is still a lack of quantitative, measurable, and…
Descriptors: STEM Education, Preservice Teachers, Information Transfer, Statistical Analysis
Paul J. Dizona – ProQuest LLC, 2022
Missing data is a common challenge for researchers in almost any field. In particular, human participants do not always respond or return for assessments, leaving the researcher to rely on missing data methods. The most common methods (i.e., Multiple Imputation and Full Information Maximum Likelihood) assume that the…
Descriptors: Pretests Posttests, Research Design, Research Problems, Dropouts
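For readers unfamiliar with the first of the two methods named in this entry, the following is a minimal, generic sketch of producing several imputed datasets in the spirit of Multiple Imputation. The pretest/posttest data, missingness rate, and use of scikit-learn's IterativeImputer are illustrative assumptions, not the author's analysis.

```python
# Illustrative sketch (hypothetical data; not from the dissertation):
# draw m completed datasets for a pretest/posttest matrix with missing posttests.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(1)
pre = rng.normal(50, 10, size=100)
post = pre + rng.normal(5, 5, size=100)
data = np.column_stack([pre, post])
data[rng.random(100) < 0.3, 1] = np.nan   # roughly 30% of posttests missing

# Generate m imputed datasets; in practice each would be analyzed and the
# results pooled across imputations.
m = 5
imputed = [
    IterativeImputer(sample_posterior=True, random_state=k).fit_transform(data)
    for k in range(m)
]
print([round(d[:, 1].mean(), 2) for d in imputed])  # posttest mean per draw
```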
Ozsoy, Seyma Nur; Kilmen, Sevilay – International Journal of Assessment Tools in Education, 2023
In this study, Kernel test equating methods were compared under NEAT and NEC designs. In NEAT design, Kernel post-stratification and chain equating methods taking into account optimal and large bandwidths were compared. In the NEC design, gender and/or computer/tablet use was considered as a covariate, and Kernel test equating methods were…
Descriptors: Equated Scores, Testing, Test Items, Statistical Analysis
Kassab, Omar; Mutz, Rüdiger; Daniel, Hans-Dieter – Research Evaluation, 2020
With the growing complexity of societal and scientific problems, research centers have emerged to facilitate the conduct of research beyond disciplinary and institutional boundaries. While they have become firmly established in the global university landscape, research centers raise some critical questions for research evaluation. Existing…
Descriptors: Statistical Analysis, Research and Development Centers, Evaluation Methods, Quasiexperimental Design
To, Jessica; Panadero, Ernesto; Carless, David – Assessment & Evaluation in Higher Education, 2022
The analysis of exemplars of different quality is a potentially powerful tool in enabling students to understand assessment expectations and appreciate academic standards. Through a systematic review methodology, this paper synthesises exemplar-based research designs, exemplar implementation and the educational effects of exemplars. The review of…
Descriptors: Research Design, Scoring Rubrics, Peer Evaluation, Self Evaluation (Individuals)
The Effect of Data Points per x- to y-Axis Ratio on Visual Analysts' Evaluation of Single-Case Graphs
Radley, Keith C.; Dart, Evan H.; Wright, Sarah J. – School Psychology Quarterly, 2018
Research based on single-case designs (SCD) is frequently utilized in educational settings to evaluate the effect of an intervention on student behavior. Visual analysis is the primary method of evaluation of SCD, despite research noting concerns regarding the reliability of the procedure. Recent research suggests that characteristics of the graphic…
Descriptors: Graphs, Evaluation Methods, Data, Intervention
Zimmerman, Kathleen N.; Pustejovsky, James E.; Ledford, Jennifer R.; Barton, Erin E.; Severini, Katherine E.; Lloyd, Blair P. – Grantee Submission, 2018
Varying methods for evaluating the outcomes of single case research designs (SCD) are currently used in reviews and meta-analyses of interventions. Quantitative effect size measures are often presented alongside visual analysis conclusions. Six measures across two classes--overlap measures (percentage non-overlapping data, improvement rate…
Descriptors: Research Design, Evaluation Methods, Synthesis, Intervention
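Of the overlap measures named in this entry, percentage of non-overlapping data (PND) is the simplest to illustrate. The sketch below uses hypothetical single-case data and is not drawn from the review itself.

```python
# Illustrative sketch: percentage of non-overlapping data (PND) for an AB
# design where the target behavior is expected to increase.

def pnd(baseline, intervention):
    """Percent of intervention points exceeding the highest baseline point."""
    if not baseline or not intervention:
        raise ValueError("Both phases need at least one data point.")
    ceiling = max(baseline)
    exceeding = sum(1 for x in intervention if x > ceiling)
    return 100.0 * exceeding / len(intervention)

# Hypothetical data for one participant.
baseline_phase = [2, 3, 2, 4, 3]
intervention_phase = [5, 6, 4, 7, 8, 6]
print(f"PND = {pnd(baseline_phase, intervention_phase):.1f}%")  # -> 83.3%
```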
Peralta, Yadira; Moreno, Mario; Harwell, Michael; Guzey, S. Selcen; Moore, Tamara J. – Educational Research Quarterly, 2018
Variance heterogeneity is a common feature of educational data when treatment differences expressed through means are present, and often reflects a treatment by subject interaction with respect to an outcome variable. Identifying variables that account for this interaction can enhance understanding of whom a treatment does and does not benefit in…
Descriptors: Educational Research, Hierarchical Linear Modeling, Engineering, Design
Girginov, Vassil – European Physical Education Review, 2016
The organisers of the 2012 London Olympics have endeavoured explicitly to use the Games to inspire a generation. This is nothing short of putting the main claim of Olympism to the test, but surprisingly the Inspire project has received virtually no scholarly scrutiny. Using an educationally-informed view of inspiration, this paper interrogates the…
Descriptors: Athletics, Evidence, Foreign Countries, Research Design
Guasch, Marc; Haro, Juan; Boada, Roger – Psicologica: International Journal of Methodology and Experimental Psychology, 2017
With the increasing refinement of language processing models and the new discoveries about which variables can modulate these processes, stimuli selection for experiments with a factorial design is becoming a tough task. Selecting sets of words that differ in one variable, while matching these same words on dozens of other confounding variables…
Descriptors: Factor Analysis, Language Processing, Design, Cluster Grouping
Avsec, Stanislav; Jamšek, Janez – International Journal of Technology and Design Education, 2018
Technological literacy defines a competitive vision for technology education. Working together with competitive supremacy, technological literacy shapes the actions of technology educators. Rationalised by the dictates of industry, technological literacy was constructed as a product of the marketplace. There are many models that visualise…
Descriptors: Technological Literacy, Secondary School Students, Technology Education, Evaluation Methods
Company, Pedro; Contero, Manuel; Otey, Jeffrey; Camba, Jorge D.; Agost, María-Jesús; Pérez-López, David – Educational Technology & Society, 2017
This paper describes the implementation and testing of our concept of adaptable rubrics, defined as analytical rubrics that arrange assessment criteria at multiple levels that can be expanded on demand. Because of their adaptable nature, these rubrics cannot be implemented in paper formats, nor are they supported by current Learning Management…
Descriptors: Scoring Rubrics, Case Studies, Evaluation Methods, Formative Evaluation
Cecile C. Dietrich; Eric J. Lichtenberger – Sage Research Methods Cases, 2016
We present a case study of the process through which a methodology was developed and applied to a quasi-experimental research study that employed propensity score matching. Methodological decisions are discussed and summarized, including an explanation of the approaches selected for each step in the study as well as rationales for these…
Descriptors: Test Construction, Quasiexperimental Design, Community Colleges, Fees
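As generic context for the technique this case study walks through, a minimal propensity-score-matching sketch follows. The data, covariates, logistic propensity model, and greedy 1:1 nearest-neighbor matching rule are hypothetical assumptions, not the authors' procedure.

```python
# Illustrative sketch: 1:1 nearest-neighbor propensity score matching.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical covariates and treatment indicator.
X = rng.normal(size=(200, 3))            # three baseline covariates
treat = (rng.random(200) < 0.4).astype(int)

# Step 1: estimate propensity scores P(treatment = 1 | X).
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]

# Step 2: greedy 1:1 matching of each treated unit to the closest
# still-unmatched control on the propensity score.
treated_idx = np.where(treat == 1)[0]
available = set(np.where(treat == 0)[0])
pairs = []
for t in treated_idx:
    if not available:
        break
    best = min(available, key=lambda c: abs(ps[t] - ps[c]))
    pairs.append((t, best))
    available.remove(best)

print(f"Matched {len(pairs)} treated units to distinct controls.")
```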
Westlund, Erik; Stuart, Elizabeth A. – American Journal of Evaluation, 2017
This article discusses the nonuse, misuse, and proper use of pilot studies in experimental evaluation research. The authors first show that there is little theoretical, practical, or empirical guidance available to researchers who seek to incorporate pilot studies into experimental evaluation research designs. The authors then discuss how pilot…
Descriptors: Use Studies, Pilot Projects, Evaluation Research, Experiments
Steiner, Peter M.; Wong, Vivian – Society for Research on Educational Effectiveness, 2016
Despite recent emphasis on the use of randomized control trials (RCTs) for evaluating education interventions, in most areas of education research, observational methods remain the dominant approach for assessing program effects. Over the last three decades, the within-study comparison (WSC) design has emerged as a method for evaluating the…
Descriptors: Randomized Controlled Trials, Comparative Analysis, Research Design, Evaluation Methods