Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2022
This article develops new closed-form variance expressions for power analyses for commonly used difference-in-differences (DID) and comparative interrupted time series (CITS) panel data estimators. The main contribution is to incorporate variation in treatment timing into the analysis. The power formulas also account for other key design features…
Descriptors: Comparative Analysis, Statistical Analysis, Sample Size, Measurement Techniques
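As context for the entry above: Schochet's closed-form variance expressions are specific to DID and CITS panel estimators with staggered treatment timing, but they ultimately feed a standard normal-approximation power calculation. Below is a minimal Python sketch of that generic calculation only; the effect and standard-error values are illustrative assumptions, not figures from the article.

    # Generic normal-approximation power for a two-sided test of a DID
    # estimate. This is NOT Schochet's variance expression; the SE is
    # simply assumed known here.
    from scipy.stats import norm

    def did_power(effect, se, alpha=0.05):
        """Approximate power given an assumed true effect and estimator SE."""
        z_crit = norm.ppf(1 - alpha / 2)
        return norm.cdf(abs(effect) / se - z_crit)

    # Illustrative values: a 0.25 SD effect with an estimator SE of 0.08 SD.
    print(f"power = {did_power(0.25, 0.08):.3f}")  # ~0.88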
What Works Clearinghouse, 2018
Underlying all What Works Clearinghouse (WWC) products are WWC Study Review Guides, which are intended for use by WWC-certified reviewers to assess studies against the WWC evidence standards. As part of an ongoing effort to increase transparency, promote collaboration, and encourage widespread use of the WWC standards, the Institute of Education…
Descriptors: Guides, Research Design, Research Methodology, Program Evaluation
What Works Clearinghouse, 2016
This document provides step-by-step instructions on how to complete the Study Review Guide (SRG, Version S3, V2) for single-case designs (SCDs). Reviewers will complete an SRG for every What Works Clearinghouse (WWC) review. A completed SRG should be a reviewer's independent assessment of the study, relative to the criteria specified in the review…
Descriptors: Guides, Research Design, Research Methodology, Program Evaluation
Solmeyer, Anna R.; Constance, Nicole – American Journal of Evaluation, 2015
Traditionally, evaluation has primarily tried to answer the question "Does a program, service, or policy work?" Recently, more attention has been given to questions about variation in program effects and the mechanisms through which program effects occur. Addressing these kinds of questions requires moving beyond assessing average program…
Descriptors: Program Effectiveness, Program Evaluation, Program Content, Measurement Techniques
Murawska, Jaclyn M.; Walker, David A. – Mid-Western Educational Researcher, 2017
In this commentary, we offer a set of visual tools that can assist education researchers, especially those in the field of mathematics, in developing cohesiveness from a mixed methods perspective, commencing at a study's research questions and literature review, through its data collection and analysis, and finally to its results. This expounds…
Descriptors: Mixed Methods Research, Research Methodology, Visual Aids, Research Tools
Stapleton, Laura M.; Pituch, Keenan A.; Dion, Eric – Journal of Experimental Education, 2015
This article presents 3 standardized effect size measures to use when sharing results of an analysis of mediation of treatment effects for cluster-randomized trials. The authors discuss 3 examples of mediation analysis (upper-level mediation, cross-level mediation, and cross-level mediation with a contextual effect) with demonstration of the…
Descriptors: Effect Size, Measurement Techniques, Statistical Analysis, Research Design
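For readers unfamiliar with the building blocks: the article's three measures target cluster-randomized, multilevel settings, but the underlying indirect effect is the familiar product of the a path (treatment to mediator) and the b path (mediator to outcome). A minimal single-level sketch on simulated data, not the article's cluster-level estimators:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    t = rng.integers(0, 2, n)                    # treatment indicator
    m = 0.5 * t + rng.normal(size=n)             # mediator
    y = 0.4 * m + 0.2 * t + rng.normal(size=n)   # outcome

    def ols_coefs(x_cols, y):
        """OLS coefficients with an intercept prepended."""
        X = np.column_stack([np.ones(len(y))] + x_cols)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta

    a = ols_coefs([t], m)[1]       # treatment -> mediator
    b = ols_coefs([m, t], y)[1]    # mediator -> outcome, given treatment
    # One simple standardization: divide the indirect effect by the outcome SD.
    print(f"standardized indirect effect = {a * b / y.std(ddof=1):.3f}")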
Liu, Xiaofeng Steven – Evaluation Review, 2011
Covariate adjustment can increase the precision of estimates by removing unexplained variance from the error in randomized experiments, although chance covariate imbalance tends to counteract the improvement in precision. The author develops an easy measure to examine chance covariate imbalance in randomization by standardizing the average…
Descriptors: Measurement Techniques, Statistical Analysis, Experiments, Research Design
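The generic form of such a check is the standardized difference of covariate means between arms; a minimal sketch (this is the textbook diagnostic, not necessarily Liu's exact measure):

    import numpy as np

    def standardized_imbalance(x_treat, x_ctrl):
        """Mean covariate difference between arms over the pooled SD."""
        diff = x_treat.mean() - x_ctrl.mean()
        pooled_sd = np.sqrt((x_treat.var(ddof=1) + x_ctrl.var(ddof=1)) / 2)
        return diff / pooled_sd

    rng = np.random.default_rng(0)
    x = rng.normal(size=200)              # a baseline covariate
    treat = rng.permutation(200) < 100    # random assignment, 100 per arm
    print(f"standardized imbalance = {standardized_imbalance(x[treat], x[~treat]):.3f}")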
Taylor, Joseph; Kowalski, Susan; Wilson, Christopher; Getty, Stephen; Carlson, Janet – Journal of Research in Science Teaching, 2013
This paper focuses on the trade-offs that lie at the intersection of methodological requirements for causal effect studies and policies that affect how and to what extent schools engage in such studies. More specifically, current federal funding priorities encourage large-scale randomized studies of interventions in authentic settings. At the same…
Descriptors: Science Instruction, Research Methodology, Causal Models, Influences
Lin, Yi-Chun; Hsieh, Ya-Hui; Hou, Huei-Tse – Journal of Educational Computing Research, 2015
The development of a usability evaluation method for educational systems or applications, called the self-report-based sequential analysis, is described herein. The method aims to extend the current practice by proposing self-report-based sequential analysis as a new usability method, which integrates the advantages of self-report in survey…
Descriptors: Foreign Countries, Elementary School Students, Educational Technology, Sequential Approach
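The sequential analysis that the article extends reduces, at its core, to counting lag-1 transitions between coded events and comparing observed with expected frequencies. A minimal sketch with made-up data, using Bakeman-and-Gottman-style adjusted residuals rather than the article's exact procedure:

    from collections import Counter
    import math

    events = list("ABACABCABBAC")          # an illustrative coded event stream
    codes = sorted(set(events))
    pairs = Counter(zip(events, events[1:]))
    n = len(events) - 1                    # number of lag-1 transitions
    row = {c: sum(v for (a, _), v in pairs.items() if a == c) for c in codes}
    col = {c: sum(v for (_, b), v in pairs.items() if b == c) for c in codes}

    for a in codes:
        for b in codes:
            obs = pairs.get((a, b), 0)
            exp = row[a] * col[b] / n
            if exp == 0:
                continue
            # Adjusted residual: how far the observed count sits from chance.
            z = (obs - exp) / math.sqrt(exp * (1 - row[a] / n) * (1 - col[b] / n))
            print(f"{a}->{b}: observed={obs}, expected={exp:.2f}, z={z:+.2f}")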
Haardoerfer, Regine – ProQuest LLC, 2010
Hierarchical Linear Modeling (HLM) sample size recommendations are mostly made with traditional group-design research in mind, as HLM has been used almost exclusively in group-design studies. Single-case research can benefit from utilizing hierarchical linear growth modeling, but sample size recommendations for growth modeling with HLM are scarce…
Descriptors: Sample Size, Monte Carlo Methods, Research Methodology, Research Design
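The Monte Carlo approach named in the descriptors can be sketched compactly: simulate growth data for a handful of cases, fit a two-level growth model, and record how often the average slope is detected. All values below are illustrative assumptions, not the dissertation's conditions; statsmodels is assumed available.

    import warnings
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)

    def one_rep(n_cases=8, n_waves=10, slope=0.3):
        """Simulate linear growth for n_cases, fit y ~ time with a random
        intercept and slope, and report whether the slope is significant."""
        rows = []
        for i in range(n_cases):
            u0, u1 = rng.normal(0, [0.5, 0.1])   # case-level random effects
            for t in range(n_waves):
                rows.append((i, t, u0 + (slope + u1) * t + rng.normal(0, 1)))
        df = pd.DataFrame(rows, columns=["case", "time", "y"])
        fit = smf.mixedlm("y ~ time", df, groups=df["case"],
                          re_formula="~time").fit()
        return fit.pvalues["time"] < 0.05

    with warnings.catch_warnings():
        warnings.simplefilter("ignore")          # silence convergence chatter
        power = np.mean([one_rep() for _ in range(100)])
    print(f"estimated power for the growth slope ~= {power:.2f}")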
Methodology in Our Education Research Culture: Toward a Stronger Collective Quantitative Proficiency
Henson, Robin K.; Hull, Darrell M.; Williams, Cynthia S. – Educational Researcher, 2010
How doctoral programs train future researchers in quantitative methods has important implications for the quality of scientifically based research in education. The purpose of this article, therefore, is to examine how quantitative methods are used in the literature and taught in doctoral programs. Evidence points to deficiencies in quantitative…
Descriptors: Doctoral Programs, Educational Research, Researchers, Research Design
Suskie, Linda A. – 1988
A guide to survey research is presented for both novice and experienced researchers. Steps of the survey research process are covered: (1) planning the survey to determine the purpose of the study, collecting background information, designing the sample, and making a time line for completing the project; (2) questionnaire design, including the…
Descriptors: Higher Education, Institutional Research, Measurement Techniques, Questionnaires
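For the sample-design step the abstract mentions, the standard sample-size formula for estimating a proportion is the usual companion; a minimal sketch using the conventional 95% confidence level and worst-case p = 0.5:

    import math

    def survey_sample_size(margin=0.05, z=1.96, p=0.5):
        """n = z^2 * p * (1 - p) / margin^2, rounded up."""
        return math.ceil(z**2 * p * (1 - p) / margin**2)

    print(survey_sample_size())  # 385 for +/-5 points at 95% confidence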
von Eye, Alexander; Schuster, Christof; Kreppner, Kurt – Journal of Adolescent Research, 2001
Discusses the effects of sampling scheme selection on the admissibility of log-linear models for multinomial and product multinomial sampling schemes for prospective and retrospective sampling. Notes that in multinomial sampling, marginal frequencies are not fixed, whereas for product multinomial sampling, uni- or multidimensional frequencies are…
Descriptors: Measurement Techniques, Models, Research Design, Research Methodology
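For orientation: a log-linear model can be fit as a Poisson regression on cell counts, which is why the sampling scheme matters chiefly for which margins are fixed by design. A minimal independence-model sketch on an illustrative 2x2 table, not the article's analysis:

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Made-up cell counts for a 2x2 table.
    df = pd.DataFrame({"a": [0, 0, 1, 1],
                       "b": [0, 1, 0, 1],
                       "count": [30, 10, 20, 40]})

    # Independence log-linear model: log(mu) = intercept + a + b. The Poisson
    # likelihood reproduces the multinomial MLEs as long as the terms for any
    # design-fixed margins are included in the model.
    fit = smf.glm("count ~ a + b", data=df, family=sm.families.Poisson()).fit()
    print(fit.summary())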
Snow, Richard E. – 1966
The selection and use of response variables in educational experiments using multivariate analysis were considered and assessed. Response variables were deemed central concerns for either theoretical or practically oriented research, and their complexity was dealt with under the headings of aptitude input measures, repeated learning measures,…
Descriptors: Achievement Rating, Aptitude Tests, Educational Research, Learning Processes
Marks, Stephen E.; And Others – Journal of Counseling Psychology, 1973
The authors of this article contend that the Guinan and Foulds study was inadequately designed and executed, and the results indicate little of the "usefulness" of the test, much less illuminate the important hypothesis central to the investigation. Specific suggestions for further research in marathon group evaluation are made. (Author)
Descriptors: Evaluation Methods, Group Counseling, Measurement Techniques, Reliability