Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 1
Since 2016 (last 10 years): 1
Since 2006 (last 20 years): 8
Descriptor
Evaluation Methods: 29
Hypothesis Testing: 29
Research Design: 29
Research Methodology: 12
Educational Research: 10
Research Problems: 10
Validity: 8
Program Evaluation: 7
Statistical Analysis: 7
Data Analysis: 6
Mathematical Models: 6
Source
Author
Algina, James: 1
Anderson, Judith I.: 1
Byrd, Jimmy K.: 1
Conquest, Loveday L.: 1
Cox, Pamela L.: 1
De Groot, Adriaan D.: 1
De Los Reyes, Andres: 1
Echternacht, Gary: 1
Estes, Gary D.: 1
Kearns, Devin M.: 1
Talbott, Elizabeth: 1
Publication Type
Journal Articles: 15
Reports - Research: 11
Reports - Evaluative: 6
Speeches/Meeting Papers: 3
Guides - Non-Classroom: 2
Opinion Papers: 2
ERIC Publications: 1
Reports - Descriptive: 1
Tests/Questionnaires: 1
Audience
Researchers: 3
Location
Connecticut: 1
Illinois: 1
Laws, Policies, & Programs
Elementary and Secondary…: 2
Assessments and Surveys
California Achievement Tests: 1
What Works Clearinghouse Rating
Talbott, Elizabeth; De Los Reyes, Andres; Kearns, Devin M.; Mancilla-Martinez, Jeannette; Wang, Mo – Exceptional Children, 2023
Evidence-based assessment (EBA) requires that investigators employ scientific theories and research findings to guide decisions about what domains to measure, how and when to measure them, and how to make decisions and interpret results. To implement EBA, investigators need high-quality assessment tools along with evidence-based processes. We…
Descriptors: Evidence Based Practice, Evaluation Methods, Special Education, Educational Research
Gorin, Joanna S. – Teachers College Record, 2014
Background/Context: Principles of evidential reasoning have often been discussed in the context of educational and psychological measurement with respect to construct validity and validity arguments. More recently, Mislevy proposed the metaphor of assessment as an evidentiary argument about students' learning and abilities given their…
Descriptors: Educational Assessment, Educational Practices, Barriers, Evidence
Newman, Denis; Jaciw, Andrew P. – Empirical Education Inc., 2012
The motivation for this paper is the authors' recent work on several randomized control trials in which they found the primary result, which averaged across subgroups or sites, to be moderated by demographic or site characteristics. They are led to examine a distinction that the Institute of Education Sciences (IES) makes between "confirmatory"…
Descriptors: Educational Research, Research Methodology, Research Design, Classification
Killeen, Peter R. – Psychological Methods, 2010
Lecoutre, Lecoutre, and Poitevineau (2010) have provided sophisticated grounding for "p[subscript rep]." Computing it precisely appears, fortunately, no more difficult than doing so approximately. Their analysis will help move predictive inference into the mainstream. Iverson, Wagenmakers, and Lee (2010) have also validated…
Descriptors: Replication (Evaluation), Measurement Techniques, Research Design, Research Methodology
Lecoutre, Bruno; Lecoutre, Marie-Paule; Poitevineau, Jacques – Psychological Methods, 2010
P. R. Killeen's (2005a) probability of replication ("p[subscript rep]") of an experimental result is the fiducial Bayesian predictive probability of finding a same-sign effect in a replication of an experiment. "p[subscript rep]" is now routinely reported in "Psychological Science" and has also begun to appear in…
Descriptors: Research Methodology, Guidelines, Probability, Computation
Serlin, Ronald C. – Psychological Methods, 2010
The sense that replicability is an important aspect of empirical science led Killeen (2005a) to define "p[subscript rep]," the probability that a replication will result in an outcome in the same direction as that found in a current experiment. Since then, several authors have praised and criticized "p[subscript rep]," culminating…
Descriptors: Epistemology, Effect Size, Replication (Evaluation), Measurement Techniques
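The three entries above debate Killeen's "p[subscript rep]." As a rough illustration only (not the exact fiducial Bayesian computation Lecoutre, Lecoutre, and Poitevineau describe), the commonly cited normal-theory approximation, which recovers p_rep from a reported p-value via p_rep = Φ(z/√2), can be sketched in Python:

```python
from math import sqrt
from statistics import NormalDist

def p_rep(p_value: float, tails: int = 2) -> float:
    """Approximate Killeen's (2005) probability of replication from a
    reported p-value: p_rep = Phi(z / sqrt(2)), where z is the standard
    normal score for the one-tailed p-value. This is the textbook
    approximation, not the exact computation discussed above."""
    norm = NormalDist()          # standard normal distribution
    one_tailed_p = p_value / tails
    z = norm.inv_cdf(1 - one_tailed_p)   # observed z-score
    return norm.cdf(z / sqrt(2))         # same-sign replication probability

# A two-tailed p of .05 yields the familiar p_rep of about .917.
print(round(p_rep(0.05), 3))
```

Under this approximation, smaller p-values give higher replication probabilities, which is the monotone relationship the critics in these exchanges take issue with.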
Friedman, Barry A.; Cox, Pamela L.; Maher, Larry E. – Journal of Management Education, 2008
Group projects are an important component of higher education, and the use of peer assessment of students' individual contributions to group projects has increased. The researchers employed an expectancy theory approach and an experimental design in a field setting to investigate conditions that influence students' motivation to rate their peers'…
Descriptors: Research Design, Peer Evaluation, Student Motivation, Program Effectiveness
Harris, Chester – 1969
Three critical issues in the design and analysis of evaluation studies suggested at the conference are (1) the choice between univariate and multivariate dependent-variable designs, (2) the choice of a response surface design over the conventional fixed model, and (3) the tendency to interpret every study as if it were being done for the first time. Taking…
Descriptors: Data Analysis, Evaluation Methods, Hypothesis Testing, Input Output Analysis
Byrd, Jimmy K. – Educational Administration Quarterly, 2007
Purpose: The purpose of this study was to review research published by Educational Administration Quarterly (EAQ) during the past 10 years to determine if confidence intervals and effect sizes were being reported as recommended by the American Psychological Association (APA) Publication Manual. Research Design: The author examined 49 volumes of…
Descriptors: Research Design, Intervals, Statistical Inference, Effect Size

Algina, James; Olejnik, Stephen F. – Evaluation Review, 1982
A method is presented for analyzing data collected in a multiple group time-series design. It consists of testing linear hypotheses about the experimental and control group means. Both a multivariate and a univariate procedure are described. (Author/GK)
Descriptors: Control Groups, Data Analysis, Evaluation Methods, Experimental Groups

Price, Janet; Vincent, Pauline – Nursing Outlook, 1976
Descriptors: Evaluation Criteria, Evaluation Methods, Guidelines, Hypothesis Testing
Light, Judy A.; Lindvall, C. M. – 1974
The objective of this study was to adapt the ideas of "strong inference" in developing a design procedure which can be used in the evaluation of an instructional system in such a way as to identify and correct specific weaknesses within a system. This method allows the evaluator to consider many hypotheses as possible causes of system…
Descriptors: Educational Experiments, Evaluation, Evaluation Methods, Formative Evaluation

Powers, Stephen; And Others – Journal of Educational Measurement, 1983
The validity of the equipercentile hypothesis of the Title I Evaluation and Reporting System norm-referenced evaluation model was examined using 3,224 seventh- and ninth-grade students. Findings from confidence interval procedures contradicted the equipercentile hypothesis. There was a pattern of large gains for students not receiving any special…
Descriptors: Achievement Gains, Evaluation Methods, Evaluation Needs, Hypothesis Testing
Poynor, Hugh – 1976
The degree to which the chosen units of analysis are likely to produce spurious findings in staged combinations of multiple linear regression procedures is examined. The effects of grouping variables (e.g., classroom, school, and school district) on procedures such as Coleman's semipartial regression and Mayeske's commonalities, in light of…
Descriptors: Analysis of Variance, Correlation, Data Analysis, Educational Research

Maher, W. A.; And Others – Environmental Monitoring and Assessment, 1994
Presents a general framework for designing sampling programs that ensure cost effectiveness and keep errors within known and acceptable limits. (LZ)
Descriptors: Cost Effectiveness, Environmental Education, Environmental Research, Error of Measurement