Showing all 11 results
Peer reviewed
PDF on ERIC (full text available)
Citkowicz, Martyna; Hedges, Larry V. – Society for Research on Educational Effectiveness, 2013
In some instances, intentionally or not, study designs are such that there is clustering in one group but not in the other. This paper describes methods for computing effect size estimates and their variances when there is clustering in only one group and the analysis has not taken that clustering into account. The authors provide the effect size…
Descriptors: Multivariate Analysis, Effect Size, Sampling, Sample Size
Peer reviewed
Direct link
Phillips, Gary W. – Applied Measurement in Education, 2015
This article proposes that sampling design effects have potentially huge unrecognized impacts on the results reported by large-scale district and state assessments in the United States. When design effects are unrecognized and unaccounted for, they lead to underestimating the sampling error in item and test statistics. Underestimating the sampling…
Descriptors: State Programs, Sampling, Research Design, Error of Measurement
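The issue Phillips describes can be illustrated with the standard Kish design effect for cluster sampling. This is a minimal sketch of that textbook formula, not code from the article; the cluster size, ICC, and sample size below are hypothetical.

```python
# Kish design effect: DEFF = 1 + (m - 1) * ICC, where m is the average
# cluster size and ICC is the intraclass correlation. Ignoring it means
# treating n clustered observations as if they were independent.

def design_effect(cluster_size, icc):
    """Factor by which clustering inflates the sampling variance."""
    return 1.0 + (cluster_size - 1) * icc

def effective_sample_size(n, cluster_size, icc):
    """Nominal sample size discounted by the design effect."""
    return n / design_effect(cluster_size, icc)

# Hypothetical example: 2000 students tested in classes of 25 with a
# modest ICC of 0.10 -- the effective sample is far smaller than 2000.
deff = design_effect(25, 0.10)                   # 3.4
n_eff = effective_sample_size(2000, 25, 0.10)    # ~588
print(deff, round(n_eff))
```

Standard errors computed under the independence assumption are too small by a factor of roughly the square root of DEFF, which is exactly the underestimation of sampling error the abstract warns about.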
Kier, Frederick J. – 1997
It is a false, but common, belief that statistical significance testing evaluates result replicability. In truth, statistical significance testing reveals nothing about result replicability. Since science is based on replication of results, methods that assess replicability are important. This is particularly true when multivariate methods, which…
Descriptors: Evaluation Methods, Multivariate Analysis, Sampling, Statistical Significance
Peer reviewed
Direct link
Mallinckrodt, Brent; Abraham, W. Todd; Wei, Meifen; Russell, Daniel W. – Journal of Counseling Psychology, 2006
P. A. Frazier, A. P. Tix, and K. E. Barron (2004) highlighted a normal theory method popularized by R. M. Baron and D. A. Kenny (1986) for testing the statistical significance of indirect effects (i.e., mediator variables) in multiple regression contexts. However, simulation studies suggest that this method lacks statistical power relative to some…
Descriptors: Statistical Significance, Multiple Regression Analysis, Simulation, Evaluation Methods
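The bootstrap approaches that simulation studies favor over the Baron and Kenny normal-theory test can be sketched as a percentile bootstrap of the indirect effect a*b. This is an illustrative sketch with simulated data, not the authors' code; the path coefficients and sample size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a simple mediation model X -> M -> Y (illustrative data only).
n = 200
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)             # a-path around 0.5
y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # b-path around 0.4

def indirect_effect(x, m, y):
    """a*b from two OLS fits: regress M on X, then Y on M and X."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]
    return a * b

# Percentile bootstrap: resample cases with replacement, recompute a*b.
boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
est = indirect_effect(x, m, y)
print(f"indirect effect = {est:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

Because the sampling distribution of a product of coefficients is skewed, the percentile interval needs no normality assumption, which is the usual argument for its better power in small samples.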
Peer reviewed
Thompson, Bruce – Educational and Psychological Measurement, 1995
Use of the bootstrap method in a canonical correlation analysis to evaluate the replicability of a study's results is illustrated. More confidence may be vested in research results that replicate. (SLD)
Descriptors: Analysis of Covariance, Correlation, Effect Size, Evaluation Methods
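The procedure Thompson illustrates, bootstrapping a canonical correlation analysis to gauge how stable the results are, can be sketched as follows. This is a minimal numpy reconstruction of the general technique (first canonical correlation via QR/SVD, resampled over cases), not the article's own program, and the two-variable-set data are simulated.

```python
import numpy as np

rng = np.random.default_rng(42)

def first_canonical_corr(X, Y):
    """First canonical correlation: the largest singular value of
    Qx.T @ Qy, where Qx and Qy are orthonormal bases (QR) of the
    column-centered variable sets."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return min(1.0, s[0])  # clip tiny floating-point overshoot

# Illustrative data: two variable sets sharing one latent factor.
n = 150
latent = rng.normal(size=n)
X = np.column_stack([latent + rng.normal(size=n), rng.normal(size=n)])
Y = np.column_stack([latent + rng.normal(size=n), rng.normal(size=n)])

# Bootstrap the statistic by resampling cases with replacement; a
# tight bootstrap distribution suggests a replicable result.
boot = np.array([
    first_canonical_corr(X[idx], Y[idx])
    for idx in (rng.integers(0, n, n) for _ in range(1000))
])
rc = first_canonical_corr(X, Y)
print(f"Rc = {rc:.3f}, bootstrap SE = {boot.std(ddof=1):.3f}")
```

The spread of the bootstrap distribution, rather than a single p value, is what carries the replicability evidence here.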
Peer reviewed
Suen, Hoi K. – Topics in Early Childhood Special Education, 1992
This commentary on EC 603 695 argues that significance testing is a necessary but insufficient condition for positivistic research, that judgment-based assessment and single-subject research are not substitutes for significance testing, and that sampling fluctuation should be considered as one of numerous epistemological concerns in any…
Descriptors: Evaluation Methods, Evaluative Thinking, Research Design, Research Methodology
Thompson, Bruce – 1992
Three criticisms of overreliance on results from statistical significance tests are noted. It is suggested that: (1) statistical significance tests are often tautological; (2) some uses can involve comparisons that are not completely sensible; and (3) using statistical significance tests to evaluate both methodological assumptions (e.g., the…
Descriptors: Effect Size, Estimation (Mathematics), Evaluation Methods, Regression (Statistics)
Peer reviewed
Da Prato, Robert A. – Topics in Early Childhood Special Education, 1992
This paper argues that judgment-based assessment of data from multiply replicated single-subject or small-N studies should replace normative-based (p &lt; .05) assessment of large-N research in the clinical sciences, and asserts that inferential statistics should be abandoned as a method of evaluating clinical research data. (Author/JDD)
Descriptors: Evaluation Methods, Evaluative Thinking, Norms, Research Design
Peer reviewed
Deal, James E.; Anderson, Edward R. – Journal of Marriage and the Family, 1995
Presentation of quantitative research on the family often suffers from a tendency to interpret findings on a statistical rather than substantive basis. Advocates the use of data analysis that lends itself to an intuitive understanding of the nature of the findings, the strength of the association, and the import of the result. (JPS)
Descriptors: Data Analysis, Effect Size, Evaluation Methods, Goodness of Fit
Shapiro, Jonathan – 1979
A statistical definition of information utilization for policy making decisions and an evaluation impact test to determine its occurrence are proposed. A univariate time series analysis is used to identify the internal trend for a given policy output variable and to control its effect. Two problems are identified in implementing an evaluation…
Descriptors: Decision Making, Evaluation Methods, Goodness of Fit, Information Utilization
Lefebvre, Daniel J.; Suen, Hoi K. – 1990
An empirical investigation of methodological issues associated with evaluating treatment effect in single-subject research (SSR) designs is presented. This investigation: (1) conducted a generalizability (G) study to identify the sources of systematic and random measurement error (SRME); (2) used an analytic approach based on G theory to integrate…
Descriptors: Classroom Observation Techniques, Disabilities, Educational Research, Error of Measurement