Suero, Manuel; Privado, Jesús; Botella, Juan – Psicologica: International Journal of Methodology and Experimental Psychology, 2017
A simulation study is presented to evaluate and compare three methods to estimate the variance of the estimates of the signal detection theory (SDT) parameters "d'" and "c." Several methods have been proposed to calculate the variance of their estimators, "d'" and "c." Those methods have been mostly assessed by…
Descriptors: Evaluation Methods, Theories, Simulation, Statistical Analysis
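For context, the point estimates whose variance such studies examine are the standard SDT estimates computed from hit and false-alarm rates. A minimal sketch of those estimates (not the variance methods the paper compares, and not code from the paper):

```python
from statistics import NormalDist

def sdt_estimates(hit_rate, fa_rate):
    """Standard SDT point estimates from hit and false-alarm rates."""
    z = NormalDist().inv_cdf                  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)        # sensitivity d'
    c = -0.5 * (z(hit_rate) + z(fa_rate))     # response criterion c
    return d_prime, c
```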
Solanas, Antonio; Manolov, Rumen; Sierra, Vicenta – Psicologica: International Journal of Methodology and Experimental Psychology, 2010
In the first part of the study, nine estimators of the first-order autoregressive parameter are reviewed and a new estimator is proposed. The relationships and discrepancies between the estimators are discussed in order to achieve a clear differentiation. In the second part of the study, the precision in the estimation of autocorrelation is…
Descriptors: Computation, Hypothesis Testing, Correlation, Monte Carlo Methods
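The baseline among the estimators such reviews compare is the conventional lag-1 sample autocorrelation. A minimal sketch of that conventional estimator (the new estimator proposed in the paper is not reproduced here):

```python
def lag1_autocorr(x):
    """Conventional lag-1 sample autocorrelation:
    r1 = sum_t (x_t - m)(x_{t+1} - m) / sum_t (x_t - m)^2."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t + 1] - m) for t in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den
```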
Kim, Se-Kang – International Journal of Testing, 2010
The aim of the current study is to validate the invariance of major profile patterns derived from multidimensional scaling (MDS) by bootstrapping. Profile Analysis via Multidimensional Scaling (PAMS) was employed to obtain profiles and bootstrapping was used to construct the sampling distributions of the profile coordinates and the empirical…
Descriptors: Intervals, Multidimensional Scaling, Profiles, Evaluation
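The bootstrap machinery the abstract refers to, resampling with replacement to build an empirical sampling distribution, can be sketched generically (a percentile bootstrap, not the PAMS-specific procedure):

```python
import random

def percentile_bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Resample with replacement, recompute the statistic, and take
    empirical percentiles as an approximate confidence interval."""
    rng = random.Random(seed)
    n = len(data)
    boots = sorted(
        stat([data[rng.randrange(n)] for _ in range(n)]) for _ in range(n_boot)
    )
    return boots[int(alpha / 2 * n_boot)], boots[int((1 - alpha / 2) * n_boot) - 1]
```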
Olsen, Robert B.; Unlu, Fatih; Price, Cristofer; Jaciw, Andrew P. – National Center for Education Evaluation and Regional Assistance, 2011
This report examines the differences in impact estimates and standard errors that arise when these are derived using state achievement tests only (as pre-tests and post-tests), study-administered tests only, or some combination of state- and study-administered tests. State tests may yield different evaluation results relative to a test that is…
Descriptors: Achievement Tests, Standardized Tests, State Standards, Reading Achievement
Bonnett, Douglas G. – Psychological Methods, 2008
Most psychology journals now require authors to report a sample value of effect size along with hypothesis testing results. The sample effect size value can be misleading because it contains sampling error. Authors often incorrectly interpret the sample effect size as if it were the population effect size. A simple solution to this problem is to…
Descriptors: Intervals, Hypothesis Testing, Effect Size, Sampling
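The "simple solution" such articles advocate is to report an interval estimate rather than the bare sample effect size. A minimal sketch for a standardized mean difference, using a common large-sample standard-error approximation (an assumption for illustration, not the exact method of the cited article):

```python
import math
from statistics import NormalDist

def smd_ci(d, n1, n2, conf=0.95):
    """Approximate CI for a standardized mean difference, using the
    large-sample SE formula sqrt((n1+n2)/(n1*n2) + d^2 / (2*(n1+n2)))."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    zc = NormalDist().inv_cdf(0.5 + conf / 2)
    return d - zc * se, d + zc * se
```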

Subkoviak, Michael J.; Levin, Joel R. – Journal of Educational Measurement, 1977
Measurement error in dependent variables reduces the power of statistical tests to detect mean differences of specified magnitude. Procedures for determining power and sample size that consider the reliability of the dependent variable are discussed and illustrated. Methods for estimating reliability coefficients used in these procedures are…
Descriptors: Error of Measurement, Hypothesis Testing, Power (Statistics), Sampling
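The core relationship behind procedures of this kind is that unreliability attenuates the observed effect (observed d = true d × √reliability), which inflates the sample size needed for a given power. A hedged sketch using a two-sample z-approximation (not the authors' exact procedure):

```python
import math
from statistics import NormalDist

def n_per_group(d_true, reliability, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sample z-test when the outcome is
    measured with the given reliability (observed d = true d * sqrt(rel))."""
    z = NormalDist().inv_cdf
    d_obs = d_true * math.sqrt(reliability)
    return math.ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / d_obs ** 2)
```

Halving the reliability roughly doubles the required n, which is the power cost the abstract describes.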

Forsyth, Robert A. – Applied Psychological Measurement, 1978
This note shows that, under conditions specified by Levin and Subkoviak (TM 503 420), it is not necessary to specify the reliabilities of observed scores when comparing completely randomized designs with randomized block designs. Certain errors in their illustrative example are also discussed. (Author/CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability

Levin, Joel R.; Subkoviak, Michael J. – Applied Psychological Measurement, 1978
Comments (TM 503 706) on an earlier article (TM 503 420) concerning the comparison of the completely randomized design and the randomized block design are acknowledged and appreciated. In addition, potentially misleading notions arising from these comments are addressed and clarified. (See also TM 503 708). (Author/CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability

Forsyth, Robert A. – Applied Psychological Measurement, 1978
This note continues the discussion of earlier articles (TM 503 420, TM 503 706, and TM 503 707), comparing the completely randomized design with the randomized block design. (CTM)
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Reliability

Forsyth, Robert A. – Educational and Psychological Measurement, 1971
Descriptors: Behavioral Science Research, Correlation, Error of Measurement, Hypothesis Testing
Lord, Frederic M. – 1971
A simple, rigorous, small-sample statistical technique is described for testing the hypothesis that two sets of measurements differ only because of errors of measurement and because of differing origins and units of measurement. (Author)
Descriptors: Error of Measurement, Hypothesis Testing, Mathematical Applications, Mathematics

Rasmussen, Jeffrey Lee – Evaluation Review, 1985
A recent study (Blair and Higgins, 1980) indicated a power advantage for the Wilcoxon W test over Student's t-test when both are calculated from a common mixed-normal sample. Results of the present study indicate that the t-test corrected for outliers shows a superior power curve to the Wilcoxon W.
Descriptors: Computer Simulation, Error of Measurement, Hypothesis Testing, Power (Statistics)
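The mixed-normal (contaminated) sampling scheme used in such simulations, and the tail-trimming that drives an "corrected for outliers" adjustment, can be sketched as follows (illustrative only; not Rasmussen's actual design):

```python
import random

def mixed_normal(n, p=0.1, sd_outlier=10.0, rng=None):
    """Contaminated normal: N(0,1) with prob. 1-p, N(0, sd_outlier) with prob. p."""
    rng = rng or random.Random(0)
    return [rng.gauss(0.0, sd_outlier if rng.random() < p else 1.0)
            for _ in range(n)]

def trimmed(x, prop=0.1):
    """Drop the most extreme prop of observations from each tail."""
    k = int(len(x) * prop)
    s = sorted(x)
    return s[k:len(s) - k]
```

Trimming a contaminated sample leaves a variance much closer to the uncontaminated value, which is why an outlier-corrected t-test can regain power.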

Bell, John F. – Journal of Educational Statistics, 1986
Khuri's and Satterthwaite's methods of obtaining confidence intervals for variance components are compared. The article shows that Khuri's method may be applied to obtain confidence intervals for the variance components and other linear functions of the expected mean squares used in generalizability theory. (Author/JAZ)
Descriptors: Analysis of Variance, Elementary Education, Equations (Mathematics), Error of Measurement
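Satterthwaite's approximation, the baseline in such comparisons, assigns approximate degrees of freedom to a linear combination of mean squares. A minimal sketch of the standard formula:

```python
def satterthwaite_df(ms, df, coef):
    """Satterthwaite approximate df for L = sum_i c_i * MS_i:
    df_L = L^2 / sum_i ((c_i * MS_i)^2 / df_i)."""
    num = sum(c * m for c, m in zip(coef, ms)) ** 2
    den = sum((c * m) ** 2 / d for c, m, d in zip(coef, ms, df))
    return num / den
```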
Hough, Susan L.; Hall, Bruce W. – 1991
The meta-analytic techniques of G. V. Glass (1976) and J. E. Hunter and F. L. Schmidt (1977) were compared through their application to three meta-analytic studies from education literature. The following hypotheses were explored: (1) the overall mean effect size would be larger in a Hunter-Schmidt meta-analysis (HSMA) than in a Glass…
Descriptors: Comparative Analysis, Educational Research, Effect Size, Error of Measurement
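One concrete way the two approaches differ: Hunter-Schmidt corrects each observed correlation for attenuation due to measurement unreliability before averaging, which tends to enlarge the mean effect size relative to Glass's uncorrected aggregation. A minimal sketch of the attenuation correction:

```python
import math

def disattenuate(r, rxx, ryy):
    """Hunter-Schmidt attenuation correction:
    rho_hat = r / sqrt(rxx * ryy)."""
    return r / math.sqrt(rxx * ryy)
```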
Clark, Sheldon B.; Huck, Schuyler W. – 1983
In true experiments in which sample material can be randomly assigned to treatment conditions, most researchers presume that the condition of equal sample sizes is statistically desirable. When one or more a priori contrasts can be identified which represent a few overriding experimental concerns, however, allocating sample material unequally will…
Descriptors: Analysis of Variance, Error of Measurement, Hypothesis Testing, Power (Statistics)
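The statistical point behind the abstract above: for a planned contrast ψ = Σ c_i μ_i with equal error variance, Var(ψ̂) = σ² Σ c_i²/n_i is minimized, for fixed total N, by allocating n_i ∝ |c_i|. A sketch of that allocation rule (illustrative; not the authors' derivation):

```python
def contrast_allocation(coefs, total_n):
    """Sample sizes proportional to |c_i|, which minimize
    Var(sum c_i * xbar_i) = sigma^2 * sum(c_i^2 / n_i) for fixed total n."""
    s = sum(abs(c) for c in coefs)
    return [round(total_n * abs(c) / s) for c in coefs]
```

For a control-versus-pooled-treatments contrast with coefficients (1, -1/2, -1/2), the rule assigns twice as many cases to the control group as to each treatment group.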