Peer reviewed: Barchard, Kimberly A.; Hakstian, A. Ralph – Multivariate Behavioral Research, 1997
Two studies, both using Type 12 sampling, are presented in which the effects of violating the assumption of essential parallelism in setting confidence intervals are studied. Results indicate that as long as data manifest properties of essential parallelism, the two methods studied maintain precise Type I error control. (SLD)
Descriptors: Error of Measurement, Robustness (Statistics), Sampling, Statistical Analysis
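The kind of check described here can be illustrated with a generic coverage simulation. The sketch below, a minimal illustration only, estimates the Type I error (non-coverage) rate of an ordinary t interval for a mean; it is not the reliability-coefficient intervals or the Type 12 sampling scheme the study examines.

```python
# Generic sketch: check Type I error control of a confidence-interval procedure
# by simulation. Uses a plain t interval for a mean as a stand-in, NOT the
# intervals studied in the article.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, true_mean, alpha, reps = 30, 0.0, 0.05, 5000
misses = 0
for _ in range(reps):
    x = rng.normal(loc=true_mean, scale=1.0, size=n)
    half_width = stats.t.ppf(1 - alpha / 2, df=n - 1) * x.std(ddof=1) / np.sqrt(n)
    if abs(x.mean() - true_mean) > half_width:   # interval misses the true value
        misses += 1
print(f"empirical Type I error: {misses / reps:.3f} (nominal {alpha})")
```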
Peer reviewed: Cribbie, Robert A. – Journal of Experimental Education, 2003
Monte Carlo study results show that recently proposed multiple comparison procedures (MCPs) that are not intended to control the familywise error rate had consistently larger true-model rates than did familywise-error-controlling MCPs. (SLD)
Descriptors: Comparative Analysis, Error of Measurement, Monte Carlo Methods
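The criterion at issue can be sketched under assumed procedures (unadjusted pairwise t tests versus a Bonferroni adjustment, not the MCPs the article evaluates): with all population means equal, the familywise Type I error rate is the chance that at least one comparison is declared significant.

```python
# Hedged Monte Carlo sketch of familywise Type I error under a complete null:
# unadjusted pairwise t tests vs. Bonferroni-adjusted tests.
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
k, n, alpha, reps = 5, 20, 0.05, 2000
pairs = list(itertools.combinations(range(k), 2))
fw_unadj = fw_bonf = 0
for _ in range(reps):
    groups = rng.normal(size=(k, n))              # all population means equal
    pvals = [stats.ttest_ind(groups[i], groups[j]).pvalue for i, j in pairs]
    fw_unadj += any(p < alpha for p in pvals)
    fw_bonf += any(p < alpha / len(pairs) for p in pvals)
print(f"unadjusted FWER ~ {fw_unadj / reps:.3f}, Bonferroni FWER ~ {fw_bonf / reps:.3f}")
```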
Peer reviewed: Ferron, John; Foster-Johnson, Lynn; Kromrey, Jeffrey D. – Journal of Experimental Education, 2003
Used Monte Carlo methods to examine the Type I error rates for randomization tests applied to single-case data arising from ABAB designs involving random, systematic, or response-guided assignment of interventions. Discusses the conditions under which the Type I error rate is or is not controlled. (SLD)
Descriptors: Error of Measurement, Monte Carlo Methods, Research Design
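One common form of randomization test for an ABAB design assumes the three phase change points were randomly selected at design time; the sketch below implements that version. The function name, arguments, and minimum phase length are illustrative assumptions, not the authors' implementation.

```python
# Minimal randomization test for an ABAB single-case design with randomly
# chosen phase change points; the test statistic is the B-minus-A mean difference.
import numpy as np

def abab_randomization_test(y, change_points, min_phase=5):
    y = np.asarray(y, dtype=float)
    n = len(y)

    def bma(c1, c2, c3):
        b = np.concatenate([y[c1:c2], y[c3:]])   # B phases
        a = np.concatenate([y[:c1], y[c2:c3]])   # A phases
        return b.mean() - a.mean()

    # all admissible change-point triplets with every phase >= min_phase
    admissible = [(c1, c2, c3)
                  for c1 in range(min_phase, n)
                  for c2 in range(c1 + min_phase, n)
                  for c3 in range(c2 + min_phase, n - min_phase + 1)]
    observed = bma(*change_points)
    dist = np.array([bma(*cp) for cp in admissible])
    return np.mean(np.abs(dist) >= abs(observed))   # two-sided p-value

rng = np.random.default_rng(2)
data = rng.normal(size=32)                          # null data: no intervention effect
print(abab_randomization_test(data, change_points=(8, 16, 24)))
```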
Peer reviewed: Henson, Robin K.; Hwang, Dae-Yeop – Educational and Psychological Measurement, 2002
Conducted a reliability generalization study of Kolb's Learning Style Inventory (LSI; D. Kolb, 1976). Results for 34 studies indicate that internal consistency and test-retest reliabilities for LSI scores fluctuate considerably and contribute to deleterious cumulative measurement error. (SLD)
Descriptors: Error of Measurement, Generalization, Meta Analysis, Reliability
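The statistic whose variation a reliability generalization study summarizes is typically coefficient alpha. A minimal sketch of that coefficient follows, with simulated item responses; it does not reproduce the meta-analytic model used in the article.

```python
# Coefficient alpha (internal consistency) computed from its standard definition.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
true_score = rng.normal(size=(200, 1))
responses = true_score + rng.normal(scale=1.0, size=(200, 6))   # 6 noisy items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```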
Peer reviewed: Hsiung, Tung-Hsing; Olejnik, Stephen – Journal of Experimental Education, 1996
Type I error rates and statistical power for the univariate F test and the James second-order test were estimated for the two-factor fixed-effects completely randomized design. Results reveal that the F test Type I error rate can exceed the nominal significance level when cell variances differ. (SLD)
Descriptors: Analysis of Variance, Error of Measurement, Power (Statistics)
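The reported effect can be reproduced in miniature with a Monte Carlo run in which larger variances are paired with smaller cells; for simplicity the sketch below uses a one-way design rather than the two-factor design the study evaluates.

```python
# Hedged sketch: the ANOVA F test can exceed its nominal Type I error rate when
# cell variances differ and larger variances go with smaller cells.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
sizes, sds = [10, 20, 40], [4.0, 2.0, 1.0]       # small cells paired with big variances
alpha, reps = 0.05, 5000
rejections = 0
for _ in range(reps):
    groups = [rng.normal(loc=0.0, scale=s, size=n) for n, s in zip(sizes, sds)]
    rejections += stats.f_oneway(*groups).pvalue < alpha
print(f"empirical Type I error ~ {rejections / reps:.3f} (nominal {alpha})")
```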
Peer reviewed: Lee, Guemin; Fitzpatrick, Anne R. – Journal of Educational Measurement, 2003
Studied three procedures for estimating the standard errors of school passing rates using a generalizability theory model and considered the effects of student sample size. Results show that procedures differ in terms of assumptions about the populations from which students were sampled, and student sample size was found to have a large effect on…
Descriptors: Error of Measurement, Estimation (Mathematics), Generalizability Theory, Sampling
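One simple way to attach a standard error to a school passing rate is a student-level bootstrap. The sketch below uses assumed data and is not one of the generalizability-theory procedures the article compares, but it illustrates how the student sample drives the uncertainty in the rate.

```python
# Student-resampling bootstrap standard error for a school passing rate.
import numpy as np

rng = np.random.default_rng(5)
passed = rng.binomial(1, 0.7, size=60)            # hypothetical school of 60 students
boot_rates = [rng.choice(passed, size=passed.size, replace=True).mean()
              for _ in range(2000)]
print(f"passing rate = {passed.mean():.2f}, bootstrap SE = {np.std(boot_rates, ddof=1):.3f}")
```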
Peer reviewed: Schweizer, Karl – Educational and Psychological Measurement, 1988
Reference reliability relates variability due to change to variability due to error. It indicates whether a suspected change can be reliably differentiated from random fluctuations. A means by which the process of change can be measured at different points in time is outlined, using empirical data. (TJH)
Descriptors: Analysis of Variance, Change, Error of Measurement, Reliability
Peer reviewed: Reichardt, Charles S.; Gollob, Harry F. – Evaluation Review, 1989
The estimate-and-subtract method for eliminating threats to validity is described. It is argued that the method is superior to the use of no-difference findings for this purpose. Two ways of improving the no-difference findings are presented. (TJH)
Descriptors: Error of Measurement, Estimation (Mathematics), Statistical Significance, Validity
Peer reviewed: Page, Brian R. – Physics Teacher, 1995
Presents a brief life history of William Sealy Gosset, the "Student" of Student's t-test. Reviews some basic statistics and describes Student's t-test of statistical hypothesis. Contains 11 references. (JRH)
Descriptors: Error of Measurement, Measurement, Physics, Statistical Analysis
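The statistic itself is compact; the sketch below computes the pooled-variance two-sample t statistic from its textbook definition and checks it against scipy's implementation.

```python
# Two-sample Student's t statistic from the pooled-variance formula,
# compared with scipy's ttest_ind (which pools variances by default).
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
x, y = rng.normal(0.0, 1.0, 25), rng.normal(0.5, 1.0, 25)
nx, ny = len(x), len(y)
sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
t = (x.mean() - y.mean()) / np.sqrt(sp2 * (1 / nx + 1 / ny))
print(t, stats.ttest_ind(x, y).statistic)         # the two values agree
```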
Peer reviewed: Brennan, Robert L.; Lee, Won-Chan – Educational and Psychological Measurement, 1999
Develops two procedures for estimating individual-level conditional standard errors of measurement for scale scores, assuming tests of dichotomously scored items. Compares the two procedures to a polynomial procedure and a procedure developed by L. Feldt and A. Qualls (1998) using data from the Iowa Tests of Basic Skills. Contains 22 references.…
Descriptors: Error of Measurement, Estimation (Mathematics), Scaling, Scores
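The simplest conditional standard error of measurement for dichotomously scored items is Lord's binomial-error estimate for a raw score, sqrt(x(n - x)/(n - 1)). The procedures in the article target scale scores and are more elaborate; the sketch below shows only this raw-score building block.

```python
# Conditional SEM of a number-correct score under the binomial error model.
import numpy as np

def binomial_csem(raw_score, n_items):
    x = np.asarray(raw_score, dtype=float)
    return np.sqrt(x * (n_items - x) / (n_items - 1))

print(binomial_csem([10, 20, 30, 39], n_items=40))   # largest near the middle of the score range
```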
Peer reviewed: Mazor, Kathleen M.; Hambleton, Ronald K.; Clauser, Brian E. – Applied Psychological Measurement, 1998
Studied whether matching on multiple test scores would reduce false-positive error rates compared to matching on a single number-correct score using simulation. False-positive error rates were reduced for most datasets. Findings suggest that assessing the dimensional structure of a test can be important in analysis of differential item functioning…
Descriptors: Error of Measurement, Item Bias, Scores, Test Items
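The baseline condition in this study, matching on a single number-correct score, can be sketched with a Mantel-Haenszel common odds ratio computed across score strata. Variable names and the simulated (DIF-free) data below are illustrative assumptions.

```python
# Mantel-Haenszel common odds ratio for one item, with examinees matched on a
# single matching variable (e.g., number-correct score strata).
import numpy as np

def mh_common_odds_ratio(item, group, match):
    """group: 0 = reference, 1 = focal; item: 0/1 responses."""
    num = den = 0.0
    for s in np.unique(match):
        idx = match == s
        a = np.sum((group[idx] == 0) & (item[idx] == 1))   # reference correct
        b = np.sum((group[idx] == 0) & (item[idx] == 0))
        c = np.sum((group[idx] == 1) & (item[idx] == 1))   # focal correct
        d = np.sum((group[idx] == 1) & (item[idx] == 0))
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    return num / den if den > 0 else np.nan

rng = np.random.default_rng(7)
n = 2000
group = rng.integers(0, 2, n)
match = rng.integers(0, 5, n)                      # crude score strata
item = rng.binomial(1, 0.5 + 0.05 * match, n)      # no true DIF built in
print(mh_common_odds_ratio(item, group, match))    # should be near 1
```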
Peer reviewed: Lee, Guemin – Applied Measurement in Education, 2000
Investigated incorporating a testlet definition into the estimation of the conditional standard error of measurement (SEM) for tests composed of testlets using five conditional SEM estimation methods. Results from 3,876 tests from the Iowa Tests of Basic Skills and 1,000 simulated responses show that item-based methods provide lower conditional…
Descriptors: Error of Measurement, Estimation (Mathematics), Simulation, Test Construction
Peer reviewed: Bonett, Douglas G.; Wright, Thomas A. – Psychometrika, 2000
Reviews interval estimates of the Pearson, Kendall tau-alpha, and Spearman correlations and proposes an improved standard error for the Spearman correlation. Examines the sample size required to yield a confidence interval having the desired width. Findings show that a two-stage approximation to the required sample size gives accurate results. (SLD)
Descriptors: Correlation, Error of Measurement, Estimation (Mathematics), Sample Size
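The adjusted standard error commonly attributed to this article, sqrt((1 + r^2/2)/(n - 3)) on the Fisher z scale, yields a simple confidence interval for the Spearman correlation. Treat the formula as an assumption here and consult the paper for the exact proposal.

```python
# Confidence interval for the Spearman correlation via a Fisher z transform
# with an adjusted standard error on the z scale.
import numpy as np
from scipy import stats

def spearman_ci(x, y, conf=0.95):
    r, _ = stats.spearmanr(x, y)
    n = len(x)
    se = np.sqrt((1 + r**2 / 2) / (n - 3))         # adjusted SE on the z scale
    z = np.arctanh(r)
    crit = stats.norm.ppf(1 - (1 - conf) / 2)
    return np.tanh(z - crit * se), np.tanh(z + crit * se)

rng = np.random.default_rng(8)
x = rng.normal(size=50)
y = 0.6 * x + rng.normal(size=50)
print(spearman_ci(x, y))
```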
Peer reviewed: Ogasawara, Haruhiko – Applied Psychological Measurement, 2001
Derived asymptotic standard errors (SEs) of item response theory equating coefficient estimates using response functions or their transformations. Presents two variations of the item and test response function methods and SEs of their parameter estimates that use logit transformation of the item response functions. Numerical examples show that the…
Descriptors: Equated Scores, Error of Measurement, Item Response Theory
Peer reviewed: Miller, Tamara B.; Kane, Michael – Applied Measurement in Education, 2001
Examined the precision of change scores in terms of error-tolerance (E/T) ratios for both relative and absolute interpretations of change scores. Used E/T ratios to evaluate the error in estimating the change relative to tolerance for error in a particular context. Illustrates the results with achievement test data. (SLD)
Descriptors: Achievement Tests, Error of Measurement, Estimation (Mathematics), Scores
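A heavily hedged sketch of one way to form an error/tolerance ratio for a change score: the classical standard error of a difference score divided by the amount of error one is willing to tolerate in the given context. The tolerance value used and the exact ratio definitions in the article may differ.

```python
# Illustrative error/tolerance (E/T) ratio for a change score, using the
# classical SEMs of the two testings; not necessarily the article's definition.
import numpy as np

def change_score_et_ratio(sd1, rel1, sd2, rel2, tolerance):
    sem1 = sd1 * np.sqrt(1 - rel1)                 # classical SEM, time 1
    sem2 = sd2 * np.sqrt(1 - rel2)                 # classical SEM, time 2
    se_change = np.sqrt(sem1**2 + sem2**2)         # SE of the difference score
    return se_change / tolerance

# e.g., two 15-point-SD testings with reliability .90, tolerating 10 points of error
print(change_score_et_ratio(15, 0.90, 15, 0.90, tolerance=10))
```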


