Vargha, Andras; Delaney, Harold D. – 2000
In this paper, six statistical tests of stochastic equality are compared with respect to Type I error and power through a Monte Carlo simulation. In the simulation, the skewness and kurtosis levels and the extent of variance heterogeneity of the two parent distributions were varied across a wide range. The sample sizes applied were either small or…
Descriptors: Comparative Analysis, Monte Carlo Methods, Robustness (Statistics), Sample Size
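The entry above evaluates tests under variance heterogeneity by Monte Carlo simulation of Type I error. A minimal sketch of that simulation design (not the paper's actual six tests): it compares the pooled-variance t statistic with Welch's statistic when group sizes and variances are unequal, using a normal critical value as a rough approximation. All parameter choices here are illustrative assumptions.

```python
import math
import random
import statistics

random.seed(1)

def pooled_t(x, y):
    # Classical two-sample t statistic with a pooled variance estimate.
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * statistics.variance(x)
           + (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    return (statistics.mean(x) - statistics.mean(y)) / math.sqrt(sp2 * (1 / nx + 1 / ny))

def welch_t(x, y):
    # Welch's statistic: separate variance estimates, no pooling.
    return (statistics.mean(x) - statistics.mean(y)) / math.sqrt(
        statistics.variance(x) / len(x) + statistics.variance(y) / len(y))

def type1_rate(stat, nx, ny, sdx, sdy, reps=2000, crit=1.96):
    # Both populations share the same mean, so every rejection is a Type I error.
    hits = 0
    for _ in range(reps):
        x = [random.gauss(0, sdx) for _ in range(nx)]
        y = [random.gauss(0, sdy) for _ in range(ny)]
        if abs(stat(x, y)) > crit:
            hits += 1
    return hits / reps

# Pairing the smaller group with the larger variance makes the pooled test liberal.
p_rate = type1_rate(pooled_t, nx=10, ny=40, sdx=4, sdy=1)
w_rate = type1_rate(welch_t, nx=10, ny=40, sdx=4, sdy=1)
print(p_rate, w_rate)
```

The pooled test's empirical rejection rate drifts well above the nominal 5% level in this condition, while Welch's statistic stays much closer to it.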
Dawadi, Bhaskar R. – 1999
The robustness of the polytomous Item Response Theory (IRT) model to violations of the unidimensionality assumption was studied. A secondary purpose was to provide guidelines to practitioners to help in deciding whether to use an IRT model to analyze their data. In a simulation study, the unidimensionality assumption was deliberately violated by…
Descriptors: Ability, Estimation (Mathematics), Factor Analysis, Item Response Theory
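Simulation studies like the one above start by generating item responses from a known IRT model so that estimates can be checked against truth. A minimal sketch of that data-generating step, assuming a one-dimensional Rasch model with a crude marginal log-odds difficulty estimate (the study's actual polytomous, multidimensional setup is more elaborate; item parameters below are hypothetical):

```python
import math
import random

random.seed(7)

def rasch_prob(theta, b):
    # Probability of a correct response under the Rasch (1PL) model.
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def simulate(difficulties, n_persons=4000):
    # Draw abilities from N(0, 1) and sample 0/1 responses item by item.
    thetas = [random.gauss(0, 1) for _ in range(n_persons)]
    return [[int(random.random() < rasch_prob(t, b)) for b in difficulties]
            for t in thetas]

difficulties = [-1.5, 0.0, 1.5]   # easy, medium, hard (true values)
data = simulate(difficulties)

# Crude difficulty estimate: negated logit of the marginal proportion correct.
# It shrinks toward zero but preserves the item ordering.
est = []
for j in range(len(difficulties)):
    p = sum(row[j] for row in data) / len(data)
    est.append(-math.log(p / (1 - p)))
print(est)
```

A dimensionality-violation study would replace the single `theta` with two correlated abilities and examine how the unidimensional estimates degrade.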
Peer reviewed: Keselman, H. J.; Algina, James – Multivariate Behavioral Research, 1997
Examines the recommendations of H. Keselman, K. Carriere, and L. Lix (1993) regarding choice of sample size for obtaining robust tests of the repeated measures main and interaction hypotheses in a one Between-Subjects by one Within-Subjects design with a Welch-James type multivariate test when covariance matrices are heterogeneous. (SLD)
Descriptors: Analysis of Covariance, Interaction, Multivariate Analysis, Research Design
Peer reviewed: Braucht, G. Nicholas; Reichardt, Charles S. – Evaluation Review, 1993
Procedures for implementing random assignment with trickle processing and ways they can be corrupted are described. A computerized method for implementing random assignment with trickle processing is presented as a desirable alternative in many situations and a way of protecting against threats to assignment validity. (SLD)
Descriptors: Computer Oriented Programs, Experimental Groups, Research Methodology, Research Reports
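With trickle processing, participants arrive one at a time, so assignment must be generated incrementally while staying balanced and tamper-resistant. A minimal sketch of one standard way to do this, permuted-block randomization (an illustrative choice, not necessarily the paper's exact procedure):

```python
import random

random.seed(3)

def block_randomizer(arms=("treatment", "control"), block_size=4):
    """Yield one assignment at a time (trickle processing) using permuted blocks:
    each block contains every arm equally often, in a freshly shuffled order."""
    while True:
        block = list(arms) * (block_size // len(arms))
        random.shuffle(block)
        for arm in block:
            yield arm

gen = block_randomizer()
# Participants trickle in; each draw is predetermined by the shuffled block,
# which protects against ad hoc manipulation of the next assignment.
assignments = [next(gen) for _ in range(20)]
print(assignments)
print(assignments.count("treatment"), assignments.count("control"))
```

Because 20 assignments span exactly five blocks of four, the two arms end up perfectly balanced; balance is guaranteed at every block boundary.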
Peer reviewed: Tan, E. S.; Ambergen, A. W.; Does, R. J. M. M.; Imbos, Tj. – Journal of Educational and Behavioral Statistics, 1999
Studied the one-parameter Item Response Theory model with normal item-characteristic curves in a longitudinal context. Results of a simulation suggest that the proposed procedure is rather robust against departures from normality. However, the estimation of the correlations between regression parameters can be seriously biased. (SLD)
Descriptors: Change, Estimation (Mathematics), Item Response Theory, Longitudinal Studies
Peer reviewed: Kirisci, Levent; Hsu, Tse-chi; Yu, Lifa – Applied Psychological Measurement, 2001
Studied the effects of test dimensionality, theta distribution shape, and estimation program (BILOG, MULTILOG, or XCALIBRE) on the accuracy of item and person parameter estimates through simulation. Derived guidelines for estimating parameters of multidimensional test items using unidimensional item response theory models. (SLD)
Descriptors: Ability, Computer Software, Estimation (Mathematics), Item Response Theory
Algina, James; Keselman, H. J.; Penfield, Randall D. – Educational and Psychological Measurement, 2006
Kelley compared three methods for setting a confidence interval (CI) around Cohen's standardized mean difference statistic: the noncentral-"t"-based, percentile (PERC) bootstrap, and bias-corrected and accelerated (BCA) bootstrap methods under three conditions of nonnormality, eight cases of sample size, and six cases of population…
Descriptors: Effect Size, Comparative Analysis, Sample Size, Investigations
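One of the compared methods, the percentile bootstrap CI for the standardized mean difference, is simple enough to sketch directly: resample each group with replacement, recompute Cohen's d, and take empirical quantiles of the bootstrap distribution. The data below are simulated for illustration, not drawn from the study.

```python
import math
import random
import statistics

random.seed(11)

def cohens_d(x, y):
    # Standardized mean difference with a pooled standard deviation.
    nx, ny = len(x), len(y)
    sp = math.sqrt(((nx - 1) * statistics.variance(x)
                    + (ny - 1) * statistics.variance(y)) / (nx + ny - 2))
    return (statistics.mean(x) - statistics.mean(y)) / sp

def percentile_boot_ci(x, y, reps=2000, alpha=0.05):
    # Percentile (PERC) bootstrap: quantiles of the resampled d values.
    ds = []
    for _ in range(reps):
        bx = [random.choice(x) for _ in x]
        by = [random.choice(y) for _ in y]
        ds.append(cohens_d(bx, by))
    ds.sort()
    return ds[int(reps * alpha / 2)], ds[int(reps * (1 - alpha / 2)) - 1]

x = [random.gauss(0.8, 1) for _ in range(40)]  # true effect size 0.8
y = [random.gauss(0.0, 1) for _ in range(40)]
d = cohens_d(x, y)
lo, hi = percentile_boot_ci(x, y)
print(round(d, 2), (round(lo, 2), round(hi, 2)))
```

The BCA variant adjusts these percentile endpoints for bias and skewness in the bootstrap distribution; the noncentral-t method avoids resampling entirely.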
Cumming, Geoff; Maillardet, Robert – Psychological Methods, 2006
Confidence intervals (CIs) give information about replication, but many researchers have misconceptions about this information. One problem is that the percentage of future replication means captured by a particular CI varies markedly, depending on where in relation to the population mean that CI falls. The authors investigated the distribution of…
Descriptors: Intervals, Misconceptions, Mathematical Concepts, Researchers
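The claim above is easy to verify by simulation: compute a 95% CI from one sample, then count how often the means of future same-sized replications land inside it. A minimal sketch under assumed normal data (all parameters illustrative; the 1.96 critical value is a normal approximation):

```python
import math
import random
import statistics

random.seed(5)

def capture_rates(n=30, reps=200, future=200):
    # For each initial sample's CI, estimate the fraction of future
    # replication means it captures.
    rates = []
    for _ in range(reps):
        x = [random.gauss(0, 1) for _ in range(n)]
        m = statistics.mean(x)
        half = 1.96 * statistics.stdev(x) / math.sqrt(n)
        hits = sum(
            1 for _ in range(future)
            if m - half <= statistics.mean([random.gauss(0, 1) for _ in range(n)]) <= m + half
        )
        rates.append(hits / future)
    return rates

rates = capture_rates()
print(round(statistics.mean(rates), 2), round(min(rates), 2), round(max(rates), 2))
```

The average capture percentage sits in the low 80s rather than at 95%, and individual CIs vary widely depending on how close to the population mean they happened to fall, which is the misconception the article targets.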
Drennan, Jonathan; Hyde, Abbey – Assessment & Evaluation in Higher Education, 2008
Traditionally, the measure used to evaluate the impact of an educational programme on student outcomes, and the extent to which students change, is a comparison of the student's pre-test scores with his/her post-test scores. However, this method of evaluating change may be problematic due to the confounding factor of response shift bias when student…
Descriptors: Pretests Posttests, Test Construction, Response Style (Tests), Robustness (Statistics)
Poremba, Kelli D.; Rowell, R. Kevin – 1997
Although an analysis of covariance (ANCOVA) allows for the removal of an uncontrolled source of variation represented by the covariates, this "correction," which is applied to the dependent variable scores, is unfortunately seen by some as a blanket adjustment device that can be used with inadequate consideration for…
Descriptors: Analysis of Covariance, Analysis of Variance, Heuristics, Regression (Statistics)
Blankmeyer, Eric – 1996
A high-breakdown estimator is a robust statistic that can withstand a large amount of contaminated data. In linear regression, high-breakdown estimators can detect outliers and distinguish between good and bad leverage points. This paper summarizes the case for high-breakdown regression and emphasizes the least quartile difference estimator (LQD)…
Descriptors: Computer Software, Estimation (Mathematics), Least Squares Statistics, Regression (Statistics)
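The high-breakdown idea is concrete: replace the sum of squared residuals with a criterion that ignores the worst points. The sketch below uses least median of squares (LMS), a related high-breakdown estimator that is simpler to code than the LQD the paper emphasizes, fitted by brute force over lines through pairs of points. The data are hypothetical.

```python
import itertools
import statistics

# Ten points on the line y = 2x + 1, plus three gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(2, 40), (5, -30), (7, 55)]

def ols(pts):
    # Ordinary least squares: minimizes the SUM of squared residuals,
    # so a single bad point can drag the fit arbitrarily far.
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    b = (sum((x - mx) * (y - my) for x, y in pts)
         / sum((x - mx) ** 2 for x, y in pts))
    return b, my - b * mx

def lms(pts):
    # Least median of squares: minimizes the MEDIAN squared residual,
    # so up to nearly half the data can be contaminated without harm.
    best = None
    for (x1, y1), (x2, y2) in itertools.combinations(pts, 2):
        if x1 == x2:
            continue
        b = (y2 - y1) / (x2 - x1)
        a = y1 - b * x1
        med = statistics.median((y - (a + b * x)) ** 2 for x, y in pts)
        if best is None or med < best[0]:
            best = (med, b, a)
    return best[1], best[2]

ols_fit = lms_fit = None
ols_fit = ols(pts)   # intercept pulled off by the outliers
lms_fit = lms(pts)   # recovers slope 2, intercept 1 exactly here
print(ols_fit, lms_fit)
```

Because ten of the thirteen points lie exactly on the true line, its median squared residual is zero, so the LMS search recovers it despite the contamination.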
Micceri, Theodore – 1990
This paper reports an attempt to identify appropriate and robust location estimators for situations that tend to occur among various types of empirical data. Emphasizing robustness across broad unidentifiable ranges of contamination, an attempt was made to replicate, on a somewhat smaller scale, the definitive Princeton Robustness Study of 1972 to…
Descriptors: Educational Research, Equations (Mathematics), Estimation (Mathematics), Mathematical Models
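The simplest robust location estimators in studies of this kind are trimmed means, which discard a fixed proportion of each tail before averaging. A minimal sketch with hypothetical contaminated data (a normal sample plus a heavy right tail), not the Princeton study's distributions:

```python
import random
import statistics

random.seed(2)

def trimmed_mean(data, prop=0.2):
    # Drop the lowest and highest `prop` fraction of observations, then average.
    data = sorted(data)
    k = int(len(data) * prop)
    return statistics.mean(data[k:len(data) - k])

# 95 well-behaved scores around 50, contaminated by five gross outliers.
clean = [random.gauss(50, 5) for _ in range(95)]
contaminated = clean + [500] * 5
mean_c = statistics.mean(contaminated)
tmean_c = trimmed_mean(contaminated)
print(round(mean_c, 1), round(tmean_c, 1))
```

The ordinary mean is dragged far above the bulk of the data by 5% contamination, while the 20% trimmed mean stays near the center; robustness studies compare many such estimators across contamination patterns.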
Peer reviewed: Merenda, Peter F. – Measurement and Evaluation in Counseling and Development, 1997
Offers suggestions for proper procedures for authors to use--and some pitfalls to avoid--when writing studies using factor analysis methods. Discusses distinctions among different methods of analysis, the adequacy of factor structure, and other notes of caution. Encourages authors to ensure that their research is statistically sound. (RJM)
Descriptors: Data Interpretation, Factor Analysis, Factor Structure, Reliability
Peer reviewed: Rousseau, Ronald – Journal of the American Society for Information Science, 1992
Examines the robustness property of Lotka's law for scholarly papers with more than one author. Adjusted counts for assigning credit to authors proportionally are explained, and two bibliographies are analyzed using frequency distributions that show where the robustness property breaks down. (nine references) (LRW)
Descriptors: Authors, Bibliographies, Bibliometrics, Ratios (Mathematics)
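Lotka's law says the proportion of authors with n papers falls off roughly as 1/n²: if a fraction C of authors wrote one paper, about C/n² wrote n papers. A minimal check on hypothetical productivity counts (the values below are invented for illustration, not from the article's bibliographies):

```python
from collections import Counter

# Hypothetical data: number of papers written by each of 88 authors.
papers_per_author = [1] * 60 + [2] * 15 + [3] * 7 + [4] * 4 + [5] * 2

freq = Counter(papers_per_author)
n_authors = len(papers_per_author)
c = freq[1] / n_authors  # observed proportion of single-paper authors

# Compare observed proportions with the inverse-square prediction C / n^2.
for n in sorted(freq):
    observed = freq[n] / n_authors
    predicted = c / n ** 2
    print(n, round(observed, 3), round(predicted, 3))
```

Assigning fractional credit for multi-authored papers changes these counts, which is where the article finds the inverse-square pattern breaking down.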
Peer reviewed: Cohen, Jacob; Nee, John C. M. – Multivariate Behavioral Research, 1990
The analysis of contingency tables via set correlation allows the assessment of subhypotheses involving contrast functions of the categories of the nominal scales. The robustness of such methods with regard to Type I error and statistical power was studied via a Monte Carlo experiment. (TJH)
Descriptors: Computer Simulation, Monte Carlo Methods, Multivariate Analysis, Power (Statistics)
