Showing 106 to 120 of 123 results
Peer reviewed
Penfield, Douglas A. – Journal of Experimental Education, 1994
Type I error rate and power for the t test, Wilcoxon-Mann-Whitney test, van der Waerden Normal Scores, and Welch-Aspin-Satterthwaite (W) test are compared for two simulated independent random samples from nonnormal distributions. Conditions under which the t test and W test are best to use are discussed. (SLD)
Descriptors: Monte Carlo Methods, Nonparametric Statistics, Power (Statistics), Sample Size
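The kind of Monte Carlo comparison this abstract describes can be sketched in outline: draw two independent samples from the same nonnormal distribution, apply a test at a nominal level, and count false rejections. The sketch below is a minimal stdlib-only illustration, not Penfield's procedure; the exponential distribution, sample size, replication count, and critical value are all assumptions.

```python
import math
import random

def welch_t(x, y):
    """Welch's t statistic for two independent samples
    (unequal variances allowed)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

def type1_rate(n=30, reps=2000, crit=2.0, seed=1):
    """Estimate the Type I error rate: both samples come from the
    same skewed (exponential) population, so the null hypothesis of
    equal means is true and every rejection is an error."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        x = [rng.expovariate(1.0) for _ in range(n)]
        y = [rng.expovariate(1.0) for _ in range(n)]
        if abs(welch_t(x, y)) > crit:
            rejections += 1
    return rejections / reps

print(type1_rate())  # should land near the nominal 0.05 level
```

Repeating this for several distributions and sample sizes, and adding a mean shift to estimate power, reproduces the structure of the comparison the abstract reports.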
Zwick, Rebecca – 1995
This paper describes a study, now in progress, of new methods for representing the sampling variability of Mantel-Haenszel differential item functioning (DIF) results, based on the system for categorizing the severity of DIF that is now in place at the Educational Testing Service. The methods, which involve a Bayesian elaboration of procedures…
Descriptors: Adaptive Testing, Bayesian Statistics, Classification, Computer Assisted Testing
PDF pending restoration
Bush, M. Joan; Schumacker, Randall E. – 1993
The feasibility of quick norms derived by the procedure described by B. D. Wright and M. H. Stone (1979) was investigated. Norming differences between traditionally calculated means and Rasch "quick" means were examined for simulated data sets of varying sample size, test length, and type of distribution. A 5 by 5 by 2 design with a…
Descriptors: Computer Simulation, Item Response Theory, Norm Referenced Tests, Sample Size
Lunneborg, Clifford E. – 1983
The wide availability of large amounts of inexpensive computing power has encouraged statisticians to explore many approaches to a basis for inference. This paper presents one such "computer-intensive" approach: the bootstrap of Bradley Efron. This methodology fits between the cases where it is assumed that the form of the distribution…
Descriptors: Analysis of Variance, Error of Measurement, Estimation (Mathematics), Hypothesis Testing
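Efron's bootstrap, the "computer-intensive" approach this paper presents, is easy to sketch: resample the observed data with replacement many times, recompute the statistic of interest each time, and use the spread of those recomputed values as an estimate of its sampling variability. A minimal stdlib-only illustration follows; the data values and replication count are invented for the example.

```python
import random
import statistics

def bootstrap_se(data, stat=statistics.mean, n_boot=2000, seed=0):
    """Bootstrap standard error of a statistic: resample `data`
    with replacement, recompute `stat` on each resample, and take
    the standard deviation of the resampled values."""
    rng = random.Random(seed)
    n = len(data)
    stats = [stat([rng.choice(data) for _ in range(n)])
             for _ in range(n_boot)]
    return statistics.stdev(stats)

# Invented sample for illustration only.
data = [12.1, 9.8, 11.4, 10.2, 13.0, 9.5, 10.9, 11.7]
se = bootstrap_se(data)
```

Nothing here assumes a distributional form for the population, which is exactly the niche the abstract describes: the bootstrap sits between fully parametric inference and methods that require no distributional information at all.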
PDF pending restoration
Olejnik, Stephen F.; Algina, James – 1985
This paper examined the rank transformation approach to analysis of variance as a solution to the Behrens-Fisher problem. Using simulation methodology four parameters were manipulated for the two group design: (1) ratio of population variances; (2) distribution form; (3) sample size and (4) population mean difference. The results indicated that…
Descriptors: Analysis of Variance, Computer Simulation, Error of Measurement, Hypothesis Testing
Peer reviewed
Broodbooks, Wendy J.; Elmore, Patricia B. – Educational and Psychological Measurement, 1987
The effects of sample size, number of variables, and population value of the congruence coefficient on the sampling distribution of the congruence coefficient were examined. Sample data were generated on the basis of the common factor model, and principal axes factor analyses were performed. (Author/LMO)
Descriptors: Factor Analysis, Mathematical Models, Monte Carlo Methods, Predictor Variables
Olejnik, Stephen; Algina, James – 1987
The purpose of this study was to develop a single procedure for comparing population variances that could be used regardless of distribution form. Bootstrap methodology was used to estimate the variability of the sample variance statistic when the population distribution was normal, platykurtic, or leptokurtic. The data for the study were generated and…
Descriptors: Comparative Analysis, Estimation (Mathematics), Measurement Techniques, Monte Carlo Methods
Tryon, Warren W. – 1984
A normally distributed data set of 1,000 values--ranging from 50 to 150, with a mean of 100 and a standard deviation of 20--was created in order to evaluate the bootstrap method of repeated random sampling. Nine bootstrap samples of N=10 and nine more bootstrap samples of N=25 were randomly selected. One thousand random samples were selected from…
Descriptors: Computer Simulation, Estimation (Mathematics), Higher Education, Monte Carlo Methods
Peer reviewed
Kiger, Jack E.; Wise, Kenneth – College and Research Libraries, 1993
Describes the use of attribute sampling to estimate characteristics of library collections and operations. The nature of statistical sampling and of making a statistical inference is covered, and examples from library situations are given. Tables for determining sample size and evaluating results are included. (Contains six references.) (EAM)
Descriptors: Expectancy Tables, Library Administration, Library Collections, Methods
Johnson, Colleen Cook – 1993
The purpose of this study is to help define the precise nature and limits of the tolerable range in which a researcher may be relatively confident about the statistical validity of his or her research findings, focusing specifically on the statistical validity of results when violating the assumptions associated with the one-way, fixed-effects…
Descriptors: Analysis of Covariance, Analysis of Variance, Comparative Analysis, Computer Simulation
Peer reviewed
Albert, James H. – Journal of Educational Statistics, 1994
Analysis of a two-way sample of means is considered when corresponding population means are believed a priori to satisfy a partial order restriction. Simulation and the Gibbs sampler are used to summarize posterior distributions, and the posterior distribution is used to predict GPAs of first-year students at University of Iowa. (SLD)
Descriptors: Academic Achievement, Bayesian Statistics, College Entrance Examinations, College Freshmen
Becker, Betsy Jane – 1986
This paper discusses distribution theory and power computations for four common "tests of combined significance." These tests are calculated using one-sided sample probabilities or p values from independent studies (or hypothesis tests), and provide an overall significance level for the series of results. Noncentral asymptotic sampling…
Descriptors: Achievement Tests, Correlation, Effect Size, Hypothesis Testing
Rudner, Lawrence M.; Shafer, Mary Morello – 1992
Advances in computer technology are making it possible for educational researchers to use simpler statistical methods to address a wide range of questions with smaller data sets and fewer, and less restrictive, assumptions. This digest introduces computationally intensive statistics, collectively called resampling techniques. Resampling is a…
Descriptors: Computer Oriented Programs, Computer Uses in Education, Educational Research, Elementary Secondary Education
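Resampling techniques of the kind this digest introduces include the permutation test, which can be sketched with nothing beyond the standard library: pool the two samples, reshuffle the group labels repeatedly, and count how often a shuffled difference in means is at least as extreme as the observed one. This is a generic illustration, not the digest's own example; the group sizes and replication count are arbitrary.

```python
import random

def permutation_test(x, y, n_perm=5000, seed=0):
    """Two-sample permutation test on the difference of means.
    Returns the proportion of label reshufflings whose absolute
    mean difference is at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        px, py = pooled[:len(x)], pooled[len(x):]
        if abs(sum(px) / len(px) - sum(py) / len(py)) >= observed:
            count += 1
    return count / n_perm

# Clearly separated groups should yield a small p value.
p = permutation_test([1, 2, 3, 4, 5], [6, 7, 8, 9, 10])
```

The appeal matches the digest's point: the only assumption is exchangeability of observations under the null hypothesis, with no reliance on normality or large-sample theory.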
Lockwood, Robert E.; And Others – 1986
Standards, passing scores, or cut scores have been seen as an element of criterion-referenced tests since their introduction. This paper discusses at least two issues surrounding the establishment of cut scores which appear to need clarification: (1) the theoretical definition of a cut score; and (2) decisions which must be made in selecting a…
Descriptors: Criterion Referenced Tests, Cutting Scores, Error of Measurement, High Schools
Wingersky, Marilyn S.; Lord, Frederic M. – 1983
The sampling errors of maximum likelihood estimates of item-response theory parameters are studied in the case where both people and item parameters are estimated simultaneously. A check on the validity of the standard error formulas is carried out. The effect of varying sample size, test length, and the shape of the ability distribution is…
Descriptors: Error of Measurement, Estimation (Mathematics), Item Banks, Latent Trait Theory