Showing all 12 results
Peer reviewed
Direct link
Berrío, Ángela I.; Herrera, Aura N.; Gómez-Benito, Juana – Journal of Experimental Education, 2019
This study examined the effect of sample size ratio and model misfit on the Type I error rates and power of the Difficulty Parameter Differences procedure using Winsteps. A unidimensional 30-item test with responses from 130,000 examinees was simulated, and four independent variables were manipulated: sample size ratio (20/100/250/500/1000); model…
Descriptors: Sample Size, Test Bias, Goodness of Fit, Statistical Analysis
Peer reviewed
Direct link
Park, Sunyoung; Beretvas, S. Natasha – Journal of Experimental Education, 2019
The log-odds ratio (ln[OR]) is commonly used to quantify a treatment's effect on a dichotomous outcome; estimates are then pooled across studies using inverse-variance (1/v) weights. Calculating the variance of ln[OR] requires all four cell frequencies of the two-group by two-outcome contingency table. While primary studies report the total sample size…
Descriptors: Sample Size, Meta Analysis, Statistical Analysis, Efficiency
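The variance calculation the abstract refers to can be sketched with the standard Woolf estimate and a fixed-effect inverse-variance pool. This is the conventional textbook computation, not the adjustment the study itself investigates, and the function names are illustrative:

```python
import math

def log_odds_ratio(a, b, c, d):
    """ln(OR) from a 2x2 table:
    a = treatment events, b = treatment non-events,
    c = control events,   d = control non-events."""
    return math.log((a * d) / (b * c))

def lnor_variance(a, b, c, d):
    """Woolf's variance estimate: requires all four cell frequencies."""
    return 1 / a + 1 / b + 1 / c + 1 / d

def inverse_variance_pool(tables):
    """Fixed-effect pooled ln(OR) using 1/v weights across studies."""
    num = den = 0.0
    for a, b, c, d in tables:
        v = lnor_variance(a, b, c, d)
        num += log_odds_ratio(a, b, c, d) / v
        den += 1 / v
    return num / den
```

When a primary study reports only the total sample size, the four cell counts (and hence v) are unavailable, which is the gap the abstract points to.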
Peer reviewed
Direct link
Schoeneberger, Jason A. – Journal of Experimental Education, 2016
The design of research studies utilizing binary multilevel models must incorporate knowledge of multiple factors, including estimation method, variance component size, and number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…
Descriptors: Sample Size, Models, Computation, Predictor Variables
Peer reviewed
Direct link
Socha, Alan; DeMars, Christine E. – Educational and Psychological Measurement, 2013
Modeling multidimensional test data with a unidimensional model can result in serious statistical errors, such as bias in item parameter estimates. Many methods exist for assessing the dimensionality of a test. The current study focused on DIMTEST. Using simulated data, the effects of sample size splitting for use with the ATFIND procedure for…
Descriptors: Sample Size, Test Length, Correlation, Test Format
Peer reviewed
Direct link
Li, Ying; Rupp, Andre A. – Educational and Psychological Measurement, 2011
This study investigated the Type I error rate and power of the multivariate extension of the S-χ² statistic using unidimensional and multidimensional item response theory (UIRT and MIRT, respectively) models as well as full-information bifactor (FI-bifactor) models through simulation. Manipulated factors included test length, sample…
Descriptors: Test Length, Item Response Theory, Statistical Analysis, Error Patterns
Peer reviewed
Direct link
Luh, Wei-Ming; Guo, Jiin-Huarng – Journal of Experimental Education, 2009
Sample size determination is an important issue in planning research, yet limitations on sample size have seldom been discussed in the literature. Thus, how to allocate participants to different treatment groups to achieve the desired power is a practical issue that still needs to be addressed when one group's size is fixed. The authors focused…
Descriptors: Sample Size, Research Methodology, Evaluation Methods, Simulation
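The allocation question (how large the second group must be when the first group's size is fixed) can be sketched with a normal-approximation power formula for a standardized mean difference. This is a generic illustration under simplifying assumptions (known common variance, two-sided z-test), not the authors' procedure:

```python
from math import ceil
from statistics import NormalDist

def n2_for_power(n1, d, alpha=0.05, power=0.80):
    """Smallest n2 such that a two-sided two-sample z-test on a
    standardized mean difference d reaches the desired power when
    n1 is fixed. Normal approximation with unit common variance.
    Power condition: d / sqrt(1/n1 + 1/n2) >= z_{1-alpha/2} + z_{power}.
    Returns None when even n2 -> infinity cannot deliver the power."""
    z = NormalDist()
    z_need = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    inv_n2 = (d / z_need) ** 2 - 1 / n1
    if inv_n2 <= 0:
        return None  # the fixed group is the binding constraint
    return ceil(1 / inv_n2)
```

The `None` branch makes the abstract's point concrete: once n1 is fixed, there is a ceiling on attainable power no matter how many participants are added to the other group.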
Peer reviewed
PDF on ERIC: Download full text
Yu, Lei; Moses, Tim; Puhan, Gautam; Dorans, Neil – ETS Research Report Series, 2008
All differential item functioning (DIF) methods require at least a moderate sample size for effective DIF detection. Samples of fewer than 200 examinees pose a challenge for DIF analysis. Smoothing can improve the estimation of the population distribution by preserving major features of an observed frequency distribution while eliminating the…
Descriptors: Test Bias, Item Response Theory, Sample Size, Evaluation Criteria
Peer reviewed
Direct link
Fidalgo, Ángel M.; Ferreres, Doris; Muñiz, José – Educational and Psychological Measurement, 2004
Sample-size restrictions limit the contingency table approaches based on asymptotic distributions, such as the Mantel-Haenszel (MH) procedure, for detecting differential item functioning (DIF) in many practical applications. Within this framework, the present study investigated the power and Type I error performance of empirical and inferential…
Descriptors: Test Bias, Evaluation Methods, Sample Size, Error Patterns
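The Mantel-Haenszel procedure the entry refers to can be sketched in its textbook form, pooling 2x2 tables across matched ability strata; the ETS delta-scale rescaling shown is the conventional one, and the study's empirical and inferential variants are not reproduced here:

```python
import math

def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio across matched score strata.
    Each stratum is (ref_correct, ref_incorrect,
                     focal_correct, focal_incorrect)."""
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

def mh_d_dif(strata):
    """ETS delta-scale DIF statistic: -2.35 * ln(alpha_MH).
    Values near 0 indicate little DIF; |value| > 1.5 is the usual
    flag for large DIF in the ETS classification."""
    return -2.35 * math.log(mh_odds_ratio(strata))
```

The asymptotic character of the associated chi-square test is exactly what breaks down in the small samples the study targets, motivating its empirical (exact-style) alternatives.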
Peer reviewed
PDF on ERIC: Download full text
Zimmerman, Donald W. – Psicologica: International Journal of Methodology and Experimental Psychology, 2004
It is well known that the two-sample Student t test fails to maintain its significance level when the variances of treatment groups are unequal, and, at the same time, sample sizes are unequal. However, introductory textbooks in psychology and education often maintain that the test is robust to variance heterogeneity when sample sizes are equal.…
Descriptors: Sample Size, Nonparametric Statistics, Probability, Statistical Analysis
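The failure the abstract describes can be demonstrated with a small Monte Carlo sketch comparing the pooled-variance (Student) t statistic with the Welch statistic under a true null hypothesis. Using 1.96 as the critical value is a normal approximation, a deliberate simplification noted in the comments:

```python
import random
import statistics
from math import sqrt

def t_stats(x, y):
    """Return (student_t, welch_t) for two samples."""
    nx, ny = len(x), len(y)
    mx, my = statistics.fmean(x), statistics.fmean(y)
    vx, vy = statistics.variance(x), statistics.variance(y)
    # Pooled-variance (Student) t: assumes equal variances
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    student = (mx - my) / sqrt(sp2 * (1 / nx + 1 / ny))
    # Welch t: separate variance estimates
    welch = (mx - my) / sqrt(vx / nx + vy / ny)
    return student, welch

def rejection_rates(n_reps=4000, crit=1.96, seed=1):
    """Empirical Type I error under H0 when the smaller group has the
    larger variance. crit=1.96 is a normal-approximation critical
    value; exact t critical values differ slightly."""
    rng = random.Random(seed)
    hits_s = hits_w = 0
    for _ in range(n_reps):
        x = [rng.gauss(0, 1) for _ in range(50)]  # large n, small sigma
        y = [rng.gauss(0, 4) for _ in range(10)]  # small n, large sigma
        s, w = t_stats(x, y)
        hits_s += abs(s) > crit
        hits_w += abs(w) > crit
    return hits_s / n_reps, hits_w / n_reps
```

With this configuration the pooled variance badly underestimates the standard error of the mean difference, so the Student test rejects far more often than the nominal 5%, while the Welch rate stays close to it.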
Peer reviewed
Direct link
Wollack, James A. – Applied Measurement in Education, 2006
Many of the currently available statistical indexes to detect answer copying lack sufficient power at small α levels or when the amount of copying is relatively small. Furthermore, there is no one index that is uniformly best. Depending on the type or amount of copying, certain indexes are better than others. The purpose of this article was…
Descriptors: Statistical Analysis, Item Analysis, Test Length, Sample Size
PDF pending restoration
Barcikowski, Robert S. – 1973
In most behavioral science research, very little attention is given to the probability of committing a Type II error, i.e., the probability of failing to reject a false null hypothesis. Recent publications by Cohen have led to insight on this topic for the fixed-effects analysis of variance and covariance. This paper provides social scientists…
Descriptors: Analysis of Covariance, Analysis of Variance, Behavioral Science Research, Error Patterns
Peer reviewed
Direct link
Lix, Lisa M.; Algina, James; Keselman, H. J. – Multivariate Behavioral Research, 2003
The approximate degrees of freedom Welch-James (WJ) and Brown-Forsythe (BF) procedures for testing within-subjects effects in multivariate groups by trials repeated measures designs were investigated under departures from covariance homogeneity and normality. Empirical Type I error and power rates were obtained for least-squares estimators and…
Descriptors: Interaction, Freedom, Sample Size, Multivariate Analysis