Showing all 8 results
Peer reviewed
Son, Sookyoung; Lee, Hyunjung; Jang, Yoona; Yang, Junyeong; Hong, Sehee – Educational and Psychological Measurement, 2019
The purpose of the present study is to compare nonnormal distributions (i.e., t, skew-normal, skew-t with equal skew and skew-t with unequal skew) in growth mixture models (GMMs) under diverse conditions for the number of time points, sample size, and intercept skewness. To carry out this research, two simulation studies were conducted with…
Descriptors: Statistical Distributions, Statistical Analysis, Structural Equation Models, Comparative Analysis
Peer reviewed
Green, Samuel; Xu, Yuning; Thompson, Marilyn S. – Educational and Psychological Measurement, 2018
Parallel analysis (PA) assesses the number of factors in exploratory factor analysis. Traditionally PA compares the eigenvalues for a sample correlation matrix with the eigenvalues for correlation matrices for 100 comparison datasets generated such that the variables are independent, but this approach uses the wrong reference distribution. The…
Descriptors: Factor Analysis, Accuracy, Statistical Distributions, Comparative Analysis
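The Green, Xu, and Thompson (2018) abstract above describes the traditional parallel analysis procedure: eigenvalues of the sample correlation matrix are compared with eigenvalues from correlation matrices computed on comparison datasets whose variables are independent. The sketch below illustrates only that traditional procedure, not the corrected reference distribution the article investigates; the function name, defaults, and retention rule are assumptions.

```python
# Minimal sketch of traditional parallel analysis (PA), assuming a numeric
# data matrix X with observations in rows and variables in columns.
# Hypothetical function and variable names for illustration only.
import numpy as np

def parallel_analysis(X, n_reference=100, quantile=0.95, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape

    # Eigenvalues of the observed correlation matrix, sorted descending.
    sample_eigs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]

    # Eigenvalues from correlation matrices of reference datasets with independent variables.
    ref_eigs = np.empty((n_reference, p))
    for r in range(n_reference):
        Z = rng.standard_normal((n, p))
        ref_eigs[r] = np.sort(np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False)))[::-1]

    # Retain factors whose observed eigenvalue exceeds the reference quantile.
    threshold = np.quantile(ref_eigs, quantile, axis=0)
    return int(np.sum(sample_eigs > threshold))

# Example usage: k = parallel_analysis(rng_data_matrix) for a (n x p) array.
```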
Peer reviewed
Campitelli, Guillermo; Macbeth, Guillermo; Ospina, Raydonal; Marmolejo-Ramos, Fernando – Educational and Psychological Measurement, 2017
We present three strategies to replace the null hypothesis statistical significance testing approach in psychological research: (1) visual representation of cognitive processes and predictions, (2) visual representation of data distributions and choice of the appropriate distribution for analysis, and (3) model comparison. The three strategies…
Descriptors: Research Methodology, Hypothesis Testing, Psychology, Social Science Research
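The third strategy listed in the Campitelli et al. (2017) abstract, model comparison, can be illustrated with a small sketch that fits two candidate distributions to the same data by maximum likelihood and compares them by AIC. The example data and candidate models below are assumptions for illustration and are not taken from the article.

```python
# Illustrative model comparison by AIC: normal vs. Student's t fit to one sample.
# Hypothetical example data; not from Campitelli et al. (2017).
from scipy import stats

y = stats.t.rvs(df=3, loc=50, scale=10, size=200, random_state=1)

def aic(loglik, k):
    # AIC = 2k - 2 * log-likelihood; lower values indicate the preferred model.
    return 2 * k - 2 * loglik

# Fit both models by maximum likelihood and compute their log-likelihoods.
mu, sigma = stats.norm.fit(y)
ll_norm = stats.norm.logpdf(y, mu, sigma).sum()

df_t, loc_t, scale_t = stats.t.fit(y)
ll_t = stats.t.logpdf(y, df_t, loc_t, scale_t).sum()

print("AIC normal:", aic(ll_norm, k=2))
print("AIC t     :", aic(ll_t, k=3))
```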
Peer reviewed
Paek, Insu; Park, Hyun-Jeong; Cai, Li; Chi, Eunlim – Educational and Psychological Measurement, 2014
Typically, longitudinal growth modeling based on item response theory (IRT) requires repeated measures data from a single group with the same test design. If operational or item exposure problems are present, the same test may not be employed to collect data for longitudinal analyses, and tests at multiple time points are constructed with unique…
Descriptors: Item Response Theory, Comparative Analysis, Test Items, Equated Scores
Peer reviewed
Rasmussen, Jeffrey Lee; Dunlap, William P. – Educational and Psychological Measurement, 1991
Results of a Monte Carlo study with 4 populations (3,072 conditions) indicate that when distributions depart markedly from normality, nonparametric analysis and parametric analysis of transformed data show superior power to parametric analysis of raw data. Under conditions studied, parametric analysis of transformed data is more powerful than…
Descriptors: Comparative Analysis, Computer Simulation, Monte Carlo Methods, Power (Statistics)
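As a toy illustration of the kind of comparison summarized in the Rasmussen and Dunlap (1991) abstract, the sketch below estimates power for a t test on raw data, a t test on log-transformed data, and the nonparametric Mann-Whitney test under one skewed (lognormal) condition. The population, sample size, and effect size are assumptions and do not reproduce the article's 3,072-condition design.

```python
# Toy Monte Carlo power comparison under a skewed (lognormal) population:
# parametric test on raw data vs. on log-transformed data vs. nonparametric test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, reps, alpha = 25, 2000, 0.05
shift = 0.5  # assumed group difference on the log scale

hits = {"raw t": 0, "log t": 0, "Mann-Whitney": 0}
for _ in range(reps):
    g1 = rng.lognormal(mean=0.0, sigma=1.0, size=n)
    g2 = rng.lognormal(mean=shift, sigma=1.0, size=n)

    if stats.ttest_ind(g1, g2).pvalue < alpha:
        hits["raw t"] += 1
    if stats.ttest_ind(np.log(g1), np.log(g2)).pvalue < alpha:
        hits["log t"] += 1
    if stats.mannwhitneyu(g1, g2, alternative="two-sided").pvalue < alpha:
        hits["Mann-Whitney"] += 1

for name, count in hits.items():
    print(name, "estimated power:", round(count / reps, 3))
```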
Peer reviewed
Cornwell, John M. – Educational and Psychological Measurement, 1993
A comparison is made of the power and actual alpha levels of three tests of homogeneity for independent product-moment correlation coefficients using Monte Carlo methods while selectively studying sample size and varying the number of correlation reliabilities. How robust these are in applied work is discussed. (SLD)
Descriptors: Comparative Analysis, Correlation, Error of Measurement, Monte Carlo Methods
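The Cornwell (1993) abstract above concerns tests of homogeneity for independent correlations; the best-known such test is the chi-square statistic on Fisher z-transformed correlations. A minimal sketch of that standard test follows, with made-up sample correlations and sample sizes; the article compares three such tests, and this is only one of them.

```python
# Standard chi-square test of homogeneity for k independent correlations,
# based on Fisher's z transformation. Sample values are hypothetical.
import numpy as np
from scipy import stats

r = np.array([0.30, 0.45, 0.25])   # assumed sample correlations
n = np.array([50, 80, 65])         # corresponding sample sizes

z = np.arctanh(r)                  # Fisher z transform
w = n - 3                          # weights (inverse variances of z)
z_bar = np.sum(w * z) / np.sum(w)

Q = np.sum(w * (z - z_bar) ** 2)   # ~ chi-square with k - 1 df under homogeneity
p = stats.chi2.sf(Q, df=len(r) - 1)
print("Q =", round(Q, 3), " p =", round(p, 4))
```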
Peer reviewed
Parshall, Cynthia G.; Kromrey, Jeffrey D. – Educational and Psychological Measurement, 1996
Power and Type I error rates were estimated for contingency tables with small sample sizes for the following four types of tests: (1) Pearson's chi-square; (2) chi-square with Yates's continuity correction; (3) the likelihood ratio test; and (4) Fisher's Exact Test. Various marginal distributions, sample sizes, and effect sizes were examined. (SLD)
Descriptors: Chi Square, Comparative Analysis, Effect Size, Estimation (Mathematics)
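The four tests named in the Parshall and Kromrey (1996) abstract can all be computed for a 2x2 table with scipy; a minimal sketch follows, using a made-up table of counts purely for illustration.

```python
# The four small-sample contingency-table tests on a hypothetical 2x2 table.
import numpy as np
from scipy import stats

table = np.array([[8, 2],
                  [3, 7]])  # assumed counts for illustration

# (1) Pearson's chi-square (no continuity correction)
chi2, p, _, _ = stats.chi2_contingency(table, correction=False)
print("Pearson chi-square   p =", round(p, 4))

# (2) Chi-square with Yates's continuity correction
_, p_yates, _, _ = stats.chi2_contingency(table, correction=True)
print("Yates-corrected      p =", round(p_yates, 4))

# (3) Likelihood ratio (G) test via the log-likelihood ratio statistic
_, p_lr, _, _ = stats.chi2_contingency(table, correction=False,
                                        lambda_="log-likelihood")
print("Likelihood ratio (G) p =", round(p_lr, 4))

# (4) Fisher's exact test
_, p_fisher = stats.fisher_exact(table, alternative="two-sided")
print("Fisher's exact       p =", round(p_fisher, 4))
```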
Peer reviewed
Hankins, Janette A. – Educational and Psychological Measurement, 1990
The effects of a fixed and variable entry procedure on bias and information of a Bayesian adaptive test were compared. Neither procedure produced biased ability estimates on the average. Bias at the distribution extremes, efficiency curves, item subsets generated for administration, and items required to reach termination are discussed. (TJH)
Descriptors: Adaptive Testing, Aptitude Tests, Bayesian Statistics, Comparative Analysis
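The Hankins (1990) abstract above compares fixed and variable entry procedures for a Bayesian adaptive test. The sketch below is a rough illustration of that general idea under assumed conditions: a small 2PL item bank, maximum-information item selection, a grid-based posterior with EAP ability estimates, and a fixed-length stopping rule. None of these choices are taken from the article.

```python
# Rough sketch of a Bayesian adaptive test, contrasting a fixed entry point
# with an entry point drawn from the prior. Item bank, parameters, and the
# fixed-length stopping rule are all assumptions for illustration.
import numpy as np

rng = np.random.default_rng(7)
theta_grid = np.linspace(-4, 4, 161)
prior = np.exp(-0.5 * theta_grid**2)   # standard-normal prior on ability
prior /= prior.sum()

a = rng.uniform(0.8, 2.0, size=50)     # item discriminations
b = rng.uniform(-2.5, 2.5, size=50)    # item difficulties

def p_correct(theta, j):
    # 2PL probability of a correct response to item(s) j at ability theta.
    return 1.0 / (1.0 + np.exp(-a[j] * (theta - b[j])))

def administer(true_theta, start_theta, n_items=15):
    posterior = prior.copy()
    est = start_theta
    used = []
    for _ in range(n_items):
        # Select the unused item with maximum Fisher information at the current estimate.
        p_all = p_correct(est, np.arange(len(a)))
        info = a**2 * p_all * (1 - p_all)
        info[used] = -np.inf
        j = int(np.argmax(info))
        used.append(j)

        # Simulate the response and update the posterior over the ability grid.
        u = rng.random() < p_correct(true_theta, j)
        like = p_correct(theta_grid, j) if u else 1 - p_correct(theta_grid, j)
        posterior *= like
        posterior /= posterior.sum()
        est = float(np.sum(theta_grid * posterior))  # EAP ability estimate
    return est

print("fixed entry (start at 0.0):", administer(true_theta=1.0, start_theta=0.0))
print("variable entry (prior draw):", administer(true_theta=1.0, start_theta=float(rng.normal())))
```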