Showing 1 to 15 of 24 results
Peer reviewed
Lingbo Tong; Wen Qu; Zhiyong Zhang – Grantee Submission, 2025
Factor analysis is widely used to identify latent factors underlying observed variables. This paper presents a comprehensive comparative study of two widely used methods for determining the optimal number of factors in factor analysis, the K1 rule and parallel analysis, along with a more recently developed method, the bass-ackward method.…
Descriptors: Factor Analysis, Monte Carlo Methods, Statistical Analysis, Sample Size
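As a hedged illustration of the two methods this entry compares (a minimal sketch, not code from the paper; the simulated two-factor data are invented for demonstration), parallel analysis retains factors whose observed eigenvalues exceed those of random data, while the K1 rule retains eigenvalues greater than one:

```python
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Horn's parallel analysis vs. the K1 (eigenvalue > 1) rule.

    Returns (pa_count, k1_count): the number of factors retained by
    comparing observed eigenvalues of the correlation matrix against
    the mean eigenvalues from random data of the same shape, and by
    counting observed eigenvalues greater than 1.
    """
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eig = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand_eig = np.zeros(p)
    for _ in range(n_iter):
        r = rng.standard_normal((n, p))
        rand_eig += np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    rand_eig /= n_iter
    return int(np.sum(obs_eig > rand_eig)), int(np.sum(obs_eig > 1.0))

# Illustrative data: six variables driven by two independent factors.
rng = np.random.default_rng(1)
f = rng.standard_normal((500, 2))
X = np.hstack([f[:, [0]] + 0.5 * rng.standard_normal((500, 3)),
               f[:, [1]] + 0.5 * rng.standard_normal((500, 3))])
```

With such a clean two-factor structure both rules agree; the paper's interest is in the many conditions (sample size, loading strength) where they diverge.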
Peer reviewed
Hout, Michael C.; Goldinger, Stephen D.; Ferguson, Ryan W. – Journal of Experimental Psychology: General, 2013
Although traditional methods to collect similarity data (for multidimensional scaling [MDS]) are robust, they share a key shortcoming. Specifically, the possible pairwise comparisons in any set of objects grow rapidly as a function of set size. This leads to lengthy experimental protocols, or procedures that involve scaling stimulus subsets. We…
Descriptors: Visual Stimuli, Research Methodology, Problem Solving, Multidimensional Scaling
Dong, Nianbo – Society for Research on Educational Effectiveness, 2011
The purpose of this study is to use Monte Carlo simulation to compare several propensity score methods for approximating a factorial experimental design, and to identify the best approaches for reducing bias and mean square error in parameter estimates of the main and interaction effects of two factors. Previous studies focused more on unbiased estimates of…
Descriptors: Research Design, Probability, Monte Carlo Methods, Simulation
Swaminathan, Hariharan; Horner, Robert H.; Rogers, H. Jane; Sugai, George – Society for Research on Educational Effectiveness, 2012
This study is aimed at addressing the criticisms that have been leveled at the currently available statistical procedures for analyzing single subject designs (SSD). One of the vexing problems in the analysis of SSD is in the assessment of the effect of intervention. Serial dependence notwithstanding, the linear model approach that has been…
Descriptors: Evidence, Effect Size, Research Methodology, Intervention
Peer reviewed
Young, Michael E.; Clark, M. H.; Goffus, Andrea; Hoane, Michael R. – Learning and Motivation, 2009
Morris water maze data are most commonly analyzed using repeated measures analysis of variance in which daily test sessions are analyzed as an unordered categorical variable. This approach, however, may lack power, relies heavily on post hoc tests of daily performance that can complicate interpretation, and does not target the nonlinear trends…
Descriptors: Monte Carlo Methods, Regression (Statistics), Research Methodology, Simulation
Peer reviewed
Myung, Jay I.; Pitt, Mark A. – Psychological Review, 2009
Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…
Descriptors: Research Design, Cognitive Psychology, Information Retrieval, Classification
Barnette, J. Jackson; McLean, James E. – 1999
Four of the most commonly used multiple comparison procedures were compared for pairwise comparisons, with respect to control of per-experiment and experimentwise Type I error rates, when conducted as protected or unprotected tests. The methods are: (1) Dunn-Bonferroni; (2) Dunn-Sidak; (3) Holm's sequentially rejective; and (4) Tukey's honestly…
Descriptors: Comparative Analysis, Monte Carlo Methods, Research Methodology, Selection
Barnette, J. Jackson; McLean, James E. – 1998
Conventional wisdom holds that the omnibus F-test must be significant before post hoc pairwise multiple comparisons are conducted. However, there is little empirical evidence supporting this practice. Protected tests are conducted only after a significant omnibus F-test, while unprotected tests are conducted without regard to the significance of the…
Descriptors: Comparative Analysis, Monte Carlo Methods, Research Methodology, Sample Size
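For readers unfamiliar with the protected/unprotected distinction, a small Monte Carlo sketch in the spirit of this entry (not the authors' code; group count, sample size, and alpha are illustrative) estimates the experimentwise Type I error rate of all pairwise t-tests with and without the omnibus F gate, under a true null:

```python
import numpy as np
from scipy import stats

def type1_rates(k=4, n=15, alpha=0.05, n_sims=2000, seed=0):
    """Experimentwise Type I error for unprotected vs. F-protected
    pairwise t-tests when all group means are truly equal.

    An "experimentwise error" occurs when at least one of the k(k-1)/2
    pairwise tests is significant; the protected version additionally
    requires the omnibus one-way ANOVA F-test to be significant.
    """
    rng = np.random.default_rng(seed)
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    unprot = prot = 0
    for _ in range(n_sims):
        g = rng.standard_normal((k, n))            # null: all means equal
        any_sig = any(stats.ttest_ind(g[i], g[j]).pvalue < alpha
                      for i, j in pairs)
        unprot += any_sig
        f_sig = stats.f_oneway(*g).pvalue < alpha  # omnibus F gate
        prot += (any_sig and f_sig)
    return unprot / n_sims, prot / n_sims
```

Under these settings the unprotected rate runs well above the nominal alpha, while the F-protected rate stays near it, which is the trade-off the study examines empirically.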
Peer reviewed
Isham, Steven P.; Donoghue, John R. – Applied Psychological Measurement, 1998
Used Monte Carlo methods to compare several measures of item-parameter drift, manipulating numbers of examinees and items and numbers of drift items. Overall, Lord's chi square (F. Lord, 1968) measure was the most effective in identifying items that exhibited drift. Discusses the usefulness of other methods. (SLD)
Descriptors: Chi Square, Comparative Analysis, Monte Carlo Methods, Research Methodology
Donoghue, John R. – 1995
A Monte Carlo study compared the usefulness of six variable weighting methods for cluster analysis. Data were 100 bivariate observations from 2 subgroups, generated according to a finite normal mixture model. Subgroup size, within-group correlation, within-group variance, and distance between subgroup centroids were manipulated. Of the clustering…
Descriptors: Algorithms, Cluster Analysis, Comparative Analysis, Correlation
Klockars, Alan J.; Hancock, Gregory R. – 1990
Two strategies, derived from J. P. Schaffer (1986), were compared as tests of significance for a complete set of planned orthogonal contrasts. The procedures both maintain an experimentwise error rate at or below alpha, but differ in the manner in which they test the contrast with the largest observed difference. One approach proceeds directly to…
Descriptors: Comparative Analysis, Hypothesis Testing, Monte Carlo Methods, Research Methodology
Barnette, J. Jackson; McLean, James E. – 1998
Tukey's Honestly Significant Difference (HSD) procedure (J. Tukey, 1953) is probably the most recommended and widely used procedure for controlling the Type I error rate when making multiple pairwise comparisons as follow-ups to a significant omnibus F test. This study compared observed Type I error rates with nominal alphas of 0.01, 0.05, and 0.10 for…
Descriptors: Comparative Analysis, Error of Measurement, Monte Carlo Methods, Research Methodology
Peer reviewed
Quintana, Stephen M.; Maxwell, Scott E. – Journal of Educational Statistics, 1994
Seven univariate procedures for testing omnibus null hypotheses for data gathered from repeated measures designs were evaluated, comparing five alternative approaches with two more traditional procedures. Results suggest that the alternatives are improvements. The most effective alternate procedure in controlling Type I error rates is discussed.…
Descriptors: Comparative Analysis, Hypothesis Testing, Monte Carlo Methods, Research Methodology
Peer reviewed
Wilson, Gale A.; Martin, Samuel A. – Educational and Psychological Measurement, 1983
Either Bartlett's chi-square test of sphericity or Steiger's chi-square test can be used to test the significance of a correlation matrix to determine the appropriateness of factor analysis. They were evaluated using computer-generated correlation matrices. Steiger's test is recommended due to its increased power and computational simplicity.…
Descriptors: Comparative Analysis, Correlation, Factor Analysis, Hypothesis Testing
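Bartlett's test of sphericity, one of the two tests this entry evaluates, has a simple closed form; the sketch below is an illustration under standard assumptions (multivariate normal data, correlation matrix R), not code from the article, and the five-variable example data are invented:

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data):
    """Bartlett's chi-square test of sphericity.

    H0: the population correlation matrix is the identity (the
    variables are uncorrelated, so factor analysis is not warranted).
    Statistic: chi2 = -[(n-1) - (2p+5)/6] * ln|R|, df = p(p-1)/2.
    """
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    chi2 = -((n - 1) - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return chi2, stats.chi2.sf(chi2, df)

# Five variables sharing one common source: H0 should be rejected.
rng = np.random.default_rng(0)
common = rng.standard_normal((200, 1))
Y = rng.standard_normal((200, 5)) + common
chi2, pval = bartlett_sphericity(Y)
```

A tiny p-value here signals that the correlation matrix departs from the identity, i.e., factor analysis is at least formally appropriate.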
Barcikowski, Robert S.; Elliott, Ronald S. – 1996
A large number of pairwise multiple comparison procedures (P-MCPs) have recently been introduced to the educational research community. The use of these P-MCPs with single-group repeated measures data was studied through an exploratory Monte Carlo study of P-MCPs that have been shown to control Type II error and familywise Type I error…
Descriptors: Comparative Analysis, Educational Research, Monte Carlo Methods, Power (Statistics)