Kalinowski, Steven T. – Educational and Psychological Measurement, 2019
Item response theory (IRT) is a statistical paradigm for developing educational tests and assessing students. IRT, however, currently lacks an established graphical method for examining model fit for the three-parameter logistic model, the most flexible and popular IRT model in educational testing. A method is presented here to do this. The graph,…
Descriptors: Item Response Theory, Educational Assessment, Goodness of Fit, Probability
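The three-parameter logistic (3PL) model the abstract refers to gives the probability of a correct response as a function of examinee ability theta, with discrimination a, difficulty b, and a guessing lower asymptote c. A minimal sketch (parameter values here are illustrative):

```python
import math

def p_correct_3pl(theta, a, b, c):
    """P(correct) under the three-parameter logistic model:
    c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the curve sits exactly halfway between the guessing floor c and 1;
# far below b it approaches c rather than 0, which is what makes the model flexible.
mid = p_correct_3pl(theta=0.0, a=1.5, b=0.0, c=0.2)
low = p_correct_3pl(theta=-6.0, a=1.5, b=0.0, c=0.2)
```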
Shear, Benjamin R.; Zumbo, Bruno D. – Educational and Psychological Measurement, 2013
Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
Descriptors: Error of Measurement, Multiple Regression Analysis, Data Analysis, Computer Simulation
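The inflation mechanism the abstract describes can be simulated with the standard library alone. In this sketch (all values illustrative, not the authors' scenarios), Y depends only on a true score T1; a second predictor correlates with T1 but has no effect on Y; and T1 is observed with measurement error, so partialling out the observed X1 fails to fully control T1 and the null predictor is rejected far more often than the nominal 5%:

```python
import math
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def partial_t(y, x2, x1):
    # t statistic for X2's partial association with Y controlling for X1
    # (equivalent to the t test of X2's slope in the two-predictor regression).
    ryx2, ryx1, r12 = pearson(y, x2), pearson(y, x1), pearson(x1, x2)
    pr = (ryx2 - ryx1 * r12) / math.sqrt((1 - ryx1**2) * (1 - r12**2))
    n = len(y)
    return pr * math.sqrt((n - 3) / (1 - pr**2))

rng = random.Random(1)
n, reps, rejections = 200, 300, 0
for _ in range(reps):
    t1 = [rng.gauss(0, 1) for _ in range(n)]                         # true predictor
    x2 = [0.7 * v + math.sqrt(0.51) * rng.gauss(0, 1) for v in t1]   # null predictor, correlated with t1
    y  = [v + rng.gauss(0, 1) for v in t1]                           # Y depends on t1 only
    x1 = [v + rng.gauss(0, 1) for v in t1]                           # t1 observed with error
    if abs(partial_t(y, x2, x1)) > 1.96:                             # nominal alpha = .05
        rejections += 1
rate = rejections / reps  # far above .05: the error-laden X1 under-controls t1
```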
Jin, Kuan-Yu; Wang, Wen-Chung – Educational and Psychological Measurement, 2014
Extreme response style (ERS) is a systematic tendency for a person to endorse extreme options (e.g., strongly disagree, strongly agree) on Likert-type or rating-scale items. In this study, we develop a new class of item response theory (IRT) models to account for ERS so that the target latent trait is free from the response style and the tendency…
Descriptors: Item Response Theory, Research Methodology, Bayesian Statistics, Response Style (Tests)
Gilpin, Andrew R. – Educational and Psychological Measurement, 2008
Rosenthal and Rubin introduced a general effect size index, r[subscript equivalent], for use in meta-analyses of two-group experiments; it employs p values from reports of the original studies to determine an equivalent t test and the corresponding point-biserial correlation coefficient. The present investigation used Monte Carlo-simulated…
Descriptors: Effect Size, Correlation, Meta Analysis, Monte Carlo Methods
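The final step of the index is the standard conversion from an equivalent two-group t statistic to a point-biserial correlation. A minimal sketch (the preceding p-to-t step requires an inverse t distribution and is omitted here):

```python
import math

def r_equivalent(t, df):
    """Point-biserial correlation equivalent to a two-group t statistic
    with df = N - 2: r = t / sqrt(t^2 + df)."""
    return t / math.sqrt(t * t + df)

r = r_equivalent(2.0, 4)  # 2 / sqrt(8), about 0.707
```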

Klockars, Alan J.; Hancock, Gregory R. – Educational and Psychological Measurement, 1994
Differences between per experiment (PE) and experimentwise (EW) error rates were studied through simulation for several multiple-comparison procedures for both pairwise comparisons and planned contrasts. Results suggest ways to control PE rates through new multiple-comparison procedures that maximize experimental power while controlling Type I…
Descriptors: Comparative Analysis, Computer Simulation, Research Methodology

Alexander, Ralph A.; And Others – Educational and Psychological Measurement, 1985
A comparison of measures of association for 2x2 data was carried out by computer analysis. For each of 1,539 tables, 14 measures of association were calculated and evaluated. A measure based on the odds-ratio (Chambers, 1982) was most accurate in capturing the rho underlying a majority of the tables. (Author/BW)
Descriptors: Computer Simulation, Correlation, Matrices, Research Methodology
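Odds-ratio-based measures for a 2x2 table are easy to state concretely. The sketch below shows the odds ratio, Yule's Q, and the classical cosine approximation to the underlying tetrachoric rho; the cosine formula is given as one well-known odds-ratio-based estimate, not necessarily the exact Chambers (1982) measure the study found most accurate:

```python
import math

def odds_ratio(a, b, c, d):
    """Cells of a 2x2 table: a, b = first row; c, d = second row."""
    return (a * d) / (b * c)

def yules_q(a, b, c, d):
    # Yule's Q, a simple transform of the odds ratio onto [-1, 1].
    return (a * d - b * c) / (a * d + b * c)

def tetrachoric_cosine(a, b, c, d):
    # Classical cosine approximation to the underlying (tetrachoric) rho.
    return math.cos(math.pi / (1.0 + math.sqrt(odds_ratio(a, b, c, d))))
```

Under independence (all cells equal) every measure is 0; a table like (20, 10, 10, 20) has odds ratio 4 and cosine-approximated rho 0.5.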

Mossholder, Kevin W.; And Others – Educational and Psychological Measurement, 1990
A convention commonly used to describe interaction effects within moderated regression frameworks was examined through logical exposition and a Monte Carlo approach to simulate various moderator conditions. Results, which indicate that the convention may lead to incorrect inferences, are discussed in terms of interpreting moderator effects. (SLD)
Descriptors: Computer Simulation, Data Interpretation, Interaction, Monte Carlo Methods

Huitema, Bradley E.; McKean, Joseph W. – Educational and Psychological Measurement, 1994
Effectiveness of jackknife methods in reducing bias in estimation of the lag-1 autocorrelation parameter rho[subscript 1] was evaluated through a Monte Carlo study using sample sizes ranging from 6 to 500. These estimates appear less biased in the small-sample case than many that have been investigated recently. (SLD)
Descriptors: Computer Simulation, Estimation (Mathematics), Monte Carlo Methods, Sample Size
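The bias-reduction idea can be illustrated with one simple grouped (split-half) jackknife: combine the full-series estimate with estimates from the two halves so that the first-order small-sample bias cancels. This is a sketch of the general technique, not necessarily the exact estimators the authors evaluated:

```python
import random

def lag1_autocorr(x):
    """Conventional lag-1 autocorrelation estimate (biased toward roughly -1/n)."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t + 1] - m) for t in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def split_half_jackknife(x):
    # First-order grouped jackknife: 2 * (full estimate) - mean of half estimates.
    h = len(x) // 2
    return 2.0 * lag1_autocorr(x) - 0.5 * (lag1_autocorr(x[:h]) + lag1_autocorr(x[h:]))

rng = random.Random(3)
reps, n = 2000, 10
plain = jack = 0.0
for _ in range(reps):
    x = [rng.gauss(0, 1) for _ in range(n)]  # white noise: true rho1 = 0
    plain += lag1_autocorr(x) / reps
    jack += split_half_jackknife(x) / reps
# plain averages well below 0; the jackknifed estimate sits much closer to the true 0
```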

Fava, Joseph L.; Velicer, Wayne F. – Educational and Psychological Measurement, 1996
The consequences of underextracting factors and components within and between the methods of maximum likelihood factor analysis and principal components analysis were examined through computer simulation. The principal components score and the factor score estimate (T. W. Anderson and H. Rubin, 1956) tended to become different with…
Descriptors: Computer Simulation, Estimation (Mathematics), Factor Analysis, Factor Structure

McCarroll, David; And Others – Educational and Psychological Measurement, 1992
Monte Carlo simulations were used to examine three cases using analyses of variance (ANOVAs) sequentially. Simulation results show that Type I error rates increase when using ANOVAs in this sequential fashion, and the detrimental effect is greatest in situations in which researchers would most likely use ANOVAs sequentially. (SLD)
Descriptors: Analysis of Variance, Computer Simulation, Measurement Techniques, Monte Carlo Methods

Lathrop, Richard G.; Williams, Janice E. – Educational and Psychological Measurement, 1987
A Monte Carlo study, involving 6,000 "computer subjects" and three raters, explored the reliability of the inverse scree test for cluster analysis. Results indicate that the inverse scree may be a useful and reliable cluster analytic technique for determining the number of true groups. (TJH)
Descriptors: Cluster Analysis, Computer Simulation, Interrater Reliability, Monte Carlo Methods

Boehnke, Klaus – Educational and Psychological Measurement, 1984
The effects of several conditions not covered by the classical assumptions of the F and H tests (e.g., correlation of mean and sample size) were examined in a simulation design. Also simulated was a situation in which two assumptions were not met simultaneously. (Author/BW)
Descriptors: Analysis of Variance, Computer Simulation, Hypothesis Testing, Research Methodology

Rasmussen, Jeffrey Lee; Dunlap, William P. – Educational and Psychological Measurement, 1991
Results of a Monte Carlo study with 4 populations (3,072 conditions) indicate that when distributions depart markedly from normality, nonparametric analysis and parametric analysis of transformed data show superior power to parametric analysis of raw data. Under conditions studied, parametric analysis of transformed data is more powerful than…
Descriptors: Comparative Analysis, Computer Simulation, Monte Carlo Methods, Power (Statistics)
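"Parametric analysis of transformed data" in this context commonly means applying the usual t or F test to rank-transformed scores. A midrank helper is the core ingredient (a sketch, assuming the standard rank-transform approach with ties given their average rank):

```python
def midranks(values):
    """Ranks 1..n, with tied values given their average (mid) rank."""
    order = sorted(range(len(values)), key=values.__getitem__)
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of values tied with values[order[i]].
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

ranks = midranks([10, 20, 20, 30])  # [1.0, 2.5, 2.5, 4.0]
```

Replacing the raw scores with these ranks and running the ordinary parametric test yields a procedure closely related to the Mann-Whitney/Kruskal-Wallis family.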

Zimmerman, Donald W.; And Others – Educational and Psychological Measurement, 1993
Coefficient alpha was examined through computer simulation as an estimate of test reliability under violation of two assumptions. Coefficient alpha underestimated reliability under violation of the assumption of essential tau-equivalence of subtest scores and overestimated it under violation of the assumption of uncorrelated subtest error scores.…
Descriptors: Computer Simulation, Estimation (Mathematics), Mathematical Models, Robustness (Statistics)
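Coefficient alpha itself is a short computation over subtest scores; a minimal sketch with illustrative data:

```python
from statistics import pvariance

def coefficient_alpha(items):
    """Coefficient (Cronbach's) alpha.
    items: one list of scores per subtest, all the same length
    (one entry per examinee)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(pvariance(s) for s in items) / pvariance(totals))

alpha = coefficient_alpha([[1, 2, 3, 4], [2, 1, 4, 3]])  # 0.75 for these two subtests
```

Identical (perfectly parallel) subtests give alpha = 1; the violations studied in the article push the estimate below or above the true reliability.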

Hanges, Paul J.; And Others – Educational and Psychological Measurement, 1991
Whether it is possible to develop a classification function that identifies the underlying range restriction from sample information alone was investigated in a simulation. Results indicate that such a function is possible. The procedure was found to be relatively accurate, robust, and powerful. (SLD)
Descriptors: Classification, Computer Simulation, Equations (Mathematics), Mathematical Models