Showing all 12 results
Peer reviewed
Gardner, Robert C.; Erdle, Stephen – Educational and Psychological Measurement, 1986
This article evaluated criticisms by Stevens and Aleamoni (1986) of an article by Gardner and Erdle (1984) on aggregation using either raw or standard scores. It was demonstrated that their criticisms were unfounded. (Author)
Descriptors: Correlation, Factor Analysis, Raw Scores, Scores
Peer reviewed
Zimmerman, Donald W. – Journal of Experimental Education, 1986
A computer program randomly sampled ordered pairs of scores from known populations that departed from bivariate normal form and calculated correlation coefficients from sample values. Hypotheses were tested: (1) that population correlations are zero, using the t statistic; and (2) that population correlations have non-zero values, using the r to z…
Descriptors: Correlation, Hypothesis Testing, Sampling, Statistical Distributions
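The two procedures the abstract names are standard. A minimal sketch of both test statistics (illustrative code, not taken from the article; the function names are ours):

```python
import math

def t_test_rho_zero(r, n):
    """t statistic for H0: rho = 0, with n - 2 degrees of freedom."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

def z_test_rho(r, rho0, n):
    """Fisher r-to-z statistic for H0: rho = rho0 (approx. standard normal)."""
    z_r = math.atanh(r)        # Fisher transform of the sample correlation
    z_rho = math.atanh(rho0)   # Fisher transform of the hypothesized value
    return (z_r - z_rho) * math.sqrt(n - 3)

# Example: r = 0.50 from a sample of n = 30 pairs
print(t_test_rho_zero(0.50, 30))   # about 3.06
print(z_test_rho(0.50, 0.30, 30))  # about 1.25
```

Both statistics rest on bivariate normality, which is exactly the assumption the simulation study stresses by sampling from non-normal populations.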
Peer reviewed
Stevens, Joseph J.; Aleamoni, Lawrence M. – Educational and Psychological Measurement, 1986
Prior standardization of scores when an aggregate score is formed has been criticized. This article presents a demonstration of the effects of differential weighting of aggregate components that clarifies the need for prior standardization. The role of standardization in statistics and the use of aggregate scores in research are discussed.…
Descriptors: Correlation, Error of Measurement, Factor Analysis, Raw Scores
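The weighting effect at issue can be seen in a small simulation (an illustrative sketch, not the article's own demonstration): when raw scores are summed, the component with the larger standard deviation dominates the aggregate, while standardizing first weights the components equally.

```python
import math
import random

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def standardize(x):
    """Rescale a sequence to mean 0, SD 1."""
    n = len(x)
    mx = sum(x) / n
    sd = math.sqrt(sum((v - mx) ** 2 for v in x) / n)
    return [(v - mx) / sd for v in x]

random.seed(1)
n = 5000
a = [random.gauss(0, 1) for _ in range(n)]    # independent component, SD 1
b = [random.gauss(0, 10) for _ in range(n)]   # independent component, SD 10

raw_sum = [x + y for x, y in zip(a, b)]
print(corr(raw_sum, a))   # small (population value 1/sqrt(101) ~ 0.10)
print(corr(raw_sum, b))   # large (population value 10/sqrt(101) ~ 0.995)

std_sum = [x + y for x, y in zip(standardize(a), standardize(b))]
print(corr(std_sum, a))   # roughly 1/sqrt(2) ~ 0.71 for each component
print(corr(std_sum, b))
```

Summing raw scores thus implicitly weights each component by its standard deviation, which is the differential-weighting effect the article uses to argue for prior standardization.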
Peer reviewed
Williams, Richard H.; Zimmerman, Donald W. – Journal of Experimental Education, 1982
The reliability of simple difference scores is greater than, less than, or equal to that of residualized difference scores, depending on whether the correlation between pretest and posttest scores is greater than, less than, or equal to the ratio of the standard deviations of pretest and posttest scores. (Author)
Descriptors: Achievement Gains, Comparative Analysis, Correlation, Pretests Posttests
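Under classical true-score assumptions the stated comparison can be checked numerically with the textbook population formulas for the two reliabilities (a sketch using standard classical-test-theory results, not code quoted from the article):

```python
def rel_simple_diff(rxx, ryy, rxy, sx, sy):
    """Reliability of the simple difference D = Y - X (classical theory)."""
    num = sx**2 * rxx + sy**2 * ryy - 2 * rxy * sx * sy
    den = sx**2 + sy**2 - 2 * rxy * sx * sy
    return num / den

def rel_residualized_diff(rxx, ryy, rxy):
    """Reliability of the residualized difference Z = Y - b*X,
    where b is the regression slope of posttest Y on pretest X."""
    return (ryy + rxy**2 * rxx - 2 * rxy**2) / (1 - rxy**2)

# Pretest SD 1, posttest SD 2, so the crossover ratio sx/sy is 0.5.
# Vary the pretest-posttest correlation below, at, and above that ratio.
for rxy in (0.3, 0.5, 0.7):
    d = rel_simple_diff(0.8, 0.8, rxy, 1.0, 2.0)
    z = rel_residualized_diff(0.8, 0.8, rxy)
    print(rxy, round(d, 3), round(z, 3))
```

With rxy below 0.5 the residualized score is more reliable, at 0.5 the two reliabilities coincide, and above 0.5 the simple difference wins, matching the trichotomy the abstract describes.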
Peer reviewed
Harris, Deborah J.; Subkoviak, Michael J. – Educational and Psychological Measurement, 1986
This study examined three statistical methods for selecting items for mastery tests: (1) pretest-posttest; (2) latent trait; and (3) agreement statistics. The correlation between the latent trait method and agreement statistics, proposed here as an alternative, was substantial. Results for the pretest-posttest method confirmed its reputed…
Descriptors: Computer Simulation, Correlation, Item Analysis, Latent Trait Theory
Ackerman, Terry A. – 1987
One of the important underlying assumptions of all item response theory (IRT) models is that of local independence. This assumption requires that the response to an item on a test not be influenced by the response to any other items. This assumption is often taken for granted, with little or no scrutiny of the response process required to answer…
Descriptors: Computer Software, Correlation, Estimation (Mathematics), Latent Trait Theory
McKinley, Robert L.; Reckase, Mark D. – 1984
To assess the effects of correlated abilities on test characteristics, and to explore the effects of correlated abilities on the use of a multidimensional item response theory model which does not explicitly account for such a correlation, two tests were constructed. One had two relatively unidimensional subsets of items; the other had all…
Descriptors: Ability, Correlation, Factor Structure, Item Analysis
Peer reviewed
Cattell, Raymond B.; Krug, Samuel E. – Educational and Psychological Measurement, 1986
Critics have occasionally asserted that the number of factors in the 16PF tests is too large. This study discusses factor-analytic methodology and reviews more than 50 studies in the field. It concludes that the number of important primaries encapsulated in the series is no fewer than the stated number. (Author/JAZ)
Descriptors: Correlation, Cross Cultural Studies, Factor Analysis, Maximum Likelihood Statistics
Becker, Betsy Jane – 1986
This paper discusses distribution theory and power computations for four common "tests of combined significance." These tests are calculated using one-sided sample probabilities or p values from independent studies (or hypothesis tests), and provide an overall significance level for the series of results. Noncentral asymptotic sampling…
Descriptors: Achievement Tests, Correlation, Effect Size, Hypothesis Testing
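One classic member of this family of tests is Fisher's method, which combines k independent one-sided p values into a chi-square statistic with 2k degrees of freedom (a minimal sketch, not the paper's own derivation; the closed-form tail probability below is valid because the degrees of freedom are even):

```python
import math

def fisher_combined(pvalues):
    """Fisher's method: X2 = -2 * sum(ln p_i) is chi-square with
    2k degrees of freedom when all k null hypotheses are true."""
    k = len(pvalues)
    x2 = -2.0 * sum(math.log(p) for p in pvalues)
    # Chi-square survival function with even df = 2k has a closed form:
    # P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    half = x2 / 2.0
    combined_p = math.exp(-half) * sum(half**i / math.factorial(i) for i in range(k))
    return x2, combined_p

# Three studies, none individually significant at .05:
x2, p = fisher_combined([0.08, 0.10, 0.15])
print(round(x2, 2), round(p, 4))  # X2 about 13.45, combined p about 0.036
```

Note how three individually non-significant results combine to an overall significance level below .05, which is the point of such omnibus procedures.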
Peer reviewed
Harrison, David A. – Journal of Educational Statistics, 1986
Multidimensional item response data were created. The strength of a general factor, the number of common factors, the distribution of items loading on common factors, and the number of items in simulated tests were manipulated. LOGIST effectively recovered both item and trait parameters in nearly all of the experimental conditions. (Author/JAZ)
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Correlation
Zwick, Rebecca – 1986
Although perfectly scalable items rarely occur in practice, Guttman's concept of a scale has proved to be valuable to the development of measurement theory. If the score distribution is uniform and there is an equal number of items at each difficulty level, both the elements and the eigenvalues of the Pearson correlation matrix of dichotomous…
Descriptors: Correlation, Difficulty Level, Item Analysis, Latent Trait Theory
Dickinson, Terry L. – 1985
The general linear model was described, and the influence that measurement errors have on model parameters was discussed. In particular, the assumptions of classical true-score theory were used to develop algebraic relationships between the squared multiple correlation coefficient and the regression coefficients in the infallible and fallible…
Descriptors: Analysis of Covariance, Analysis of Variance, Correlation, Error of Measurement
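The fallible/infallible relationships in question rest on the classical attenuation result. A minimal sketch for the simple-correlation case (the paper's multiple-regression algebra generalizes this):

```python
import math

def attenuated(r_true, rel_x, rel_y):
    """Observed correlation between two fallible measures, given the
    true-score correlation and each measure's reliability (Spearman)."""
    return r_true * math.sqrt(rel_x * rel_y)

def disattenuated(r_obs, rel_x, rel_y):
    """Estimate the true-score correlation from a fallible one."""
    return r_obs / math.sqrt(rel_x * rel_y)

# True correlation 0.60, measured with reliabilities 0.70 and 0.80:
r_obs = attenuated(0.60, 0.70, 0.80)
print(round(r_obs, 3))                              # 0.449
print(round(disattenuated(r_obs, 0.70, 0.80), 3))   # 0.6
```

Measurement error shrinks the observed coefficient toward zero, which is why regression coefficients estimated from fallible predictors differ systematically from their infallible counterparts.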