Showing 1 to 15 of 17 results
Peer reviewed
Plucker, Jonathan A.; Qian, Meihua; Schmalensee, Stephanie L. – Creativity Research Journal, 2014
In recent years, the social sciences have seen a resurgence in the study of divergent thinking (DT) measures. However, many of these recent advances have focused on abstract, decontextualized DT tasks (e.g., list as many things as you can think of that have wheels). This study provides a new perspective by exploring the reliability and validity…
Descriptors: Creative Thinking, Creativity Tests, Scoring Formulas, Evaluation Methods
Peer reviewed
Campbell, Brian; Wilson, Bradley J. – Journal of School Psychology, 1986
Investigated Kaufman's procedures for determining intersubtest scatter on the Wechsler Intelligence Scale for Children-Revised by means of Sattler's revised tables for determining significant subtest fluctuations. Results indicated that Sattler's revised tables yielded more conservative estimates of subtest scatter than those originally reported…
Descriptors: Intelligence Tests, Scoring Formulas, Statistical Analysis, Statistical Distributions
Peer reviewed
Clampit, M. K.; Silver, Stephen J. – Journal of School Psychology, 1986
Presents four tables for the statistical interpretation of factor scores on the Wechsler Intelligence Scale for Children-Revised. Provides the percentile equivalents of factor scores; the significance of differences between factor scores; the frequency with which specified discrepancies occur; the significance of differences between a factor score…
Descriptors: Factor Analysis, Intelligence Tests, Scores, Scoring Formulas
Berk, Ronald A. – 1980
Seventeen statistics for measuring the reliability of criterion-referenced tests were critically reviewed. The review was organized into two sections: (1) a discussion of preliminary considerations to provide a foundation for choosing the appropriate category of "reliability" (threshold loss function, squared-error loss function, or…
Descriptors: Criterion Referenced Tests, Cutting Scores, Scoring Formulas, Statistical Analysis
Boldt, Robert F. – 1971
This paper presents the development of scoring functions for use in conjunction with standard multiple-choice items. In addition to the usual indication of the correct alternative, the examinee is to indicate his personal probability of the correctness of his response. Both linear and quadratic polynomial scoring functions are examined for…
Descriptors: Confidence Testing, Guessing (Tests), Multiple Choice Tests, Response Style (Tests)
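The specific linear and quadratic polynomial scoring functions Boldt examines are not given in the abstract; as a general illustration of the quadratic case, a Brier-type scoring rule for confidence-marked multiple-choice items can be sketched as follows (function and variable names are illustrative, not taken from the paper):

```python
def quadratic_score(probs, correct_index):
    """Quadratic (Brier-type) score for a confidence-marked item.

    The examinee reports a probability for each alternative; the score
    2*p_correct - sum(p_i^2) is a proper scoring rule, so honest
    probability reporting maximizes the expected score.
    """
    return 2 * probs[correct_index] - sum(p * p for p in probs)

# Certain and correct scores 1.0; a uniform guess over 4 choices scores 0.25
print(quadratic_score([1.0, 0.0, 0.0, 0.0], 0))  # 1.0
print(quadratic_score([0.25] * 4, 0))            # 0.25
```

The properness of the rule is the design point: under simple number-right scoring, an examinee maximizes expected score by betting everything on one alternative, whereas this quadratic rule rewards reporting genuine partial knowledge.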
Halperin, Silas – 1973
Factor loadings, used directly or as the basis of binary values, are not appropriate as weights to produce component scores from a rotated solution. A series of examples showing the results of an incorrect measurement of components is given. Several correlation matrices were taken from books on factor analysis and multivariate analysis. Each…
Descriptors: Correlation, Factor Analysis, Orthogonal Rotation, Research Reports
Peer reviewed
Frary, Robert B.; Hutchinson, T.P. – Educational and Psychological Measurement, 1982
Alternate versions of Hutchinson's theory were compared, and one which implies the existence of partial knowledge was found to be better than one which implies that an appropriate measure of ability is obtained by applying the conventional correction for guessing. (Author/PN)
Descriptors: Guessing (Tests), Latent Trait Theory, Multiple Choice Tests, Scoring Formulas
Lowry, Stephen R. – 1977
The effects of luck and misinformation on the ability of multiple-choice test scores to estimate examinee ability were investigated. Two measures of examinee ability were defined. Misinformation was shown to have little effect on the ability of raw scores, and a substantial effect on the ability of corrected-for-guessing scores, to estimate examinee ability.…
Descriptors: Ability, College Students, Guessing (Tests), Multiple Choice Tests
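The "conventional correction for guessing" referred to in the two entries above has a standard form, R − W/(k − 1). A minimal sketch, assuming k-alternative items and treating omitted items as neutral:

```python
def corrected_score(num_right, num_wrong, num_choices):
    """Conventional correction-for-guessing formula: R - W/(k - 1).

    Wrong answers are assumed to arise from blind guessing among k
    alternatives, so each expected lucky guess is offset by deducting
    a fraction of the wrong answers; omits neither add nor subtract.
    """
    return num_right - num_wrong / (num_choices - 1)

# 40 right and 9 wrong on 4-choice items: 40 - 9/3 = 37.0
print(corrected_score(40, 9, 4))  # 37.0
```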
Bayuk, Robert J. – 1973
An investigation was conducted to determine the effects of response-category weighting and item weighting on reliability and predictive validity. Response-category weighting refers to scoring in which, for each category (including omit and "not read"), a weight is assigned that is proportional to the mean criterion score of examinees selecting…
Descriptors: Aptitude Tests, Correlation, Predictive Validity, Research Reports
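Response-category weighting as described in this abstract can be sketched directly: each category's weight is proportional to the mean criterion score of the examinees who selected it. The names and data below are illustrative only:

```python
from collections import defaultdict

def category_weights(choices, criterion_scores):
    """Weight each response category by the mean criterion score of the
    examinees who selected it (a sketch of response-category weighting;
    "omit" and "not read" are simply additional categories).
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for choice, score in zip(choices, criterion_scores):
        totals[choice] += score
        counts[choice] += 1
    return {c: totals[c] / counts[c] for c in totals}

weights = category_weights(["A", "B", "A", "omit"], [3.0, 1.0, 5.0, 2.0])
print(weights)  # {'A': 4.0, 'B': 1.0, 'omit': 2.0}
```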
Powell, J. C. – 1979
The educational significance of wrong answers on multiple choice tests was investigated in over 4,000 subjects, aged 7 to 20. Gorham's Proverbs Test--which requires the interpretation of a proverb sentence--was administered and repeated five months later. Four questions were addressed: (1) what can the pattern of answer choice, across age, using…
Descriptors: Age Differences, Cognitive Development, Cognitive Processes, Elementary Secondary Education
Echternacht, Gary; Plas, Jeanne M. – NCME, 1977
While most school districts believe they understand grade equivalent scores, teachers, parents, and measurement specialists frequently misinterpret this apparently simple statistical expression. Echternacht's article describes the construction, application, and interpretation of grade equivalent scores from the test publisher's perspective.…
Descriptors: Achievement Rating, Achievement Tests, Elementary Education, Grade Equivalent Scores
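Grade equivalent scores of the kind this article discusses are typically built by interpolating a raw score against the median raw scores of norm groups at successive grade levels. A simplified sketch of that interpolation, with invented norming data:

```python
def grade_equivalent(raw_score, norms):
    """Linearly interpolate a raw score against (grade, median raw score)
    norming points -- the basic mechanism behind grade equivalents.

    `norms` must be sorted by grade with increasing median scores.
    """
    for (g0, m0), (g1, m1) in zip(norms, norms[1:]):
        if m0 <= raw_score <= m1:
            return g0 + (g1 - g0) * (raw_score - m0) / (m1 - m0)
    raise ValueError("raw score outside the norming range")

# Hypothetical norms: median raw score 20 at grade 3.0, 30 at grade 4.0
norms = [(3.0, 20), (4.0, 30), (5.0, 38)]
print(grade_equivalent(25, norms))  # 3.5
```

The interpolation itself is one source of the misinterpretation the article addresses: a grade equivalent of 3.5 means only that the raw score falls midway between the grade-3 and grade-4 medians, not that the examinee performs at mid-third-grade level on third-grade material.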
Lawrence, Ida M.; Schmidt, Amy Elizabeth – College Entrance Examination Board, 2001
The SAT® I: Reasoning Test is administered seven times a year. Primarily for security purposes, several different test forms are given at each administration. How is it possible to compare scores obtained from different test forms and from different test administrations? The purpose of this paper is to provide an overview of the statistical…
Descriptors: Scores, Comparative Analysis, Standardized Tests, College Entrance Examinations
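One standard technique behind this kind of cross-form comparability is linear (mean-sigma) equating, which maps a score on form X to the form-Y score occupying the same standardized position. The summary statistics below are invented for illustration; the actual SAT procedures described in the paper are more elaborate:

```python
def linear_equate(x, mean_x, sd_x, mean_y, sd_y):
    """Mean-sigma linear equating: a form-X score maps to the form-Y
    score with the same z-score in the reference population."""
    return mean_y + sd_y * (x - mean_x) / sd_x

# A score of 60 on a form with mean 50, SD 10 (z = 1.0) maps to
# 55 + 8 * 1.0 = 63.0 on a form with mean 55, SD 8.
print(linear_equate(60, 50, 10, 55, 8))  # 63.0
```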
Sands, William A. – 1975
In order to develop tools for use in the selection and vocational-educational guidance of U.S. Naval Academy midshipmen, three empirically based scales were developed using the Strong Vocational Interest Blank (SVIB) to predict three criteria: (1) disenrollment for academic reasons, (2) disenrollment for motivational reasons, and (3)…
Descriptors: Admission (School), Career Guidance, College Students, Comparative Analysis
Warm, Thomas A. – 1978
This primer is an introduction to item response theory (also called item characteristic curve theory, or latent trait theory) as it is used most commonly--for scoring multiple choice achievement or aptitude tests. Written for the testing practitioner with minimum training in statistics and psychometrics, it presents and illustrates the basic…
Descriptors: Ability Identification, Achievement Tests, Adaptive Testing, Aptitude Tests
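The basic object in the item response theory scoring that Warm's primer introduces is the item characteristic curve. A minimal three-parameter logistic sketch, with a crude maximum-likelihood ability estimate by grid search (the item parameters are invented for illustration):

```python
import math

def p_correct(theta, a, b, c):
    """Three-parameter logistic (3PL) item characteristic curve:
    probability of a correct response at ability theta, with
    discrimination a, difficulty b, and guessing asymptote c."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def mle_theta(responses, items, grid=None):
    """Crude maximum-likelihood ability estimate over a theta grid.

    `responses` are 0/1 item scores aligned with `items`, a list of
    (a, b, c) parameter tuples.
    """
    grid = grid or [t / 100 for t in range(-400, 401)]
    def loglik(theta):
        total = 0.0
        for u, (a, b, c) in zip(responses, items):
            p = p_correct(theta, a, b, c)
            total += math.log(p) if u else math.log(1 - p)
        return total
    return max(grid, key=loglik)

items = [(1.2, -1.0, 0.2), (1.0, 0.0, 0.2), (0.8, 1.0, 0.2)]
print(mle_theta([1, 1, 0], items))  # prints the most likely ability
```

Unlike number-right scoring, this estimate depends on *which* items were answered correctly, since each item contributes to the likelihood through its own curve.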
Haladyna, Thomas – 1975
A central problem for the user of domain-referenced tests in instruction is deciding who has passed and who has failed. Two procedures were presented and discussed. The first, employing classical test theory, was found to be more useful for larger domains and where the passing standard is 70 percent or less. The sampling procedure suggested by…
Descriptors: Academic Achievement, Academic Standards, Criterion Referenced Tests, Decision Making Skills