Castellano, Katherine E.; McCaffrey, Daniel F.; Lockwood, J. R. – Journal of Educational Measurement, 2023
The simple average of student growth scores is often used in accountability systems, but it can be problematic for decision making. When computed from a small or moderate number of students, it can be sensitive to the sample, resulting in inaccurate representations of student growth, low year-to-year stability, and inequities for…
Descriptors: Academic Achievement, Accountability, Decision Making, Computation
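The instability this abstract describes can be illustrated with a small simulation; the population values, sample sizes, and replication count below are all hypothetical, chosen only to show how the simple average behaves at different school sizes.

```python
import random
import statistics

def mean_growth(scores):
    """Simple average of student growth scores, as used in accountability systems."""
    return sum(scores) / len(scores)

random.seed(0)
# Hypothetical population of student growth scores (standardized scale).
population = [random.gauss(0, 1) for _ in range(10_000)]

def sampling_sd(n, reps=1000):
    """Spread of the simple average across repeated samples of size n."""
    means = [mean_growth(random.sample(population, n)) for _ in range(reps)]
    return statistics.stdev(means)

sd_small = sampling_sd(10)   # a small school
sd_large = sampling_sd(200)  # a large school
# The small-n average is several times noisier, which drives the low
# year-to-year stability the abstract notes.
```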
Guo, Hongwen; Dorans, Neil J. – Journal of Educational Measurement, 2020
We make a distinction between the operational practice of using an observed score to assess differential item functioning (DIF) and the concept of departure from measurement invariance (DMI) that conditions on a latent variable. DMI and DIF indices of effect sizes, based on the Mantel-Haenszel test of common odds ratio, converge under restricted…
Descriptors: Weighted Scores, Test Items, Item Response Theory, Measurement
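The Mantel-Haenszel common odds ratio mentioned in the abstract can be computed directly from the 2x2 tables at each observed-score stratum; the counts below are invented for illustration, and the delta-scale conversion follows the usual ETS convention.

```python
import math

def mantel_haenszel_odds_ratio(tables):
    """Mantel-Haenszel common odds ratio across score strata.

    Each table is (a, b, c, d):
      a = reference-group correct, b = reference-group incorrect,
      c = focal-group correct,     d = focal-group incorrect.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Hypothetical counts for one item at three observed-score strata.
strata = [(30, 10, 25, 15), (40, 20, 35, 25), (50, 30, 45, 35)]
alpha_mh = mantel_haenszel_odds_ratio(strata)
delta_mh = -2.35 * math.log(alpha_mh)  # ETS delta-scale DIF effect size
```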
Pokropek, Artur; Borgonovi, Francesca – Journal of Educational Measurement, 2020
This article presents the pseudo-equivalent group approach and discusses how it can enhance the quality of linking in the presence of nonequivalent groups. The pseudo-equivalent group approach achieves pseudo-equivalence through propensity score reweighting techniques. We use it to perform linking to establish scale concordance between two…
Descriptors: Foreign Countries, Secondary School Students, Achievement Tests, International Assessment
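A crude sketch of the reweighting idea: instead of fitting a propensity model, the toy function below matches groups on discrete covariate cells, which is the simplest special case of propensity-score reweighting; the cell labels and counts are hypothetical.

```python
from collections import Counter

def pseudo_equivalence_weights(group_a, group_b):
    """Weights that make group B's covariate distribution match group A's.

    Each element is a discrete covariate-cell label; the weight for a cell
    is the ratio of its proportion in A to its proportion in B.
    """
    pa, pb = Counter(group_a), Counter(group_b)
    na, nb = len(group_a), len(group_b)
    return {cell: (pa[cell] / na) / (pb[cell] / nb) for cell in pb}

# Hypothetical cells (e.g., gender x grade) for two nonequivalent groups.
group_a = ["f9"] * 40 + ["m9"] * 60
group_b = ["f9"] * 55 + ["m9"] * 45
w = pseudo_equivalence_weights(group_a, group_b)
# After weighting, B's effective "f9" share is 0.55 * w["f9"] = 0.40, matching A.
```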
Kim, Kyung Yong; Lee, Won-Chan – Journal of Educational Measurement, 2018
Reporting confidence intervals with test scores helps test users make important decisions about examinees by providing information about the precision of test scores. Although a variety of estimation procedures based on the binomial error model are available for computing intervals for test scores, these procedures assume that items are randomly…
Descriptors: Weighted Scores, Error of Measurement, Test Use, Decision Making
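Under the binomial error model the abstract refers to, an approximate interval for an examinee's domain score follows from the binomial standard error; the Wald form below is the simplest such interval, and the score values are made up.

```python
import math

def binomial_score_interval(correct, n_items, z=1.96):
    """Approximate 95% CI for a domain score under the binomial error model.

    Items are treated as a random sample from the domain, so the observed
    proportion correct p has standard error sqrt(p * (1 - p) / n_items).
    """
    p = correct / n_items
    se = math.sqrt(p * (1 - p) / n_items)
    return max(0.0, p - z * se), min(1.0, p + z * se)

lo, hi = binomial_score_interval(32, 40)  # examinee answered 32 of 40 items
```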

Raffeld, Paul – Journal of Educational Measurement, 1975
Results support the contention that a Guttman-weighted objective test can have psychometric properties that are superior to those of its unweighted counterpart, as long as omissions do not exist or are assigned a value equal to the mean of the k item alternative weights. (Author/BJG)
Descriptors: Multiple Choice Tests, Predictive Validity, Test Reliability, Test Validity
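The condition in the abstract, omissions scored at the mean of the k item alternative weights, is easy to make concrete; the option weights below are hypothetical Guttman-type weights, not values from the study.

```python
def guttman_weighted_score(responses, option_weights):
    """Score a test with Guttman-type option weights.

    responses: the option chosen per item, or None for an omission.
    option_weights: per item, a dict mapping each option to its weight.
    An omission is assigned the mean of that item's option weights,
    the condition under which the abstract reports superiority.
    """
    total = 0.0
    for choice, weights in zip(responses, option_weights):
        if choice is None:
            total += sum(weights.values()) / len(weights)
        else:
            total += weights[choice]
    return total

# Hypothetical weights for two 3-option items; item 2 is omitted.
weights = [{"a": 3, "b": 1, "c": 0}, {"a": 0, "b": 2, "c": 4}]
score = guttman_weighted_score(["a", None], weights)  # 3 + mean(0, 2, 4) = 5.0
```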

Patnaik, Durgadas; Traub, Ross E. – Journal of Educational Measurement, 1973
Two conventional scores and a weighted score on a group test of general intelligence were compared for reliability and predictive validity. (Editor)
Descriptors: Correlation, Intelligence Tests, Measurement, Predictive Validity

Hendrickson, Gerry F. – Journal of Educational Measurement, 1971
Descriptors: Correlation, Guessing (Tests), Multiple Choice Tests, Sex Differences

Collet, Leverne S. – Journal of Educational Measurement, 1971
The purpose of this paper was to provide an empirical test of the hypothesis that elimination scores are more reliable and valid than classical corrected-for-guessing scores or weighted-choice scores. The evidence presented supports the hypothesized superiority of elimination scoring. (Author)
Descriptors: Evaluation, Guessing (Tests), Multiple Choice Tests, Scoring Formulas
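The two scoring rules being compared can be sketched as follows; the response data are invented, and the elimination rule follows the usual Coombs convention of one point per wrong option eliminated, with a k - 1 point penalty for eliminating the keyed answer.

```python
def formula_score(right, wrong, k):
    """Classical correction-for-guessing score: R - W / (k - 1)."""
    return right - wrong / (k - 1)

def elimination_score(responses):
    """Elimination scoring (Coombs): examinees cross out options they
    believe are wrong.  Each wrong option eliminated earns +1; eliminating
    the keyed answer costs k - 1 points.

    Each response is (eliminated_options, keyed_answer, k).
    """
    total = 0
    for eliminated, key, k in responses:
        if key in eliminated:
            total -= k - 1
        else:
            total += len(eliminated)
    return total

# Hypothetical 4-option items: two safe eliminations, then a fatal one.
responses = [({"b", "c"}, "a", 4), ({"a"}, "a", 4)]
```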

Reilly, Richard R.; Jackson, Rex – Journal of Educational Measurement, 1973
The present study suggests that although the reliability of an academic aptitude test given under formula-score conditions can be increased substantially through empirical option weighting, much of the increase is due to the keying procedure's capitalization on omitting tendencies, which are reliable but not valid. (Author)
Descriptors: Aptitude Tests, Correlation, Factor Analysis, Item Sampling

Jacobs, Stanley S. – Journal of Educational Measurement, 1971
Descriptors: Guessing (Tests), Individual Differences, Measurement Techniques, Multiple Choice Tests

Kane, Michael T.; And Others – Journal of Educational Measurement, 1989
This paper develops a multiplicative model as a means of combining ratings of criticality and frequency of various activities involved in job analyses. The model incorporates adjustments to ensure that effective weights of criticality and frequency are appropriate. An example of the model's use is presented. (TJH)
Descriptors: Critical Incidents Method, Higher Education, Job Analysis, Licensing Examinations (Professions)
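A minimal sketch of a multiplicative combination rule in the spirit of the abstract: each task's weight is the product of its criticality and frequency ratings, with a hypothetical exponent standing in for the adjustments the authors describe; the ratings are invented.

```python
def multiplicative_task_weights(ratings, criticality_power=1.0):
    """Combine mean criticality and frequency ratings multiplicatively.

    raw_i = criticality_i ** p * frequency_i, then normalize to sum to 1.
    The exponent p is a hypothetical knob for adjusting criticality's
    effective weight relative to frequency.
    """
    raw = [c ** criticality_power * f for c, f in ratings]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical (criticality, frequency) mean ratings for three tasks.
weights = multiplicative_task_weights([(3.0, 2.0), (5.0, 1.0), (2.0, 4.0)])
```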

Plake, Barbara S.; Kane, Michael T. – Journal of Educational Measurement, 1991
Several methods for determining a passing score on an examination from individual raters' estimates of minimal pass levels were compared through simulation. The methods differed in the weight that each item's estimate received in the aggregation process. Reasons why the simplest procedure is preferred are discussed. (SLD)
Descriptors: Comparative Analysis, Computer Simulation, Cutting Scores, Estimation (Mathematics)
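The simplest of the compared procedures, an unweighted average of raters' minimal pass levels summed over items, can be sketched as below; the rater-by-item matrix is hypothetical.

```python
def passing_score_simple(mpl_matrix):
    """Unweighted aggregation: average the raters' minimal pass levels
    for each item, then sum the item averages to get the cut score."""
    n_raters = len(mpl_matrix)
    n_items = len(mpl_matrix[0])
    return sum(
        sum(row[i] for row in mpl_matrix) / n_raters
        for i in range(n_items)
    )

# Hypothetical minimal pass levels: 3 raters x 4 items, each the judged
# probability that a minimally competent examinee answers the item correctly.
mpls = [
    [0.6, 0.7, 0.5, 0.8],
    [0.5, 0.6, 0.6, 0.9],
    [0.7, 0.8, 0.4, 0.7],
]
cut_score = passing_score_simple(mpls)  # sum of the four item means
```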

Kansup, Wanlop; Hakstian, A. Ralph – Journal of Educational Measurement, 1975
Effects of logically weighting incorrect item options in conventional tests and different scoring functions with confidence tests on reliability and validity were examined. Ninth graders took conventionally administered Verbal and Mathematical Reasoning tests, scored conventionally and by a procedure assigning degree-of-correctness weights to…
Descriptors: Comparative Analysis, Confidence Testing, Junior High School Students, Multiple Choice Tests

McKinley, Robert L. – Journal of Educational Measurement, 1988
Six procedures for combining sets of item response theory (IRT) item parameter estimates from different samples were evaluated using real and simulated response data. Results support use of covariance matrix-weighted averaging and a procedure using sample-size-weighted averaging of estimated item characteristic curves at the center of the ability…
Descriptors: College Entrance Examinations, Comparative Analysis, Computer Simulation, Estimation (Mathematics)
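The curve-averaging idea can be sketched at a single ability point: weight each sample's estimated item characteristic curve by its sample size and average. The 3PL form, parameter values, and sample sizes below are hypothetical; the procedure in the study averages over ability points near the center of the distribution rather than a single point.

```python
import math

def icc_3pl(theta, a, b, c):
    """Three-parameter logistic item characteristic curve."""
    return c + (1 - c) / (1 + math.exp(-1.7 * a * (theta - b)))

def combine_icc_weighted(estimates, theta=0.0):
    """Sample-size-weighted average of estimated ICCs at one ability point.

    estimates: (n, a, b, c) item-parameter estimates from different samples.
    """
    total_n = sum(n for n, *_ in estimates)
    return sum(n * icc_3pl(theta, a, b, c) for n, a, b, c in estimates) / total_n

# Hypothetical estimates of the same item from two calibration samples.
p_combined = combine_icc_weighted([(500, 1.0, 0.0, 0.2), (1500, 1.2, 0.2, 0.2)])
```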