Showing 1 to 15 of 41 results
Peer reviewed
Joseph A. Rios; Jiayi Deng – Educational and Psychological Measurement, 2025
To mitigate the potentially damaging consequences of rapid guessing (RG), a form of noneffortful responding, researchers have proposed a number of scoring approaches. The present simulation study examines the robustness of the most popular of these approaches, the unidimensional effort-moderated (EM) scoring procedure, to multidimensional RG (i.e.,…
Descriptors: Scoring, Guessing (Tests), Reaction Time, Item Response Theory
Peer reviewed
Kroc, Edward; Olvera Astivia, Oscar L. – Educational and Psychological Measurement, 2022
Setting cutoff scores is one of the most common practices when using scales for classification purposes. This process is usually done univariately, with each optimal cutoff value decided sequentially, subscale by subscale. While it is widely known that this process necessarily reduces the probability of "passing" such a test,…
Descriptors: Multivariate Analysis, Cutting Scores, Classification, Measurement
Peer reviewed
Schulte, Niklas; Holling, Heinz; Bürkner, Paul-Christian – Educational and Psychological Measurement, 2021
Forced-choice questionnaires can prevent faking and other response biases typically associated with rating scales. However, the derived trait scores are often unreliable and ipsative, making interindividual comparisons in high-stakes situations impossible. Several studies suggest that these problems vanish if the number of measured traits is high.…
Descriptors: Questionnaires, Measurement Techniques, Test Format, Scoring
Peer reviewed
Balogh, Jennifer; Bernstein, Jared; Cheng, Jian; Van Moere, Alistair; Townshend, Brent; Suzuki, Masanori – Educational and Psychological Measurement, 2012
A two-part experiment is presented that validates a new measurement tool for scoring oral reading ability. Data collected by the U.S. government in a large-scale literacy assessment of adults were analyzed by a system called VersaReader that uses automatic speech recognition and speech processing technologies to score oral reading fluency. In the…
Descriptors: Reading Fluency, Measures (Individuals), Scoring, Reading Ability
Peer reviewed
Wilcox, Rand R. – Educational and Psychological Measurement, 1981
This paper describes and compares procedures for estimating the reliability of proficiency tests that are scored with latent structure models. Results suggest that the predictive estimate is the most accurate of the procedures. (Author/BW)
Descriptors: Criterion Referenced Tests, Scoring, Test Reliability
Peer reviewed
Cuenot, Randall G.; Darbes, Alex – Educational and Psychological Measurement, 1982
Thirty-one clinical psychologists scored Comprehension, Similarities, and Vocabulary subtest items common to the Wechsler Intelligence Scale for Children (WISC) and the Wechsler Intelligence Scale for Children, Revised (WISC-R). The results on interrater scoring agreement suggest that the scoring of these subtests may be less subjective than…
Descriptors: Clinical Psychology, Intelligence Tests, Psychologists, Scoring
Peer reviewed
Gorsuch, Richard L. – Educational and Psychological Measurement, 1980
Kaiser and Michael reported a formula giving the internal consistency reliability of factor scores and its square root, the domain validity. Using this formula is inappropriate if the included variables have trivial rather than salient weights on the factor for which the score is being computed. (Author/RL)
Descriptors: Factor Analysis, Factor Structure, Scoring Formulas, Test Reliability
Peer reviewed
Holmes, Roy A.; And Others – Educational and Psychological Measurement, 1974
Descriptors: Chemistry, Multiple Choice Tests, Scoring Formulas, Test Reliability
Peer reviewed
Echternacht, Gary – Educational and Psychological Measurement, 1975
Estimates for the variances of empirically determined scoring weights are given. It is also shown that test item writers should write distractors that discriminate on the criterion variable when this type of scoring is used. (Author)
Descriptors: Scoring, Statistical Analysis, Test Construction, Test Reliability
Peer reviewed
Burton, Nancy W. – Educational and Psychological Measurement, 1981
This study was concerned with selecting a measure of scorer agreement for use with the National Assessment of Educational Progress. The simple percent of agreement and Cohen's kappa were compared. It was concluded that Cohen's kappa does not add sufficient information to make its calculation worthwhile. (Author/BW)
Descriptors: Educational Assessment, Elementary Secondary Education, Quality Control, Scoring
Peer reviewed
Carroll, C. Dennis – Educational and Psychological Measurement, 1976
A computer program for item evaluation, reliability estimation, and test scoring is described. The program contains a variable format procedure allowing flexible input of responses. Achievement tests and affective scales may be analyzed. (Author)
Descriptors: Achievement Tests, Affective Measures, Computer Programs, Item Analysis
Peer reviewed
Friedland, David L.; Michael, William B. – Educational and Psychological Measurement, 1987
A sample of 153 male police officers served as subjects in a test validation study with two objectives: (1) to compare reliability estimates of a 16-item objective achievement examination scored by the conventional items-right formula and by four different procedures; and (2) to obtain comparative concurrent validity coefficients of scores arising from…
Descriptors: Achievement Tests, Concurrent Validity, Correlation, Police
Peer reviewed
Zimmerman, Donald W. – Educational and Psychological Measurement, 1972
Although a great deal of attention has been devoted over a period of years to the estimation of reliability from item statistics, there are still gaps in the mathematical derivation of the Kuder-Richardson results. The main purpose of this paper is to fill some of these gaps, using language consistent with modern probability theory. (Author)
Descriptors: Mathematical Applications, Probability, Scoring Formulas, Statistical Analysis
Peer reviewed
Bejar, Isaac I.; Weiss, David J. – Educational and Psychological Measurement, 1977
The reliabilities yielded by several differential option weighting scoring procedures were compared with one another and with conventional scoring. Increases in reliability due to differential option weighting were found to be a function of inter-item correlations. Suggestions for the implementation of differential option weighting…
Descriptors: Correlation, Forced Choice Technique, Item Analysis, Scoring Formulas
Peer reviewed
Tinsley, Howard E. A.; And Others – Educational and Psychological Measurement, 1981
Two procedures for scoring the Recreation Experience Preference scales were investigated using data obtained from respondents engaged in outdoor recreational activities. Both procedures yielded acceptable levels of reliability and concurrent validity. When time is unimportant, the scale score strategy is preferred over the domain score strategy.…
Descriptors: Methods, Outdoor Activities, Participant Satisfaction, Recreational Activities