Showing 136 to 150 of 582 results
Peer reviewed
Hughes, Garry L.; Prien, Erich P. – Personnel Psychology, 1986
Investigated the psychometric properties of three methods of scoring a Mixed Standard Scale performance evaluation: a patterned procedure, a simple nonpatterned scoring procedure, and a procedure assigning differential weights to statements on the basis of scale values provided by subject matter experts. Found no differences in the score distribution…
Descriptors: Evaluation Methods, Interrater Reliability, Scoring, Scoring Formulas
Peer reviewed
Boldt, R. R. – Journal of Educational and Psychological Measurement, 1974
Descriptors: Confidence Testing, Guessing (Tests), Scoring Formulas, Testing Problems
Peer reviewed
Sheehan, Daniel S.; Hambleton, Ronald K. – Journal of Educational and Psychological Measurement, 1974
Descriptors: Computer Programs, Scoring, Scoring Formulas, Tests
Peer reviewed
Livingston, Samuel A. – Educational and Psychological Measurement, 1980
A specified minimum performance level can be translated into a minimum passing score for the written test by measuring the performance of students whose written test scores are near the desired cutoff score. Stochastic approximation methods accomplish this purpose. The up-and-down method and the Robbins-Monro process are compared. (Author/RL)
Descriptors: Cutting Scores, Educational Testing, Occupational Tests, Scoring Formulas
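Both procedures the abstract compares are sequential stochastic-approximation schemes: the trial cutoff score is raised when an examinee near it fails the external performance criterion and lowered when one passes, with step sizes that shrink over trials in the Robbins-Monro case. A minimal Python sketch of the Robbins-Monro process (the logistic pass-probability model and all numbers here are illustrative assumptions, not Livingston's data):

```python
import math
import random

def robbins_monro_cutoff(pass_prob, target=0.5, x0=50.0, n_trials=200, seed=0):
    """Stochastic approximation of the written-test score x at which
    P(meets criterion | x) = target. pass_prob(x) is the (hypothetical)
    probability that an examinee with written score x meets the criterion."""
    rng = random.Random(seed)
    x = x0
    for n in range(1, n_trials + 1):
        outcome = 1.0 if rng.random() < pass_prob(x) else 0.0
        x += (10.0 / n) * (target - outcome)  # shrinking Robbins-Monro step
    return x

# Illustrative logistic relation between written score and criterion mastery
p = lambda x: 1.0 / (1.0 + math.exp(-(x - 60.0) / 5.0))
print(round(robbins_monro_cutoff(p), 1))  # estimate of the cutoff (p(x) = 0.5 at x = 60 in this model)
```

The up-and-down method is the fixed-step special case; Robbins-Monro's shrinking steps trade responsiveness for convergence.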
Peer reviewed
Gorsuch, Richard L. – Educational and Psychological Measurement, 1980
Kaiser and Michael reported a formula for factor scores giving an internal consistency reliability and its square root, the domain validity. Using this formula is inappropriate if variables are included which have trivial weights rather than salient weights for the factor for which the score is being computed. (Author/RL)
Descriptors: Factor Analysis, Factor Structure, Scoring Formulas, Test Reliability
Mazer, Irene R. – 1981
The need to determine eligibility for a program for intellectually gifted students resulted in combining deviation scores on achievement, aptitude, ability and motivation measures into a matrix score. These matrix scores and the students' success in the program were determined for present participants. Students were classified as successful or…
Descriptors: Eligibility, Evaluation Methods, Gifted, Scores
Christopherson, Steven L. – 1979
A procedure for creating a numerical index which can be used to score and rank summaries of prose passages is described. The quality of a summary is assumed to be a function of two variables: the number of important ideas mentioned, and the number of words used. Scoring involves multiplying two ratios: a power ratio and an efficiency ratio. The…
Descriptors: Essay Tests, Higher Education, Prose, Scoring Formulas
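The abstract names the two ratios but not their definitions, so the following Python sketch is a hypothetical reconstruction: it takes the power ratio to be the share of the passage's important ideas the summary mentions, and the efficiency ratio to be a wordiness penalty against a reference length. Both definitions are assumptions for illustration only.

```python
def summary_index(ideas_mentioned, total_ideas, words_used, reference_words):
    """Hypothetical reconstruction of a power-times-efficiency summary index.
    power      = share of the passage's important ideas the summary captures
    efficiency = penalty for exceeding a reference length, capped at 1.0
    (both definitions are assumptions; the abstract only names the ratios)."""
    power = ideas_mentioned / total_ideas
    efficiency = min(1.0, reference_words / words_used)
    return power * efficiency

# A summary hitting 8 of 10 important ideas in 150 words, vs. a 100-word reference
print(summary_index(8, 10, 150, 100))
```

A multiplicative index of this shape rewards idea coverage while penalizing padding, which matches the abstract's stated intent of ranking summaries by both completeness and brevity.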
Gerl, Sister Marion Joseph – Journal of Business Education, 1974
The procedures and advantages of the gross speed--two percent-of-error method in scoring typewriting timed writings are presented. The method makes allowance for errors according to the number of opportunities for error. A mailing address for the typing scoring chart and further information on the method is included. (AG)
Descriptors: Business Education, Business Skills, Scoring Formulas, Timed Tests
Peer reviewed
Lord, Frederic M. – Educational and Psychological Measurement, 1973
A group of 21 students was tested under a time limit considerably shorter than should have been allowed. This report describes a tryout of a method for estimating the "power" scores that would have been obtained if the students had had enough time to finish. (Author/CB)
Descriptors: Mathematical Models, Scoring Formulas, Statistical Analysis, Theories
Peer reviewed
Layton, Frances – Alberta Journal of Educational Research, 1973
The purpose of this study was to test a short version of the Stanford-Binet, Form L-M, using a group covering a wide range of age and ability levels, in an attempt to reduce the time involved in administering some of the S-B tests without sacrificing the reported accuracy. (Author/CB)
Descriptors: Intelligence Tests, Scoring Formulas, Tables (Data), Test Construction
Peer reviewed
Sattler, Jerome M.; Ryan, Joseph J. – Journal of Clinical Psychology, 1973
This study investigated how well raters who ranged from undergraduate students to experts in the field of linguistics would agree with the scoring examples given in the WISC manual for selected Vocabulary subtest responses. (Authors)
Descriptors: Evaluation Criteria, Examiners, Response Style (Tests), Scoring Formulas
Peer reviewed
Rozeboom, William W. – Psychometrika, 1979
For idealized item configurations, equal item weights are often virtually as good for a particular predictive purpose as the item weights that are theoretically optimal. What has not been clear, however, is what happens to the similarity when the item configuration's variance structure is complex. (Author/CTM)
Descriptors: Multiple Regression Analysis, Predictor Variables, Scoring Formulas, Weighted Scores
Peer reviewed
Hutchinson, T. P. – Contemporary Educational Psychology, 1980
In scoring multiple-choice tests, a score of 1 is given to right answers, 0 to unanswered questions, and some negative score to wrong answers. This paper discusses the relation of this negative score to the assumption made about the partial knowledge which the subjects may have. (Author/GDC)
Descriptors: Guessing (Tests), Knowledge Level, Multiple Choice Tests, Scoring Formulas
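The classical choice for that negative score is -1/(k-1) for k-option items, which makes the expected gain from blind guessing zero; Hutchinson's point is that other penalties follow from other assumptions about partial knowledge. A sketch of the classical rule in Python:

```python
def formula_score(responses, n_options):
    """Formula scoring for a multiple-choice test: +1 for a right answer,
    0 for an omitted item, and -1/(k-1) for a wrong answer, the classical
    correction for guessing with k-option items."""
    penalty = 1.0 / (n_options - 1)
    score = 0.0
    for r in responses:  # each entry is 'right', 'wrong', or 'omit'
        if r == "right":
            score += 1.0
        elif r == "wrong":
            score -= penalty
    return score

# Five-option items: 3 right, 2 wrong, 1 omitted
print(formula_score(["right", "right", "right", "wrong", "wrong", "omit"], 5))  # 2.5
```

Under this penalty an examinee with no knowledge who guesses on every item expects a score of zero, which is why deviations from -1/(k-1) encode assumptions about partial knowledge.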
Koppelaar, Henk; And Others – Tijdschrift voor Onderwijsresearch, 1977
Using parameter estimates, the computer program calculates the negative hypergeometric distribution and computes the classification proportions: suitable-accepted, suitable-not accepted, not suitable-accepted, and not suitable-not accepted. (RC)
Descriptors: Classification, Computer Programs, Cutting Scores, Scoring Formulas
Peer reviewed
Parker, Kevin R.; Chao, Joseph T.; Ottaway, Thomas A.; Chang, Jane – Journal of Information Technology Education, 2006
The selection of a programming language for introductory courses has long been an informal process involving faculty evaluation, discussion, and consensus. As the number of faculty, students, and language options grows, this process becomes increasingly unwieldy. As it stands, the process currently lacks structure and replicability. Establishing a…
Descriptors: Programming Languages, Introductory Courses, Selection, Criteria