Anderson, Jonathan – Journal of Educational Data Processing, 1973
Describes some recent developments in computer programming taking place in Australia in the field of educational testing. Topics discussed include item banking and computer-assembled tests, test scoring for multiple choice and open-ended tests, item and test analysis, and test reporting. (Author/DN)
Descriptors: Computer Assisted Testing, Computer Oriented Programs, Computers, Educational Testing
Peer reviewed
Broussard, Rolland L.; Mallett, Patrick W. – Educational and Psychological Measurement, 1972
Range of analysis available, from simple scoring to corrected scores with item analysis, makes the program attractive to both inexperienced and experienced test administrators. (Authors)
Descriptors: Computer Programs, Input Output, Item Analysis, Program Descriptions
Peer reviewed
Scott, William A. – Educational and Psychological Measurement, 1972
Descriptors: Item Sampling, Mathematical Applications, Scoring Formulas, Statistical Analysis
Peer reviewed
Gleser, Leon Jay – Educational and Psychological Measurement, 1972
Paper is concerned with the effect that ipsative scoring has upon a commonly used index of between-subtest correlation. (Author)
Descriptors: Comparative Analysis, Forced Choice Technique, Mathematical Applications, Measurement Techniques
Peer reviewed
Evans, Franklin R.; Reilly, Richard R. – Journal of Educational Measurement, 1972
Study to determine whether potential bias exists in the Law School Admission Test (LSAT); specifically, whether fee-free center candidates complete the test in the time available in as large a proportion as regular center candidates do. (MB)
Descriptors: Black Students, Reaction Time, Response Style (Tests), Scoring
Peer reviewed
Samejima, Fumiko – Psychometrika, 1972
This paper proposes a general model for free-response data collected for measuring a specified unidimensional psychological process; systematizes situations which vary with respect to the scoring level of items; and establishes general conditions for the operating characteristic of an item response category to provide a unique maximum likelihood…
Descriptors: Mathematical Applications, Mathematical Models, Mathematics, Measurement Techniques
Peer reviewed
Collet, Leverne S. – Journal of Educational Measurement, 1971
The purpose of this paper was to provide an empirical test of the hypothesis that elimination scores are more reliable and valid than classical corrected-for-guessing scores or weighted-choice scores. The evidence presented supports the hypothesized superiority of elimination scoring. (Author)
Descriptors: Evaluation, Guessing (Tests), Multiple Choice Tests, Scoring Formulas
Peer reviewed
Menacker, Julius; And Others – College and University, 1971
Descriptors: Academic Ability, Academic Achievement, Admission (School), Admission Criteria
Elashoff, Janet D. – Amer Educ Res J, 1969
Research carried out at the Stanford Center for Research and Development in Teaching (Stanford University), pursuant to a contract with the U.S. Office of Education under the provisions of the Cooperative Research Program.
Descriptors: Analysis of Covariance, Analysis of Variance, Correlation, Factor Analysis
Jenkins, Janet – Media in Education and Development, 1983
A description of MAIL (Micro-Assisted Learning), a microcomputer system for distance teaching which corrects mailed-in tests and generates letters commenting on each of the answers, is used to identify criteria which will help determine whether an innovation will be successful. These criteria include accessibility, operational ease, and learner…
Descriptors: Adult Education, Computer Oriented Programs, Distance Education, Foreign Countries
Peer reviewed
Tatsuoka, Kikumi K.; Tatsuoka, Maurice M. – Journal of Educational Measurement, 1983
This study introduces the individual consistency index (ICI), which measures the extent to which patterns of responses to parallel sets of items remain consistent over time. ICI is used as an error diagnostic tool to detect aberrant response patterns resulting from the consistent application of erroneous rules of operation. (Author/PN)
Descriptors: Achievement Tests, Algorithms, Error Patterns, Measurement Techniques
Peer reviewed
Chase, Clinton I. – Journal of Educational Measurement, 1983
Proposition analysis was used to equate the text base of two essays with different readability levels. Essays that were easier to read were given higher scores than essays that were difficult to read. The results appear to identify another noncontent influence on essay test scores, leaving increasingly less variance for differences in content. (Author/PN)
Descriptors: Content Analysis, Difficulty Level, Essay Tests, Higher Education
Peer reviewed
McGrath, Robert E. V.; Burkhart, Barry R. – Journal of Clinical Psychology, 1983
Assessed whether accounting for variables in the scoring of the Social Readjustment Rating Scale (SRRS) would improve the predictive validity of the inventory. Results from 107 sets of questionnaires showed that income and level of education are significant predictors of the capacity to cope with stress. (JAC)
Descriptors: Adults, Coping, Educational Attainment, Income
van den Brink, Wulfert – Evaluation in Education: International Progress, 1982
Binomial models for domain-referenced testing are compared, emphasizing the assumptions underlying the beta-binomial model. Advantages and disadvantages are discussed. A proposed item sampling model is presented which takes the effect of guessing into account. (Author/CM)
Descriptors: Comparative Analysis, Criterion Referenced Tests, Item Sampling, Measurement Techniques
Peer reviewed
Stewart, Michael J.; Blair, William O. – Perceptual and Motor Skills, 1982
Raters' agreement and the relative consistency of diving judges at a boys' competition were analyzed using intraclass correlations within 16 position x type combinations. Judges' variance was significant for 5 of the 16 combinations. Point estimates were generally greater for consistency than for raters' agreement about scores. (Author/CM)
Descriptors: Analysis of Variance, Competitive Selection, Correlation, Decision Making