Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 3 |
Since 2006 (last 20 years) | 6 |
Descriptor
Scoring Formulas | 27 |
Test Construction | 27 |
Test Validity | 27 |
Test Reliability | 19 |
Scoring | 10 |
Multiple Choice Tests | 8 |
Test Interpretation | 8 |
Testing | 6 |
Item Analysis | 5 |
Guessing (Tests) | 4 |
Psychometrics | 4 |
Author
Echternacht, Gary | 3 |
Ahmed, Ayesha | 1 |
Atkinson, George F. | 1 |
Brennan, Robert L. | 1 |
Choi, Soo Hyuk | 1 |
Diamond, James J. | 1 |
Doadt, Edward | 1 |
Docktor, Jennifer L. | 1 |
Dornfeld, Jay | 1 |
Frary, Robert B. | 1 |
Frodermann, Evan | 1 |
Publication Type
Reports - Research | 11 |
Journal Articles | 7 |
Speeches/Meeting Papers | 3 |
Reports - Descriptive | 2 |
Guides - Non-Classroom | 1 |
Reports - Evaluative | 1 |
Tests/Questionnaires | 1 |
Education Level
Higher Education | 2 |
Postsecondary Education | 2 |
Secondary Education | 2 |
Elementary Education | 1 |
Elementary Secondary Education | 1 |
High Schools | 1 |
Junior High Schools | 1 |
Middle Schools | 1 |
Location
India | 1 |
Israel | 1 |
Minnesota | 1 |
United Kingdom | 1 |
Assessments and Surveys
Defining Issues Test | 1 |
Graduate Record Examinations | 1 |
Group Embedded Figures Test | 1 |
Learning Style Inventory | 1 |
SAT (College Admission Test) | 1 |
Yun, Young Ho; Kim, Yaeji; Sim, Jin A.; Choi, Soo Hyuk; Lim, Cheolil; Kang, Joon-ho – Journal of School Health, 2018
Background: The objective of this study was to develop the School Health Score Card (SHSC) and validate its psychometric properties. Methods: The development of the SHSC questionnaire included 3 phases: item generation, construction of domains and items, and field testing with validation. To assess the instrument's reliability and validity, we…
Descriptors: School Health Services, Psychometrics, Test Construction, Test Validity
Lee, Minji K.; Sweeney, Kevin; Melican, Gerald J. – Educational Assessment, 2017
This study investigates the relationships among factor correlations, inter-item correlations, and the reliability estimates of subscores, providing a guideline with respect to psychometric properties of useful subscores. In addition, it compares subscore estimation methods with respect to reliability and distinctness. The subscore estimation…
Descriptors: Scores, Test Construction, Test Reliability, Test Validity
Docktor, Jennifer L.; Dornfeld, Jay; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Jackson, Koblar Alan; Mason, Andrew; Ryan, Qing X.; Yang, Jie – Physical Review Physics Education Research, 2016
Problem solving is a complex process valuable in everyday life and crucial for learning in the STEM fields. To support the development of problem-solving skills it is important for researchers and curriculum developers to have practical tools that can measure the difference between novice and expert problem-solving performance in authentic…
Descriptors: Introductory Courses, Physics, Problem Solving, Scoring Rubrics
Gafoor, K. Abdul; Naseer, A. R. – Online Submission, 2015
With a view to support instruction, formative and summative assessment and to provide model handwriting performance for students to compare their own performance, a Malayalam handwriting scale is developed. Data from 2640 school students belonging to Malappuram, Palakkad and Kozhikode districts, sampled by taking 240 students per each grade…
Descriptors: Formative Evaluation, Summative Evaluation, Handwriting, Performance Based Assessment
Ahmed, Ayesha; Pollitt, Alastair – Assessment in Education: Principles, Policy & Practice, 2011
At the heart of most assessments lies a set of questions, and those who write them must achieve two things. Not only must they ensure that each question elicits the kind of performance that shows how good pupils are at the subject, but they must also ensure that each mark scheme gives more marks to those who are…
Descriptors: Academic Achievement, Classification, Educational Quality, Quality Assurance
Frary, Robert B. – 1980
Ordinal response modes for multiple choice tests are those under which the examinee marks one or more choices in an effort to identify the correct choice, or to include it in a proper subset of the choices. Two ordinal response modes, answer-until-correct and Coombs's elimination of choices that examinees identify as wrong, were analyzed for scoring…
Descriptors: Guessing (Tests), Multiple Choice Tests, Responses, Scoring
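The answer-until-correct mode described above is commonly scored with decreasing partial credit for each additional attempt. A minimal sketch of one such credit schedule (the linear schedule here is an illustrative assumption, not the specific rule analyzed by Frary):

```python
def answer_until_correct_score(attempts, num_options):
    """Illustrative answer-until-correct credit for one item:
    full credit for a first-try success, decreasing linearly
    with each extra attempt until it reaches zero.
    (One common convention; actual scoring rules vary by study.)"""
    if not 1 <= attempts <= num_options:
        raise ValueError("attempts must be between 1 and num_options")
    return (num_options - attempts) / (num_options - 1)

# A 4-option item: correct on the first try vs. the last try.
print(answer_until_correct_score(1, 4))  # 1.0
print(answer_until_correct_score(4, 4))  # 0.0
```

Schedules like this reward partial knowledge: an examinee who eliminates two wrong options before succeeding earns more than one who guesses through all four.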
Reilly, Richard R. – 1972
Because previous reports have suggested that the lowered validity of tests scored with empirical option weights might be explained by a capitalization of the keying procedures on omitting tendencies, a procedure was devised to key options empirically with a "correction-for-guessing" constraint. Use of the new procedure with Graduate…
Descriptors: Correlation, Data Analysis, Guessing (Tests), Mathematical Applications

Scott, William A. – Educational and Psychological Measurement, 1972
Descriptors: Item Sampling, Mathematical Applications, Scoring Formulas, Statistical Analysis
Atkinson, George F.; Doadt, Edward – Assessment in Higher Education, 1980
Some perceived difficulties with conventional multiple choice tests are mentioned, and a modified form of examination is proposed. It uses a computer program to award partial marks for partially correct answers, full marks for correct answers, and check for widespread misunderstanding of an item or subject. (MSE)
Descriptors: Achievement Tests, Computer Assisted Testing, Higher Education, Multiple Choice Tests
Rest, James R. – 1975
This paper describes the rationale for the Defining Issues Test (DIT), an objective test of moral judgment which attempts to improve upon three aspects of Kohlberg's research: data collection, categorization of moral judgments (the scoring system), and method of indexing a subject's progress in a developmental sequence. In each case, the way in…
Descriptors: Comparative Analysis, Data Analysis, Data Collection, Human Development

Echternacht, Gary – Educational and Psychological Measurement, 1976
Compares various item option scoring methods with respect to coefficient alpha and a concurrent validity coefficient. Scoring methods compared were: formula scoring, a priori scoring, empirical scoring with an internal criterion, and two modifications of formula scoring. The empirically determined scoring system is seen as superior. (RC)
Descriptors: Aptitude Tests, Multiple Choice Tests, Response Style (Tests), Scoring Formulas
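The formula scoring compared in several of these studies is the classic correction for guessing, S = R - W/(k - 1), where R is the number right, W the number wrong, k the number of options per item, and omitted items earn zero. A minimal sketch:

```python
def formula_score(num_right, num_wrong, num_options):
    """Classic correction-for-guessing formula score:
    S = R - W / (k - 1). Omitted items contribute nothing,
    so random guessing has an expected gain of zero."""
    return num_right - num_wrong / (num_options - 1)

# A 4-option test: 60 right, 30 wrong, remaining items omitted.
print(formula_score(60, 30, 4))  # 50.0
```

The penalty W/(k - 1) is chosen so that blind guessing neither helps nor hurts in expectation, which is what distinguishes formula scoring from simple number-right scoring.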

Diamond, James J. – Journal of Educational Measurement, 1975
Investigates the reliability and validity of scores yielded from a new scoring formula. (Author/DEP)
Descriptors: Guessing (Tests), Multiple Choice Tests, Objective Tests, Scoring
Hendrickson, Gerry F.; Green, Bert F., Jr. – 1972
It has been shown that Guttman weighting of test options results in marked increases in the internal consistency of a test. However, the effect of this type of weighting on the structure of the test is not known. Hence, the purpose of this study is to compare the factor structure of Guttman-weighted and rights-only-weighted tests and to relate the…
Descriptors: Analysis of Variance, Correlation, Factor Analysis, Item Analysis
Hambleton, Ronald K.; Novick, Melvin R. – 1972
In this paper, an attempt has been made to synthesize some of the current thinking in the area of criterion-referenced testing as well as to provide the beginning of an integration of theory and method for such testing. Since criterion-referenced testing is viewed from a decision-theoretic point of view, approaches to reliability and validity…
Descriptors: Criterion Referenced Tests, Measurement Instruments, Measurement Techniques, Scaling
Kahl, Peter W. – Neusprachliche Mitteilungen, 1971
Descriptors: Achievement Tests, English (Second Language), Language Tests, Scoring