Showing 1 to 15 of 33 results
Peer reviewed
Yun, Young Ho; Kim, Yaeji; Sim, Jin A.; Choi, Soo Hyuk; Lim, Cheolil; Kang, Joon-ho – Journal of School Health, 2018
Background: The objective of this study was to develop the School Health Score Card (SHSC) and validate its psychometric properties. Methods: The development of the SHSC questionnaire included 3 phases: item generation, construction of domains and items, and field testing with validation. To assess the instrument's reliability and validity, we…
Descriptors: School Health Services, Psychometrics, Test Construction, Test Validity
Peer reviewed
Bardhoshi, Gerta; Erford, Bradley T. – Measurement and Evaluation in Counseling and Development, 2017
Precision is a key facet of test development, with score reliability determined primarily according to the types of error one wants to approximate and demonstrate. This article identifies and discusses several primary forms of reliability estimation: internal consistency (i.e., split-half, KR-20, a), test-retest, alternate forms, interscorer, and…
Descriptors: Scores, Test Reliability, Accuracy, Pretests Posttests
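The internal-consistency estimates the abstract names (split-half, KR-20, coefficient alpha) are simple to compute from a scored response matrix. A minimal sketch of KR-20 on invented 0/1 data (the matrix is hypothetical, not from the article):

```python
import numpy as np

# Hypothetical response matrix: 6 examinees x 5 dichotomous (0/1) items.
responses = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 0, 1],
    [1, 1, 1, 1, 0],
])

def kr20(x):
    """Kuder-Richardson Formula 20: internal consistency for 0/1 items."""
    k = x.shape[1]                         # number of items
    p = x.mean(axis=0)                     # proportion correct per item
    item_var = (p * (1 - p)).sum()         # sum of item variances p*q
    total_var = x.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

print(round(kr20(responses), 3))  # -> 0.689
```

Coefficient alpha is the same formula with the p*q term replaced by the observed item variances, which is what extends it beyond dichotomous items.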
Peer reviewed
Lee, Minji K.; Sweeney, Kevin; Melican, Gerald J. – Educational Assessment, 2017
This study investigates the relationships among factor correlations, inter-item correlations, and the reliability estimates of subscores, providing a guideline with respect to psychometric properties of useful subscores. In addition, it compares subscore estimation methods with respect to reliability and distinctness. The subscore estimation…
Descriptors: Scores, Test Construction, Test Reliability, Test Validity
Peer reviewed
Docktor, Jennifer L.; Dornfeld, Jay; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Jackson, Koblar Alan; Mason, Andrew; Ryan, Qing X.; Yang, Jie – Physical Review Physics Education Research, 2016
Problem solving is a complex process valuable in everyday life and crucial for learning in the STEM fields. To support the development of problem-solving skills it is important for researchers and curriculum developers to have practical tools that can measure the difference between novice and expert problem-solving performance in authentic…
Descriptors: Introductory Courses, Physics, Problem Solving, Scoring Rubrics
Gafoor, K. Abdul; Naseer, A. R. – Online Submission, 2015
With a view to supporting instruction, formative and summative assessment, and to providing model handwriting performance for students to compare their own performance against, a Malayalam handwriting scale is developed. Data from 2640 school students belonging to Malappuram, Palakkad and Kozhikode districts, sampled by taking 240 students per grade…
Descriptors: Formative Evaluation, Summative Evaluation, Handwriting, Performance Based Assessment
Peer reviewed
Taskinen, Päivi H.; Steimel, Jochen; Gräfe, Linda; Engell, Sebastian; Frey, Andreas – Peabody Journal of Education, 2015
This study examined students' competencies in engineering education at the university level. First, we developed a competency model in one specific field of engineering: process dynamics and control. Then, the theoretical model was used as a frame to construct test items to measure students' competencies comprehensively. In the empirical…
Descriptors: Models, Engineering Education, Test Items, Outcome Measures
Peer reviewed
Ahmed, Ayesha; Pollitt, Alastair – Assessment in Education: Principles, Policy & Practice, 2011
At the heart of most assessments lies a set of questions, and those who write them must achieve "two" things. Not only must they ensure that each question elicits the kind of performance that shows how "good" pupils are at the subject, but they must also ensure that each mark scheme gives more marks to those who are…
Descriptors: Academic Achievement, Classification, Educational Quality, Quality Assurance
Frary, Robert B. – 1980
Ordinal response modes for multiple choice tests are those under which the examinee marks one or more choices in an effort to identify the correct choice, or to include it in a proper subset of the choices. Two ordinal response modes, answer-until-correct and Coombs's elimination of choices that examinees identify as wrong, were analyzed for scoring…
Descriptors: Guessing (Tests), Multiple Choice Tests, Responses, Scoring
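Answer-until-correct scoring is easy to sketch: the examinee responds repeatedly until hitting the right option, and credit depends on how many attempts were needed. The linear weighting below is one common, illustrative choice, not necessarily the one Frary analyzed:

```python
def auc_score(attempts, n_choices=4):
    """Answer-until-correct scoring (illustrative linear weighting):
    1.0 for a first-try success, decreasing with each extra attempt,
    0.0 when every option had to be tried."""
    return max(n_choices - attempts, 0) / (n_choices - 1)

# Attempts one hypothetical examinee needed on five 4-choice items:
attempts = [1, 2, 1, 4, 3]
total = sum(auc_score(a) for a in attempts)
print(round(total, 6))  # -> 3.0
```

Because partial credit is awarded for near-misses, the score distribution is finer-grained than rights-only number-correct scoring.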
Reilly, Richard R. – 1972
Because previous reports have suggested that the lowered validity of tests scored with empirical option weights might be explained by a capitalization of the keying procedures on omitting tendencies, a procedure was devised to key options empirically with a "correction-for-guessing" constraint. Use of the new procedure with Graduate…
Descriptors: Correlation, Data Analysis, Guessing (Tests), Mathematical Applications
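The "correction-for-guessing" constraint mentioned above rests on the classic rights-minus-wrongs formula score, sketched here (the function name and numbers are illustrative):

```python
def formula_score(n_right, n_wrong, n_choices):
    """Classic correction-for-guessing formula score: R - W/(k-1).
    With k options, a blind guess earns 1 point with probability 1/k
    and costs 1/(k-1) otherwise, so its expected contribution is zero;
    omitted items simply contribute nothing."""
    return n_right - n_wrong / (n_choices - 1)

# 40 right, 10 wrong, 5 omitted on 4-option items:
print(round(formula_score(40, 10, 4), 2))  # -> 36.67
```

Under this rule a purely random guesser expects the same score as an examinee who omits, which is why the constraint interacts with omitting tendencies as the abstract suggests.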
Peer reviewed
Duncan, George T.; Milton, E. O. – Psychometrika, 1978
A multiple-answer multiple-choice test is one which offers several alternate choices for each stem and any number of those choices may be considered to be correct. In this article, a class of scoring procedures called the binary class is discussed. (Author/JKS)
Descriptors: Answer Keys, Measurement Techniques, Multiple Choice Tests, Scoring Formulas
Niemi, David; Wang, Jia; Wang, Haiwen; Vallone, Julia; Griffin, Noelle – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2007
There are usually many testing activities going on in a school, with different tests serving different purposes; organization and planning are therefore key to creating an efficient system for assessing the most important educational objectives. In the ideal case, an assessment system will be able to inform on student learning, instruction and…
Descriptors: School Administration, Educational Objectives, Administration, Public Schools
Peer reviewed
Eakin, Richard R.; Long, Clifford A. – Educational and Psychological Measurement, 1977
A scoring technique for true-false tests is presented. The technique, paired-item scoring, involves combining two statements and having the student select one of the four possible combinations: true-true, false-true, true-false, and false-false. The combined item is treated as a multiple choice item. (Author/JKS)
Descriptors: Guessing (Tests), Measurement Techniques, Multiple Choice Tests, Objective Tests
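The paired-item technique is easy to simulate; the answer key and responses below are invented for illustration:

```python
# Hypothetical key and one student's responses for six true-false items.
key = [True, False, True, True, False, False]
student = [True, False, False, True, False, True]

def paired_item_score(key, resp):
    """Score consecutive true-false items in pairs, treating each pair
    as one four-option multiple-choice item (TT, TF, FT, FF). Credit
    requires BOTH responses in a pair to match the key, so a blind
    guess succeeds with probability 1/4 rather than 1/2."""
    key_pairs = zip(key[::2], key[1::2])
    resp_pairs = zip(resp[::2], resp[1::2])
    return sum(k == r for k, r in zip(key_pairs, resp_pairs))

print(paired_item_score(key, student))  # -> 1 (only the first pair matches)
```

Halving the number of scoreable units in exchange for quartering the guessing probability is the essential trade-off of the method.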
Peer reviewed
Stauffer, A. J. – Educational and Psychological Measurement, 1974
Descriptors: Attitude Change, Attitude Measures, Comparative Analysis, Educational Research
Peer reviewed
Echternacht, Gary – Educational and Psychological Measurement, 1976
Compares various item option scoring methods with respect to coefficient alpha and a concurrent validity coefficient. Scoring methods compared were: formula scoring, a priori scoring, empirical scoring with an internal criterion, and two modifications of formula scoring. The empirically determined scoring system is seen as superior. (RC)
Descriptors: Aptitude Tests, Multiple Choice Tests, Response Style (Tests), Scoring Formulas
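Coefficient alpha, the criterion used above to compare scoring methods, generalizes KR-20 to non-dichotomous item scores such as empirically weighted options. A sketch on invented option-weighted data (the matrix is hypothetical):

```python
import numpy as np

# Hypothetical option-weighted item scores: 5 examinees x 4 items.
scores = np.array([
    [0.8, 0.6, 1.0, 0.4],
    [0.2, 0.4, 0.6, 0.0],
    [1.0, 0.8, 0.8, 0.6],
    [0.4, 0.2, 0.4, 0.2],
    [0.6, 1.0, 0.6, 0.8],
])

def cronbach_alpha(x):
    """Coefficient alpha: (k/(k-1)) * (1 - sum of item variances / total-score variance)."""
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

print(round(cronbach_alpha(scores), 2))  # -> 0.86
```

Comparing alpha across scoring systems on the same responses, as Echternacht does, asks which weighting makes the items hang together most consistently.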
Peer reviewed
Albanese, Mark A. – Evaluation and the Health Professions, 1982
Findings regarding formats and scoring formulas for multiple-choice test items with more than one correct response are presented. Strong cluing effects in the Type K format, increasing the correct score percentage and reducing test reliability, recommend using the Type X format. Alternative scoring methods are discussed. (Author/CM)
Descriptors: Health Occupations, Multiple Choice Tests, Professional Education, Response Style (Tests)