Showing 1 to 15 of 35 results
Peer reviewed
Yun, Young Ho; Kim, Yaeji; Sim, Jin A.; Choi, Soo Hyuk; Lim, Cheolil; Kang, Joon-ho – Journal of School Health, 2018
Background: The objective of this study was to develop the School Health Score Card (SHSC) and validate its psychometric properties. Methods: The development of the SHSC questionnaire included 3 phases: item generation, construction of domains and items, and field testing with validation. To assess the instrument's reliability and validity, we…
Descriptors: School Health Services, Psychometrics, Test Construction, Test Validity
Peer reviewed
Morgan, Grant B.; Moore, Courtney A.; Floyd, Harlee S. – Journal of Psychoeducational Assessment, 2018
Although content validity--how well each item of an instrument represents the construct being measured--is foundational in the development of an instrument, statistical validity is also important to the decisions that are made based on the instrument. The primary purpose of this study is to demonstrate how simulation studies can be used to assist…
Descriptors: Simulation, Decision Making, Test Construction, Validity
Peer reviewed
Gierl, Mark J.; Bulut, Okan; Guo, Qi; Zhang, Xinxin – Review of Educational Research, 2017
Multiple-choice testing is considered one of the most effective and enduring forms of educational assessment in practice today. This study presents a comprehensive review of the literature on multiple-choice testing in education, focused specifically on the development, analysis, and use of the incorrect options, which are also…
Descriptors: Multiple Choice Tests, Difficulty Level, Accuracy, Error Patterns
Peer reviewed
Lee, Minji K.; Sweeney, Kevin; Melican, Gerald J. – Educational Assessment, 2017
This study investigates the relationships among factor correlations, inter-item correlations, and the reliability estimates of subscores, providing a guideline with respect to psychometric properties of useful subscores. In addition, it compares subscore estimation methods with respect to reliability and distinctness. The subscore estimation…
Descriptors: Scores, Test Construction, Test Reliability, Test Validity
Peer reviewed
Zaidi, Nikki B.; Hwang, Charles; Scott, Sara; Stallard, Stefanie; Purkiss, Joel; Hortsch, Michael – Anatomical Sciences Education, 2017
Bloom's taxonomy was adopted to create a subject-specific scoring tool for histology multiple-choice questions (MCQs). This Bloom's Taxonomy Histology Tool (BTHT) was used to analyze teacher- and student-generated quiz and examination questions from a graduate level histology course. Multiple-choice questions using histological images were…
Descriptors: Taxonomy, Anatomy, Graduate Students, Scoring Formulas
Peer reviewed
Leslie, Laura J.; Gorman, Paul C. – European Journal of Engineering Education, 2017
Student engagement is vital in enhancing the student experience and encouraging deeper learning. Involving students in the design of assessment criteria is one way in which to increase student engagement. In 2011, a marking matrix was used at Aston University (UK) for logbook assessment (Group One) in a project-based learning module. The next…
Descriptors: Undergraduate Students, Evaluation Criteria, Student Participation, Learner Engagement
Peer reviewed
Docktor, Jennifer L.; Dornfeld, Jay; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Jackson, Koblar Alan; Mason, Andrew; Ryan, Qing X.; Yang, Jie – Physical Review Physics Education Research, 2016
Problem solving is a complex process valuable in everyday life and crucial for learning in the STEM fields. To support the development of problem-solving skills it is important for researchers and curriculum developers to have practical tools that can measure the difference between novice and expert problem-solving performance in authentic…
Descriptors: Introductory Courses, Physics, Problem Solving, Scoring Rubrics
Buri, John R.; Cromett, Cristina E.; Post, Maria C.; Landis, Anna Marie; Alliegro, Marissa C. – Online Submission, 2015
Rationale is presented for the derivation of a new measure of stressful life events for use with students [Negative Life Events Scale for Students (NLESS)]. Ten stressful life events questionnaires were reviewed, and the more than 600 items mentioned in these scales were culled based on the following criteria: (a) only long-term and unpleasant…
Descriptors: Experience, Social Indicators, Stress Variables, Affective Measures
Gafoor, K. Abdul; Naseer, A. R. – Online Submission, 2015
With a view to supporting instruction, formative and summative assessment, and providing model handwriting performance for students to compare with their own, a Malayalam handwriting scale is developed. Data from 2640 school students belonging to Malappuram, Palakkad, and Kozhikode districts, sampled by taking 240 students per grade…
Descriptors: Formative Evaluation, Summative Evaluation, Handwriting, Performance Based Assessment
Peer reviewed
Taskinen, Päivi H.; Steimel, Jochen; Gräfe, Linda; Engell, Sebastian; Frey, Andreas – Peabody Journal of Education, 2015
This study examined students' competencies in engineering education at the university level. First, we developed a competency model in one specific field of engineering: process dynamics and control. Then, the theoretical model was used as a frame to construct test items to measure students' competencies comprehensively. In the empirical…
Descriptors: Models, Engineering Education, Test Items, Outcome Measures
Frary, Robert B. – 1980
Ordinal response modes for multiple choice tests are those under which the examinee marks one or more choices in an effort to identify the correct choice or to include it in a proper subset of the choices. Two ordinal response modes, answer-until-correct and Coombs' elimination of choices that examinees identify as wrong, were analyzed for scoring…
Descriptors: Guessing (Tests), Multiple Choice Tests, Responses, Scoring
Kazelskis, Richard; And Others – 1987
Numerous techniques are available for determining cutoff scores for distinguishing between proficient and non-proficient examinees. One of the more commonly cited techniques for standard setting is the Nedelsky Method. In response to criticism of this method, Gross (1985) presented a revised Nedelsky technique. However, no research beyond that…
Descriptors: Competence, Cutting Scores, Measurement Techniques, Scoring Formulas
Peer reviewed
Pearson, Lea; Elliott, Colin – Journal of Moral Education, 1980
Developed as part of the British Ability Scales for ages 2-17, the Social Reasoning Scale was initially based on Kohlberg's invariant moral development stages, although substantial modifications were later introduced. In its standardization, an age progression was noted. Administration procedures, scoring criteria, and an illustrative example are…
Descriptors: Developmental Stages, Elementary Secondary Education, Intelligence Tests, Moral Development
Hutchinson, T. P. – 1984
One means of learning about the processes operating in a multiple choice test is to include some test items, called nonsense items, which have no correct answer. This paper compares two versions of a mathematical model of test performance to interpret test data that includes both genuine and nonsense items. One formula is based on the usual…
Descriptors: Foreign Countries, Guessing (Tests), Mathematical Models, Multiple Choice Tests
Peer reviewed
Albanese, Mark A. – Evaluation and the Health Professions, 1982
Findings regarding formats and scoring formulas for multiple-choice test items with more than one correct response are presented. Strong cluing effects in the Type K format, increasing the correct score percentage and reducing test reliability, recommend using the Type X format. Alternative scoring methods are discussed. (Author/CM)
Descriptors: Health Occupations, Multiple Choice Tests, Professional Education, Response Style (Tests)