Showing 1 to 15 of 16 results
Peer reviewed
Yun, Young Ho; Kim, Yaeji; Sim, Jin A.; Choi, Soo Hyuk; Lim, Cheolil; Kang, Joon-ho – Journal of School Health, 2018
Background: The objective of this study was to develop the School Health Score Card (SHSC) and validate its psychometric properties. Methods: The development of the SHSC questionnaire included 3 phases: item generation, construction of domains and items, and field testing with validation. To assess the instrument's reliability and validity, we…
Descriptors: School Health Services, Psychometrics, Test Construction, Test Validity
Peer reviewed
Morgan, Grant B.; Moore, Courtney A.; Floyd, Harlee S. – Journal of Psychoeducational Assessment, 2018
Although content validity--how well each item of an instrument represents the construct being measured--is foundational in the development of an instrument, statistical validity is also important to the decisions that are made based on the instrument. The primary purpose of this study is to demonstrate how simulation studies can be used to assist…
Descriptors: Simulation, Decision Making, Test Construction, Validity
Peer reviewed
Gierl, Mark J.; Bulut, Okan; Guo, Qi; Zhang, Xinxin – Review of Educational Research, 2017
Multiple-choice testing is considered one of the most effective and enduring forms of educational assessment that remains in practice today. This study presents a comprehensive review of the literature on multiple-choice testing in education focused, specifically, on the development, analysis, and use of the incorrect options, which are also…
Descriptors: Multiple Choice Tests, Difficulty Level, Accuracy, Error Patterns
Buri, John R.; Cromett, Cristina E.; Post, Maria C.; Landis, Anna Marie; Alliegro, Marissa C. – Online Submission, 2015
Rationale is presented for the derivation of a new measure of stressful life events for use with students [Negative Life Events Scale for Students (NLESS)]. Ten stressful life events questionnaires were reviewed, and the more than 600 items mentioned in these scales were culled based on the following criteria: (a) only long-term and unpleasant…
Descriptors: Experience, Social Indicators, Stress Variables, Affective Measures
Peer reviewed
Taskinen, Päivi H.; Steimel, Jochen; Gräfe, Linda; Engell, Sebastian; Frey, Andreas – Peabody Journal of Education, 2015
This study examined students' competencies in engineering education at the university level. First, we developed a competency model in one specific field of engineering: process dynamics and control. Then, the theoretical model was used as a frame to construct test items to measure students' competencies comprehensively. In the empirical…
Descriptors: Models, Engineering Education, Test Items, Outcome Measures
Peer reviewed
Duncan, George T.; Milton, E. O. – Psychometrika, 1978
A multiple-answer multiple-choice test is one that offers several alternative choices for each stem, any number of which may be correct. In this article, a class of scoring procedures called the binary class is discussed. (Author/JKS)
Descriptors: Answer Keys, Measurement Techniques, Multiple Choice Tests, Scoring Formulas
Hutchinson, T. P. – 1984
One means of learning about the processes operating in a multiple choice test is to include some test items, called nonsense items, which have no correct answer. This paper compares two versions of a mathematical model of test performance to interpret test data that includes both genuine and nonsense items. One formula is based on the usual…
Descriptors: Foreign Countries, Guessing (Tests), Mathematical Models, Multiple Choice Tests
Jaeger, Richard M. – 1980
Five statistical indices are developed and described which may be used for determining (1) when linear equating of two approximately parallel tests is adequate, and (2) when a more complex method such as equipercentile equating must be used. The indices were based on: (1) similarity of cumulative score distributions; (2) shape of the raw-score to…
Descriptors: College Entrance Examinations, Difficulty Level, Equated Scores, Higher Education
Smith, Richard M. – 1982
There have been many attempts to formulate a procedure for extracting information from incorrect responses to multiple choice items, i.e., the assessment of partial knowledge. The results of these attempts can be described as inconsistent at best. It is hypothesized that these inconsistencies arise from three methodological problems: the…
Descriptors: Difficulty Level, Evaluation Methods, Goodness of Fit, Guessing (Tests)
Peer reviewed
Lord, Frederic M. – Educational and Psychological Measurement, 1971
Descriptors: Ability, Adaptive Testing, Computer Oriented Programs, Difficulty Level
Plake, Barbara S.; Melican, Gerald J. – 1985
A methodology for investigating the influence of correction-for-guessing directions and formula scoring on test performance was studied. Experts in the test content field used a judgmental item appraisal system to estimate the knowledge of the minimally competent candidate (MCC) and to predict those items that the MCC would omit on the test under…
Descriptors: College Students, Guessing (Tests), Higher Education, Mathematics Tests
American Coll. Testing Program, Iowa City, IA. – 1981
UNIACT, a major component of the American College Testing (ACT) Assessment Program, is one of the first interest inventories to employ a new technique for ensuring sex fairness in the reporting of scores. UNIACT was constructed with the goal that distributions of career options suggested to males and females would be similar. It is intended to…
Descriptors: Adults, Career Planning, Interest Inventories, Minority Groups
Wood, Robert – Evaluation in Education: International Progress, 1977
The author surveys literature and practice, primarily in Great Britain and the United States, about multiple-choice testing, comments on criticisms, and defends the state of the art. Various item types, item writing, test instructions and scoring formulas, item analysis, and test construction are discussed. An extensive bibliography is appended.…
Descriptors: Achievement Tests, Item Analysis, Multiple Choice Tests, Scoring Formulas
Maurelli, Vincent A.; Weiss, David J. – 1981
A Monte Carlo simulation was conducted to assess the effects of varying subtest order, subtest termination criterion, and variable versus fixed entry in an adaptive testing strategy for test batteries on the psychometric properties of an existent achievement test battery. Comparisons were made among conventionally administered tests and adaptive…
Descriptors: Achievement Tests, Adaptive Testing, Computer Assisted Testing, Latent Trait Theory
Legg, Sue M. – 1982
A case study of the Florida Teacher Certification Examination (FTCE) program was described to assist others launching the development of large-scale item banks. FTCE has four subtests: Mathematics, Reading, Writing, and Professional Education. Rasch calibrated item banks have been developed for all subtests except Writing. The methods used to…
Descriptors: Cutting Scores, Difficulty Level, Field Tests, Item Analysis