Showing all 5 results
Peer reviewed
Zumrawi, Abdel Azim; Bates, Simon P.; Schroeder, Marianne – Educational Research and Evaluation, 2014
This paper addresses the determination of statistically desirable response rates in students' surveys, with emphasis on assessing the effect of underlying variability in the student evaluation of teaching (SET). We discuss factors affecting the determination of adequate response rates and highlight challenges caused by non-response and lack of…
Descriptors: Inferences, Test Reliability, Response Rates (Questionnaires), Student Evaluation of Teacher Performance
Peer reviewed
Ruiz-Primo, Maria Araceli; Li, Min; Wills, Kellie; Giamellaro, Michael; Lan, Ming-Chih; Mason, Hillary; Sands, Deanna – Journal of Research in Science Teaching, 2012
The purpose of this article is to address a major gap in the instructional sensitivity literature on how to develop instructionally sensitive assessments. We propose an approach to developing and evaluating instructionally sensitive assessments in science and test this approach with one elementary life-science module. The assessment we developed…
Descriptors: Effect Size, Inferences, Student Centered Curriculum, Test Construction
Peer reviewed
Hambleton, Ronald K.; Jones, Russell W. – Educational Measurement: Issues and Practice, 1993
This National Council on Measurement in Education (NCME) instructional module compares classical test theory and item response theory and describes their applications in test development. Related concepts, models, and methods are explored; and advantages and disadvantages of each framework are reviewed. (SLD)
Descriptors: Comparative Analysis, Educational Assessment, Graphs, Item Response Theory
Peer reviewed
PDF available on ERIC
Wang, Xiaohui; Bradlow, Eric T.; Wainer, Howard – ETS Research Report Series, 2005
SCORIGHT is a general-purpose computer program for scoring tests. It models tests made up of dichotomously or polytomously rated items, or any combination of the two, through a generalized item response theory (IRT) formulation. The items can be presented independently or grouped into clumps of allied items (testlets) or in…
Descriptors: Computer Assisted Testing, Statistical Analysis, Test Items, Bayesian Statistics
Cantor, Jeffrey A. – Performance and Instruction, 1990
Describes a process for evaluating the effectiveness of formal courses or segments of training instruction, including lessons or modules. Topics discussed include focusing on objectives; objective and test item evaluation; on-the-job training; task levels; transfer of training; presentation evaluation; and evaluation of instructional…
Descriptors: Behavioral Objectives, Course Evaluation, Evaluation Methods, Instructional Effectiveness