Showing all 6 results
May, Henry; Blackman, Horatio; Van Horne, Sam; Tilley, Katherine; Farley-Ripple, Elizabeth N.; Shewchuk, Samantha; Agboh, Darren; Micklos, Deborah Amsden – Center for Research Use in Education, 2022
In this technical report, the Center for Research Use in Education (CRUE) presents the methodological design of a large-scale quantitative investigation of research use by school-based practitioners through the "Survey of Evidence in Education for Schools (SEE-S)." It documents the major technical aspects of the development of SEE-S,…
Descriptors: Surveys, Schools, Educational Research, Research Utilization
Peer reviewed
Rupp, André A. – Applied Measurement in Education, 2018
This article discusses critical methodological design decisions for collecting, interpreting, and synthesizing empirical evidence during the design, deployment, and operational quality-control phases for automated scoring systems. The discussion is inspired by work on operational large-scale systems for automated essay scoring but many of the…
Descriptors: Design, Automation, Scoring, Test Scoring Machines
Secolsky, Charles, Ed.; Denison, D. Brian, Ed. – Routledge, Taylor & Francis Group, 2011
Increased demands for colleges and universities to engage in outcomes assessment for accountability purposes have accelerated the need to bridge the gap between higher education practice and the fields of measurement, assessment, and evaluation. The "Handbook on Measurement, Assessment, and Evaluation in Higher Education" provides higher…
Descriptors: Generalizability Theory, Higher Education, Institutional Advancement, Teacher Effectiveness
Peer reviewed
Crocker, Linda; And Others – Journal of Educational Measurement, 1988
Using generalizability theory as a framework, the problem of assessing the content validity of standardized achievement tests is considered. Four designs to assess test-item fit to a curriculum are described, and procedures for determining the optimal number of raters and schools in a content-validation decision-making study are outlined. (TJH)
Descriptors: Achievement Tests, Content Validity, Decision Making, Elementary Education
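As general background for the entry above (not drawn from the Crocker et al. article itself), a minimal sketch of the generalizability-theory quantities that a raters-and-schools decision study typically optimizes; the fully crossed item x rater x school design and all symbols here are illustrative assumptions, not the article's notation:

\[
\sigma^2_{\delta} = \frac{\sigma^2_{ir}}{n'_r} + \frac{\sigma^2_{is}}{n'_s} + \frac{\sigma^2_{irs,e}}{n'_r\, n'_s},
\qquad
E\rho^2 = \frac{\sigma^2_{i}}{\sigma^2_{i} + \sigma^2_{\delta}}
\]

Here \(\sigma^2_{i}\) is the variance component for items (the objects of measurement in a content-validity study), the interaction components come from a generalizability (G) study, and \(n'_r\) and \(n'_s\) are the planned numbers of raters and schools; increasing either shrinks the relative error variance \(\sigma^2_{\delta}\) and raises the generalizability coefficient \(E\rho^2\), which is the trade-off such a decision study evaluates.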
Kim, Yang Boon; Lee, Jong Sung – 1990
The empirical validity of generalizability theory was investigated by applying two three-facet designs to data obtained in 1988 from administration of the Scientific Thinking and Research Skill Test (STRST). The decision validity of the STRST was also examined. Subjects were 125 fifth-grade and 125 sixth-grade students who were administered the…
Descriptors: Analysis of Variance, Decision Making, Elementary School Students, Generalizability Theory
Capie, William; Cronin, Linda – 1986
This paper assesses the credibility of a single total instrument score and various logical sub-scores derived from a series of summative judgments about the quality of teaching performance. The objectives were to compare the generalizability of alternative Teacher Performance Assessment Instrument (TPAI) scores, to compare the dependability of…
Descriptors: Academic Achievement, Correlation, Decision Making, Evaluation Criteria