Showing 1 to 15 of 25 results
Peer reviewed
Direct link
Benjamin M. Torsney; Sarah Rawls; Joseph I. Eisman; Catherine Pressimone Beckowski; Cheryl B. Torsney – Educational and Developmental Psychologist, 2025
Objective: The objective of this study was threefold: (a) to create a rubric for response complexity (RC), defined as an admixture of response length, grammatical diversity, categorisation, and sophistication; (b) to measure behavioural and cognitive engagement through students' written responses on a school-based written activity; and (c) to…
Descriptors: College Students, Learner Engagement, Responses, Difficulty Level
Peer reviewed
Direct link
Cummins, R. Glenn; Smith, David W.; Callison, Coy; Mukhtar, Saqib – Journal of Extension, 2018
The case study addressed in this article illustrates the value of continuous response measurement (CRM) for testing and refining messages produced for distribution to Extension audiences. We used CRM to evaluate the responses of Extension educators and Natural Resources Conservation Service technical service providers to a video describing…
Descriptors: Extension Education, Extension Agents, Case Studies, Audience Response
Peer reviewed
Direct link
St.Clair, Travis; Cook, Thomas D.; Hallberg, Kelly – American Journal of Evaluation, 2014
Although evaluators often use an interrupted time series (ITS) design to test hypotheses about program effects, there are few empirical tests of the design's validity. We take a randomized experiment on an educational topic and compare its effects to those from a comparative ITS (CITS) design that uses the same treatment group as the experiment…
Descriptors: Time, Evaluation Methods, Measurement Techniques, Research Design
Peer reviewed
Direct link
Talbot, Robert M., III – School Science and Mathematics, 2013
In order to evaluate the effectiveness of curricular or instructional innovations, researchers often attempt to measure change in students' conceptual understanding of the target subject matter. The measurement of change is therefore a critical endeavor. Often, this is accomplished through pre-post testing using an assessment such as a…
Descriptors: Measurement Techniques, Item Response Theory, Scientific Concepts, Physics
Peer reviewed
Direct link
Payne, Pamela B.; McDonald, Daniel A. – Journal of Extension, 2015
Community-based education programs must demonstrate effectiveness to various funding sources. The pilot study reported here (funded by CYFAR, NIFA, USDA award #2008-41520-04810) had the goal of determining whether state-level programs with varied curricula could use a common evaluation tool to demonstrate efficacy. Results in parenting and youth…
Descriptors: Community Programs, Extension Education, State Programs, Evaluation Methods
Peer reviewed
Direct link
Witzig, Stephen B.; Rebello, Carina M.; Siegel, Marcelle A.; Freyermuth, Sharyn K.; Izci, Kemal; McClure, Bruce – Research in Science Education, 2014
Identifying students' conceptual scientific understanding is difficult if the appropriate tools are not available for educators. Concept inventories have become a popular tool to assess student understanding; traditionally, however, they are multiple-choice tests. International science education standards documents advocate that assessments…
Descriptors: Test Construction, Scientific Concepts, Concept Formation, Knowledge Level
Peer reviewed
PDF on ERIC
Cutumisu, Maria; Blair, Kristen P.; Chin, Doris B.; Schwartz, Daniel L. – Journal of Learning Analytics, 2015
We introduce one instance of a game-based assessment designed to measure students' self-regulated learning choices. We describe our overarching measurement strategy and we present "Posterlet", an assessment game in which students design posters and learn graphic design principles from feedback. We designed "Posterlet" to assess…
Descriptors: Evaluation Methods, Games, Measurement Techniques, Student Evaluation
Peer reviewed
Direct link
Frost, Jørgen; Ottem, Ernst; Snow, Catherine E.; Hagtvet, Bente E.; Lyster, Solveig Alma Helaas; White, Claire – Scandinavian Journal of Educational Research, 2014
Two ways of measuring change are presented and compared: a conventional "change score", defined as the difference between scores before and after an interim period, and a process-oriented approach focusing on detailed analysis of conceptually defined response patterns. The validity of the two approaches was investigated. Vocabulary…
Descriptors: Vocabulary, Scores, Knowledge Level, Vocabulary Development
Studer, Cassandra; Junker, Brian; Chan, Helen – Society for Research on Educational Effectiveness, 2012
The authors aimed to incorporate learning into the cognitive assessment framework that exists for static assessment data. To accomplish this, they derived a common likelihood function for dynamic models and introduced the Parameter Driven Process for Change + Cognitive Diagnosis Model (PDPC + CDM), a dynamic model that tracks learning…
Descriptors: Foreign Countries, Data Analysis, Cognitive Measurement, Measurement Techniques
Isenberg, Eric; Hock, Heinrich – Mathematica Policy Research, Inc., 2012
This report describes the value-added models used as part of teacher evaluation systems in the District of Columbia Public Schools (DCPS) and in eligible DC charter schools participating in "Race to the Top." The authors estimated: (1) teacher effectiveness in DCPS and eligible DC charter schools during the 2011-2012 school year; and (2)…
Descriptors: Teacher Evaluation, Value Added Models, Public Schools, Charter Schools
Lanigan, Mary L. – Performance Improvement Quarterly, 2008
Researchers have linked self-efficacy to performance but have not investigated the relationship between self-efficacy and knowledge and skills tests within a training evaluation setting; examining that relationship is the main purpose of this study. Additionally, researchers have acknowledged that self-efficacy scores may be distorted as a result of assessors…
Descriptors: Self Efficacy, Program Effectiveness, Measures (Individuals), Evaluation Methods
Peer reviewed
Direct link
Brydges, Ryan; Carnahan, Heather; Dubrowski, Adam – Advances in Health Sciences Education, 2009
Directed self-guidance, whereby trainees independently practice a skill set in a structured setting, may be an effective technique for novice training. Currently, however, most evaluation methods require an expert to be present during practice. The study aim was to determine whether absolute symmetry error, a clinically important measure that can be…
Descriptors: Feedback (Response), Pretests Posttests, Evaluation Methods, Trainees
Hoffman, David H. – Educational Technology, 1974
Descriptors: Evaluation, Evaluation Methods, Individualized Instruction, Individualized Programs
Peer reviewed
Peck, Laura R. – American Journal of Evaluation, 2003
Proposes a methodology for analyzing the impacts of social programs on previously unexamined subgroups. The approach estimates the impact of programs on subgroups identified by a posttreatment choice while maintaining the integrity of the experimental research design. (SLD)
Descriptors: Evaluation Methods, Experiments, Measurement Techniques, Outcomes of Treatment
O'Neal, Marcia R.; And Others – 1983
A difficulty associated with the use of Golub and Frederick's syntactic density score was the time required for hand tabulation. This drawback was resolved by Kidder's development of a computer program that calculates a syntactic density score for writing samples. The purpose of this study was to examine the sensitivity of the Syntactic…
Descriptors: Computer Programs, Evaluation Methods, Measurement Techniques, Pretests Posttests