Showing 1 to 15 of 28 results
Peer reviewed
Mitchell, Joshua J.; Ryder, Andrew J. – New Directions for Student Services, 2013
Dashboard systems are increasingly popular as assessment and performance management tools in higher education. This chapter examines the use of dashboards in student affairs, including examples of key indicators and considerations for developing and implementing these tools. The chapter begins with an overview of the origins of dashboards, from…
Descriptors: Evaluation Methods, Higher Education, Student Personnel Services, Information Management
Peer reviewed
Supovitz, Jonathan; Foley, Ellen; Mishook, Jacob – Education Policy Analysis Archives, 2012
Data have long been considered a key factor in organizational decision-making (Simon, 1955; Lindblom & Cohen, 1979). Data offer perspective, guidance, and insights that inform policy and practice (Newell & Simon, 1972; Kennedy, 1984). Recently, education policymakers have invested in the use of data for organizational improvement in states…
Descriptors: Academic Achievement, Politics of Education, Educational Indicators, Management Information Systems
Peer reviewed
Loh, Christian Sebastian – International Journal of Virtual and Personal Learning Environments, 2013
Today's economic situation demands that learning organizations become more diligent in their business dealings to reduce cost and increase the bottom line for survival. While there are many champions and proponents claiming that game-based learning (GBL) is sure to improve learning, researchers have, thus far, been unable to (re)produce concrete,…
Descriptors: Investment, Outcomes of Education, Educational Games, Instructional Improvement
Peer reviewed
Parmer, Sondra M.; Parmer, Greg; Struempler, Barb – Journal of Extension, 2012
Using clickers to gauge student understanding in large classrooms is well documented. Less well known is the effectiveness of using clickers with youth for test taking in large-scale Extension programs. This article describes the benefits and challenges of collecting evaluation data using clickers with a third-grade population participating in a…
Descriptors: Data Collection, Audience Response Systems, Research Methodology, Technology Uses in Education
Barley, Zoe; Wegner, Sandra K. – Mid-continent Research for Education and Learning (McREL), 2009
In state needs assessment meetings held in late June 2008, the leadership of two Central Region state departments of education, South Dakota and Missouri, asked Regional Educational Laboratory (REL) Central to determine how the states in the region evaluate their supplemental educational service (SES) providers. Specifically, these SEA staff…
Descriptors: Data Collection, Evaluation Criteria, Evaluation Methods, State Policy
Peer reviewed
Divorski, S.; Scheirer, M. A. – Evaluation and Program Planning, 2001
Identified methods that six federal agencies reported using to verify and validate performance data used in complying with the requirements of the Government Performance and Results Act. Developed a framework containing four major approaches to assessing and improving data quality. (SLD)
Descriptors: Data Collection, Evaluation Methods, Federal Government, Performance Based Assessment
King Research, Inc., Rockville, MD. – 1982
This how-to-do-it guide for Oklahoma public libraries to use in evaluating their performance defines the individual performance measures, explains the data collection and analysis procedures, shows example calculations, and discusses the analysis and presentation of the performance measures. The performance measures are presented in terms of the…
Descriptors: Data Analysis, Data Collection, Evaluation Criteria, Evaluation Methods
Brown, Mark G. – Training and Development Journal, 1980
Presents an evaluation design which, because of its simplicity and the ease with which the data collected may be analyzed, is ideal for use in the "real world" as a practical method for evaluating the impact of training on employee behavior. (CT)
Descriptors: Data Analysis, Data Collection, Employee Attitudes, Evaluation Methods
Peer reviewed
Harrison, William C.; Peterson, Kent D. – Urban Review, 1986
Describes the characteristics of a statewide principal evaluation system. Even when an evaluation system is detailed, specific, and statewide, inconsistencies develop because of the special nature of principals' work. The system breaks down in the sampling of performance and outputs, and in the communication of negative feedback. (PS)
Descriptors: Administrator Evaluation, Data Collection, Evaluation Criteria, Evaluation Methods
Peterson, Kenneth D. – 2000
This handbook advocates a new approach to teacher evaluation as a cooperative effort undertaken by a group of professionals. Part 1 describes the need for changed teacher evaluation, and part 2 outlines ways to use multiple data sources, including student and parent reports, peer review of materials, student achievement results, teacher tests,…
Descriptors: Data Collection, Elementary Secondary Education, Evaluation Methods, Faculty Evaluation
Peer reviewed
Carliner, Saul – Performance Improvement, 1997
Proposes a four-level model adapting the Kirkpatrick model of training evaluation to technical manuals and services, assessing: (1) user satisfaction; (2) user performance; (3) client performance; and (4) client satisfaction. Discusses assessing the value of work, limitations in evaluating technical communication products, and the…
Descriptors: Data Collection, Employment Practices, Evaluation Criteria, Evaluation Methods
Rasor, Richard A.; And Others – 1983
Guidelines are presented for implementing the self-evaluation model for instructional programs developed at American River College (ARC). After stating the reasons for conducting program reviews, the paper describes ARC's model, which uses a mini-accreditation approach to assess the extent of goal attainment within a broad category of activities…
Descriptors: Behavioral Objectives, Community Colleges, Data Collection, Evaluation Criteria
Stewart, E. Elizabeth – 1981
Context effects are defined as being influences on test performance associated with the content of successively presented test items or sections. Four types of context effects are identified: (1) direct context effects (practice effects) which occur when performance on items is affected by the examinee having been exposed to similar types of…
Descriptors: Context Effect, Data Collection, Error of Measurement, Evaluation Methods
King Research, Inc., Rockville, MD. – 1982
Intended for use by state library officials with policy responsibilities, this report presents a process for determining state levels of adequacy of public library services, measuring the current statewide performance, and planning statewide strategies for improving performance. Several facets of library service and resources are addressed, and…
Descriptors: Data Collection, Evaluation Criteria, Evaluation Methods, Guidelines
Bartek, Mary M. – Understanding Our Gifted, 2003
Using a sci-fi matchmaking scenario to illustrate the fallibility of technology, this article discusses the practice of reducing a student to a series of test scores for gifted identification. The limits of testing are addressed, and student performance and behavior are urged as additional categories for identifying aptitude and achievement.…
Descriptors: Ability Identification, Academic Achievement, Classroom Observation Techniques, Data Collection