Yates, Brian T. – New Directions for Evaluation, 2012
The value of a program can be understood as referring not only to outcomes, but also to how those outcomes compare with the types and amounts of resources expended to produce them. Major potential mistakes and biases in assessing the worth of resources consumed, as well as the value of outcomes produced, are explored. Most of these occur…
Descriptors: Program Evaluation, Cost Effectiveness, Evaluation Criteria, Evaluation Problems
Rosch, David M.; Schwartz, Leslie M. – Journal of Leadership Education, 2009
As more institutions of higher education engage in the practice of leadership education, effective assessment of these efforts lags behind due to a variety of factors. Without an intentional assessment plan, leadership educators are liable to make one or more of several common errors in assessing their programs and activities. This article…
Descriptors: Leadership Training, Administrator Education, College Outcomes Assessment, Program Evaluation
Raymond, Mark R.; Neustel, Sandra; Anderson, Dan – Educational Measurement: Issues and Practice, 2009
Examinees who take high-stakes assessments are usually given an opportunity to repeat the test if they are unsuccessful on their initial attempt. To prevent examinees from obtaining unfair score increases by memorizing the content of specific test items, testing agencies usually assign a different test form to repeat examinees. The use of multiple…
Descriptors: Test Results, Test Items, Testing, Aptitude Tests
Wu, Margaret – Educational Measurement: Issues and Practice, 2010
In large-scale assessments, such as statewide testing programs, national sample-based assessments, and international comparative studies, there are many steps involved in the measurement and reporting of student achievement, and each step introduces potential sources of inaccuracy. It is of interest to identify the source and magnitude of…
Descriptors: Testing Programs, Educational Assessment, Measures (Individuals), Program Effectiveness
Gugiu, P. Cristian – Journal of MultiDisciplinary Evaluation, 2007
The constraints of conducting evaluations in real-world settings often necessitate the implementation of less-than-ideal designs. Unfortunately, the standard method for estimating the precision of a result (i.e., confidence intervals [CIs]) cannot be used for evaluative conclusions that are derived from multiple indicators, measures, and data…
Descriptors: Measurement, Evaluation Methods, Evaluation Problems, Error of Measurement
McCaffrey, Daniel F.; Lockwood, J. R.; Koretz, Daniel M.; Hamilton, Laura S. – RAND Corporation, 2003
Value-added modeling (VAM) to estimate school and teacher effects is currently of considerable interest to researchers and policymakers. Recent reports suggest that VAM demonstrates the importance of teachers as a source of variance in student outcomes. Policymakers see VAM as a possible component of education reform through improved teacher…
Descriptors: Educational Change, Accountability, Inferences, Models