Showing 811 to 825 of 1,159 results
Peer reviewed
Mehrens, William A. – Educational Measurement: Issues and Practice, 1986
The Presidential Address at the 1986 National Council on Measurement in Education Annual Meeting argues that measurement specialists have tended to set unrealistic aspirations for the role tests play. The conjunctive decision making model is discussed and the use of data in the conjunctive and compensatory decision making models is examined. (JAZ)
Descriptors: Cutting Scores, Decision Making, Educational Testing, Measurement Objectives
Peer reviewed
Madaus, George F. – Educational Measurement: Issues and Practice, 1986
This reply to William A. Mehrens argues that test validity is the central issue in discussing the appropriate role of tests. It states that the procedures used to establish the validity of tests are inadequate because they depend primarily on content validity and not on construct and criterion validity. (JAZ)
Descriptors: Concurrent Validity, Construct Validity, Cutting Scores, Decision Making
Peer reviewed
Mehrens, William A. – Educational Measurement: Issues and Practice, 1986
The President of the National Council on Measurement in Education replies to his critics. He argues that the concept of measurement error should not be used to make cut-scores more valid and that grade point averages have not been demonstrated to be valid indicators of teachers' subject matter competence. (LMO)
Descriptors: Cutting Scores, Grade Point Average, Licensing Examinations (Professions), Measurement Objectives
Peer reviewed
Murphy, Kevin R.; And Others – Journal of Educational Psychology, 1984
Using 45 undergraduate evaluations of videotaped lectures, this study examined the effects of the purposes of rating on measures of accuracy in observing teacher behavior and in evaluating teacher performance. Results suggest that the purpose affects the way raters process behavioral information without necessarily affecting the general level of…
Descriptors: Behavior Rating Scales, Decision Making, Evaluation Utilization, Higher Education
Choppin, Bruce – Evaluation in Education: An International Review Series, 1985
Using the analogy of temperature measurement, the Rasch model is presented with arguments for its adoption as the basic scaling technique for achievement measures. Three extensions of the Rasch model for more complex testing are developed. Test development for the British national assessment program and the promise of item banking are also…
Descriptors: Academic Achievement, Achievement Tests, Educational Assessment, Item Banks
Peer reviewed
Weisheit, Ralph A. – Journal of Alcohol and Drug Education, 1983
Argues that the design of current alcohol and drug education programs precludes their having a substantial impact on adolescent alcohol or drug use. Suggests that evaluators consider only limited aspects of these programs, which leads to a narrow definition of success and restricts input into program development and modification. (LLL)
Descriptors: Adolescents, Alcohol Education, Drug Education, Evaluation Criteria
Scott, Elspeth S. – 2002
Reflection and evaluation are key to improving the effectiveness of the school library resource center (LRC). The idea of measuring success may seem initially daunting, or even threatening, and be seen as yet another call on already limited time, but we should not be put off: much of the information required is already there, either explicit or…
Descriptors: Elementary Secondary Education, Evaluation Criteria, Evaluation Methods, Learning Resources Centers
Stenner, A. Jackson – 1996
This paper shows how the concept of general objectivity can be used to improve behavioral science measurement, particularly as it applies to the Lexile Framework, a tool for objectively measuring reading comprehension. It begins with a dialogue between a physicist and a psychometrician that details some of the differences between physical science…
Descriptors: Behavioral Sciences, Computation, Evaluation Research, Higher Education
Peer reviewed
Rossi, Robert J.; Gilmartin, Kevin J. – Clearing House, 1979
This paper proposes using nontest indicators, along with academic tests, to assess school effects on youth development. Relationships between indicators and tests are explored and examples presented of nontest indicators of intellectual development, career development, and health and personal safety. Data sources for indicators are…
Descriptors: Achievement Rating, Career Development, Competence, Health
Sharpley, Chris – Measurement and Evaluation in Guidance, 1981
Explains and demonstrates time-series statistical analyses with a case example. Argues that graphs and nonparametric statistical analyses are not valid methods for evaluating behavior change due to counseling. Suggests that the use of time-series statistical analyses enables counselors to employ a reliable method of measuring change in counseling…
Descriptors: Accountability, Behavior Change, Counseling, Data Analysis
Peer reviewed
Schermerhorn, Gerry R.; Williams, Reed G. – Educational Evaluation and Policy Analysis, 1979
A study to compare the impact of responsive (case study) and preordinate (questionnaire) approaches to program evaluation was conducted among 29 professional personnel affiliated with Southern Illinois University School of Medicine. Respondents preferred the responsive approach to the preordinate method; however, the responsive approach was more…
Descriptors: Case Studies, Cost Effectiveness, Educational Assessment, Evaluation Criteria
Peer reviewed
Young, Richard; And Others – System, 1996
Describes the development of an English-as-a-Second-Language computer-adaptive test of reading comprehension. The article discusses the constraints that apply to Computer Adaptive Testing (CAT) and the advantages of CAT over conventional testing modalities. (29 references) (Author/CK)
Descriptors: Algorithms, College Students, Computer Assisted Testing, English (Second Language)
Sull, Theresa M. – Child Care Information Exchange, 2001
Discusses three steps for evaluating an early childhood education program: identifying the evaluation objective (formative or summative), determining criteria for making value judgments (effectiveness, efficiency, fairness, acceptability, and aesthetics), and gathering evidence (quantitative and qualitative). Discusses the use of assessment tools…
Descriptors: Day Care, Day Care Centers, Early Childhood Education, Evaluation Criteria
Peer reviewed
Direct link
Ball, Joanna; Pelton, Jennifer; Forehand, Rex; Long, Nicholas; Wallace, Scyatta A. – Journal of Child and Family Studies, 2004
We present an overview of the methodology employed in the Parents Matter! Program. Information on the following aspects of the program is presented: participant eligibility and recruitment; consenting procedures and administration of assessments; development and utilization of measures in the assessments; study design; intervention procedures;…
Descriptors: Research Methodology, Eligibility, Recruitment, Measurement Objectives
Peer reviewed
Direct link
Veugelers, Reinhilde – Measurement: Interdisciplinary Research and Perspectives, 2005
Since the use of bibliometric instruments has grown and will continue to grow in the future, the quality, availability, and accessibility of data on publications and citations are of paramount importance. But equally important is a correct use of the data. This means that an important task of the bibliometric field is to highlight not only what…
Descriptors: Scientific Research, Bibliometrics, Test Validity, Economics