Showing 1 to 15 of 24 results
Peer reviewed
Selcuk Acar; Denis Dumas; Peter Organisciak; Kelly Berthiaume – Grantee Submission, 2024
Creativity is highly valued in both education and the workforce, but assessing and developing creativity can be difficult without psychometrically robust and affordable tools. The open-ended nature of creativity assessments has made them difficult to score, expensive, often imprecise, and therefore impractical for school- or district-wide use. To…
Descriptors: Thinking Skills, Elementary School Students, Artificial Intelligence, Measurement Techniques
Peer reviewed
Attali, Yigal – Educational Measurement: Issues and Practice, 2019
Rater training is an important part of developing and conducting large-scale constructed-response assessments. As part of this process, candidate raters have to pass a certification test to confirm that they are able to score consistently and accurately before they begin scoring operationally. Moreover, many assessment programs require raters to…
Descriptors: Evaluators, Certification, High Stakes Tests, Scoring
Peer reviewed
Jølle, Lennart; Skar, Gustaf B. – Scandinavian Journal of Educational Research, 2020
This paper reports findings from a project called "The National Panel of Raters" (NPR) that took place within a writing test programme in Norway (2010-2016). A recent research project found individual differences between the raters in the NPR. This paper reports results from an exploratory follow-up study in which 63 NPR members were…
Descriptors: Foreign Countries, Validity, Scoring, Program Descriptions
Peer reviewed
Treiber, Jeanette; Kipke, Robin; Satterlund, Travis; Cassady, Diana – International Journal of Training and Development, 2013
Nearly all private, government and non-governmental organizations that receive government funding to run social or health promotion programs in the United States are required to conduct program evaluations and to report findings to the funding agency. Reports are usually due at the end of a funding cycle and they may or may not have an influence…
Descriptors: Public Agencies, State Government, Financial Support, State Aid
Bridgeford, Nancy J., Comp. – 1981
This monograph is intended to assist educators faced with the task of selecting and developing sound writing assessment procedures. Provided are: (1) practical background information on direct writing assessment procedures, alternative approaches, time requirements, advantages and disadvantages of each assessment approach, and (2) identification…
Descriptors: Consultants, Evaluation Methods, Evaluators, Scoring
Myford, Carol M.; And Others – 1996
Developing scoring rubrics to evaluate student work was studied, concentrating on the use of intermediate points in rating scales. How scales that allow for intermediate points between defined categories should be constructed and used was explored. In the recent National Assessment of Educational Progress (NAEP) visual arts field test, researchers…
Descriptors: Evaluators, Rating Scales, Scoring, Scoring Rubrics
Gray, James; And Others – 1982
Five studies of holistic writing assessment procedures examined interactive relationships of the participants, processes, and products of writing assessment episodes. The first study examined practices in designing writing test prompts. The second study investigated the effects of variation in the specification of audience in a writing test prompt…
Descriptors: Data Collection, Evaluators, Holistic Evaluation, Longitudinal Studies
Peer reviewed
Brown, Anne – Language Testing, 1995
This article explores the effect of raters' background on assessments made in an occupation-specific oral language test, the Japanese Language Test for Tour Guides. Assessments of 51 test candidates made by 33 assessors were compared in order to determine what effect background has on assessments made on both linguistic and "real-world"…
Descriptors: Comparative Analysis, Evaluators, Japanese, Language Tests
Daiker, Donald A.; Grogan, Nedra – 1985
The role of sample papers (i.e., anchor papers, prototypes, range-finders) in holistic evaluation of writing is discussed. When, where, and how many sample papers are to be selected, and who should perform the selection are covered. The process of sample selection should proceed as follows: (1) a general reading of papers by committee members to…
Descriptors: Advanced Placement, Essay Tests, Evaluators, Higher Education
Peer reviewed
Akpe, C. S. – Studies in Educational Evaluation, 1994
A teacher appraisal instrument was developed and validated for a system of teacher evaluation in Rivers State (Nigeria). The developed Likert scale for teacher scoring was tested with 410 teachers evaluated by 16 headteacher evaluators. Initial findings suggest areas in which the appraisal process and instrument can be improved. (SLD)
Descriptors: Administrators, Case Studies, Educational Practices, Elementary Education
Bolton, Dale L. – 1990
Theory and implications for methods of assessing administrative performance in simulated exercises are presented. The rationale is given for the following: (1) developing simulated exercises; (2) measuring behaviors exhibited during the exercises; (3) training evaluators; (4) combining information across exercises; and (5) storing and retrieving…
Descriptors: Administrator Evaluation, Concept Formation, Educational Assessment, Elementary Secondary Education
Schafer, William D. – 2000
The Department of Measurement, Statistics, and Evaluation (EDMS) at the University of Maryland is working to develop Master's degree programs that are oriented around developing assessment professionals for work in applied settings. Two fundamentally different sets of experiences are being developed: (1) assessment development, administration, and…
Descriptors: Data Analysis, Educational Assessment, Educational Testing, Evaluation Methods
Friedman, Charles B. – 1989
In most judgmental methods for setting performance standards, content experts evaluate test items based on a conceptualization of a borderline examinee or a group of examinees. An alternative method is proposed by which judges evaluate items based on the degree of criticality regardless of content area. This classification of items serves to link…
Descriptors: Cutting Scores, Evaluators, Health Occupations, Higher Education
Peer reviewed
Clauser, Brian E.; Ross, Linette P.; Clyman, Stephen G.; Rose, Kathie M.; Margolis, Melissa J.; Nungester, Ronald J.; Piemme, Thomas E.; Chang, Lucy; El-Bayoumi, Gigi; Malakoff, Gary L.; Pincetl, Pierre S. – Applied Measurement in Education, 1997
Describes an automated scoring algorithm for a computer-based simulation examination of physicians' patient-management skills. Results with 280 medical students show that scores produced using this algorithm are highly correlated with actual clinician ratings. Scores were also effective in discriminating between case performance judged passing or…
Descriptors: Algorithms, Computer Assisted Testing, Computer Simulation, Evaluators
Kaiser, Paul D.; Brull, Harry – 1994
The design, administration, scoring, and results of the 1993 New York State Correctional Captain Examination are described. The examination was administered to 405 candidates. As in previous Sergeant and Lieutenant examinations, candidates also completed latent image written simulation problems and open/closed book multiple choice test components.…
Descriptors: Competitive Selection, Correctional Rehabilitation, Decision Making, Educational Innovation