Showing 1 to 15 of 28 results
Peer reviewed
Harrison, George M. – Journal of Educational Measurement, 2015
The credibility of standard-setting cut scores depends in part on two sources of consistency evidence: intrajudge and interjudge consistency. Although intrajudge consistency feedback has often been provided to Angoff judges in practice, more evidence is needed to determine whether it achieves its intended effect. In this randomized experiment with…
Descriptors: Interrater Reliability, Standard Setting (Scoring), Cutting Scores, Feedback (Response)
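As a rough illustration of the intrajudge consistency evidence discussed in the Harrison entry above, one common operationalization (an assumption here, not necessarily the feedback used in that study) correlates a judge's Angoff ratings with empirical item p-values; all data below are hypothetical.

```python
# Illustrative only: intrajudge consistency quantified as the correlation
# between a judge's Angoff ratings and empirical item difficulties.
from statistics import correlation  # Python 3.10+

# Hypothetical Angoff ratings (judged probability that a borderline examinee
# answers each item correctly) and empirical p-values for the same items.
angoff_ratings = [0.80, 0.65, 0.40, 0.90, 0.55, 0.70]
empirical_p_values = [0.85, 0.60, 0.35, 0.92, 0.50, 0.75]

# A judge whose ratings track empirical difficulty shows high intrajudge
# consistency; feedback in a standard-setting study might report this value.
intrajudge_consistency = correlation(angoff_ratings, empirical_p_values)
print(f"Intrajudge consistency (Pearson r): {intrajudge_consistency:.2f}")
```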
Peer reviewed
Clauser, Jerome C.; Margolis, Melissa J.; Clauser, Brian E. – Journal of Educational Measurement, 2014
Evidence of stable standard setting results over panels or occasions is an important part of the validity argument for an established cut score. Unfortunately, due to the high cost of convening multiple panels of content experts, standards often are based on the recommendation from a single panel of judges. This approach implicitly assumes that…
Descriptors: Standard Setting (Scoring), Generalizability Theory, Replication (Evaluation), Cutting Scores
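A minimal sketch of the kind of generalizability reasoning in the Clauser, Margolis, and Clauser entry above; the one-facet breakdown and all panel data are hypothetical and are not the authors' analysis.

```python
# Illustrative sketch: judge-level cut score recommendations from replicate
# panels, decomposed to ask how much a standard would vary across panels.
from statistics import mean

panels = [            # recommended cut scores, one inner list per panel
    [62.0, 65.5, 60.0, 64.0],
    [66.0, 63.5, 67.0, 65.0],
    [61.0, 59.5, 63.0, 62.5],
]
p, n = len(panels), len(panels[0])        # panels, judges per panel (balanced)
panel_means = [mean(row) for row in panels]
grand_mean = mean(panel_means)

ms_within = sum((x - m) ** 2 for row, m in zip(panels, panel_means) for x in row) / (p * (n - 1))
ms_between = n * sum((m - grand_mean) ** 2 for m in panel_means) / (p - 1)

var_judge = ms_within                                # judge-within-panel variance
var_panel = max(0.0, (ms_between - ms_within) / n)   # panel-to-panel variance

# Standard error of a cut score based on a single panel of n judges.
se_single_panel = (var_panel + var_judge / n) ** 0.5
print(f"Grand-mean cut: {grand_mean:.1f}, SE of a single-panel standard: {se_single_panel:.2f}")
```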
Peer reviewed
Van Nijlen, Daniel; Janssen, Rianne – Journal of Educational Measurement, 2008
Essential for the validity of the judgments in a standard-setting study is that they follow the implicit task assumptions. In the Angoff method, judgments are assumed to be inversely related to the difficulty of the items; contrasting-groups judgments are assumed to be positively related to the ability of the students. In the present study,…
Descriptors: Standard Setting (Scoring), Validity, Regression (Statistics)
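A small illustration of the regression check implied by the Van Nijlen and Janssen entry above, assuming hypothetical data: Angoff ratings should fall as item difficulty rises, while contrasting-groups judgments should rise with student ability.

```python
# Illustrative check of the implicit task assumptions; all data hypothetical.
from statistics import linear_regression  # Python 3.10+

item_difficulty = [-1.5, -0.5, 0.0, 0.8, 1.6]       # harder items -> larger values
angoff_rating   = [0.90, 0.75, 0.60, 0.45, 0.30]     # judged P(correct) for a borderline examinee

student_ability = [-2.0, -1.0, 0.0, 1.0, 2.0]        # ability estimates
cg_judgment     = [0.05, 0.20, 0.55, 0.80, 0.95]     # judged P(master) in contrasting groups

angoff_fit = linear_regression(item_difficulty, angoff_rating)
cg_fit = linear_regression(student_ability, cg_judgment)

# A negative Angoff slope and a positive contrasting-groups slope are
# consistent with the task assumptions described in the abstract above.
print(f"Angoff slope vs. difficulty: {angoff_fit.slope:.2f}")
print(f"Contrasting-groups slope vs. ability: {cg_fit.slope:.2f}")
```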
Peer reviewed
Clauser, Brian E.; Mee, Janet; Baldwin, Su G.; Margolis, Melissa J.; Dillon, Gerard F. – Journal of Educational Measurement, 2009
Although the Angoff procedure is among the most widely used standard setting procedures for tests comprising multiple-choice items, research has shown that subject matter experts have considerable difficulty accurately making the required judgments in the absence of examinee performance data. Some authors have viewed the need to provide…
Descriptors: Standard Setting (Scoring), Program Effectiveness, Expertise, Health Personnel
Peer reviewed
Norcini, John J.; And Others – Journal of Educational Measurement, 1988
Two studies of medical certification examinations were undertaken to assess standard setting using Angoff's Method. Results indicate that (1) specialization within broad content areas does not affect an expert's estimates of the performance of the borderline group; and (2) performance data should be provided during the standard-setting process.…
Descriptors: Certification, Cutting Scores, Licensing Examinations (Professions), Medicine
Peer reviewed
Norcini, John J.; And Others – Journal of Educational Measurement, 1988
Multiple matrix sampling is applied to a variation of Angoff's standard setting method. Thirty-six experts (internists) and 190 items were divided into five groups, and borderline examinee performance was estimated. There was some variability in the cutting scores produced by the individual groups, but various components were well estimated. (SLD)
Descriptors: Cutting Scores, Minimum Competency Testing, Physicians, Sampling
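A toy sketch of matrix-sampled Angoff ratings along the lines of the Norcini entry above; the group sizes and ratings are hypothetical and far smaller than the 36 experts and 190 items in the study.

```python
# Illustrative sketch: judges and items split into groups, each group rates
# only its own block of items, and the per-group cut scores are combined.
from statistics import mean

# ratings[g][j][i]: judge j's rating of item i within group g
ratings = [
    [[0.70, 0.50, 0.80], [0.60, 0.55, 0.75]],   # group 1: 2 judges x 3 items
    [[0.40, 0.65, 0.90], [0.50, 0.60, 0.85]],   # group 2
    [[0.80, 0.45, 0.70], [0.75, 0.50, 0.65]],   # group 3
]

# Each group's cut (as a proportion of its item block) is the mean of its
# judges' summed ratings divided by the number of items in the block.
group_cuts = []
for group in ratings:
    n_items = len(group[0])
    judge_totals = [sum(judge) for judge in group]
    group_cuts.append(mean(judge_totals) / n_items)

overall_cut = mean(group_cuts)   # pooled estimate of borderline performance
print("Per-group cut proportions:", [round(c, 3) for c in group_cuts])
print(f"Pooled cut proportion: {overall_cut:.3f}")
```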
Peer reviewed
Plake, Barbara S.; Impara, James C. – Journal of Educational Measurement, 1997
Two studies of variations of the Angoff method (W. Angoff, 1971), each involving nine elementary school teachers, compared a yes/no estimation with a proportion-correct estimation for setting cut scores. Both methods yielded essentially equal cut scores, but judges found the yes/no method easier to implement. (SLD)
Descriptors: Cutting Scores, Elementary Education, Elementary School Teachers, Estimation (Mathematics)
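A minimal sketch of how the two rating formats compared by Plake and Impara translate into cut scores, using hypothetical ratings from a single judge on a 10-item test.

```python
# Illustrative only: both formats imply a cut score equal to the sum of the
# judge's item ratings; the ratings below are hypothetical.
proportion_ratings = [0.9, 0.7, 0.4, 0.8, 0.6, 0.3, 0.85, 0.5, 0.75, 0.65]
yes_no_ratings     = [1,   1,   0,   1,   1,   0,   1,    1,   1,    1]   # 1 = borderline examinee would answer correctly

cut_proportion = sum(proportion_ratings)   # expected number correct
cut_yes_no = sum(yes_no_ratings)           # count of "yes" items
print(f"Proportion-correct cut: {cut_proportion:.1f} of 10")
print(f"Yes/no cut: {cut_yes_no} of 10")
```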
Peer reviewed
Kane, Michael T. – Journal of Educational Measurement, 1987
The use of item response theory models for analyzing the results of judgmental standard setting studies (the Angoff technique) for establishing minimum pass levels is discussed. A comparison of three methods indicates the traditional approach may not be best. A procedure based on generalizability theory is suggested. (GDC)
Descriptors: Comparative Analysis, Cutting Scores, Generalizability Theory, Latent Trait Theory
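A sketch of one IRT-based treatment of Angoff results, loosely in the spirit of the Kane entry above: under an assumed Rasch model, the raw cut score is mapped onto the ability scale by inverting the test characteristic curve. The item parameters and cut score are hypothetical.

```python
# Illustrative sketch: map a raw Angoff cut score to the Rasch ability scale.
import math

item_difficulties = [-1.2, -0.6, -0.1, 0.3, 0.7, 1.1, 1.5]   # Rasch b parameters
raw_cut = 4.2                                                 # Angoff cut (expected number correct)

def expected_score(theta, bs):
    """Test characteristic curve: expected number-correct at ability theta."""
    return sum(1.0 / (1.0 + math.exp(-(theta - b))) for b in bs)

# Bisection search for the theta whose expected score equals the raw cut.
lo, hi = -4.0, 4.0
for _ in range(60):
    mid = (lo + hi) / 2.0
    if expected_score(mid, item_difficulties) < raw_cut:
        lo = mid
    else:
        hi = mid
theta_cut = (lo + hi) / 2.0
print(f"Cut score on the ability scale: {theta_cut:.2f}")
```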
Peer reviewed
Mills, Craig N. – Journal of Educational Measurement, 1983
This study compares the results obtained using the Angoff, borderline group, and contrasting groups methods of determining performance standards. Congruent results were obtained from the Angoff and contrasting groups methods for several test forms. Borderline group standards were not similar to standards obtained with other methods. (Author/PN)
Descriptors: Comparative Analysis, Criterion Referenced Tests, Cutting Scores, Standard Setting (Scoring)
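A compact, hypothetical illustration of the three standard-setting methods compared in the Mills entry above; the contrasting-groups rule shown (midpoint of the two group means) is just one simple variant.

```python
# Illustrative only: three cut scores from hypothetical ratings and scores.
from statistics import mean, median

# Angoff: cut = mean over judges of the sum of item ratings.
angoff_ratings = [
    [0.70, 0.60, 0.80, 0.50, 0.65],    # judge 1, 5 items
    [0.75, 0.55, 0.85, 0.45, 0.60],    # judge 2
]
angoff_cut = mean(sum(judge) for judge in angoff_ratings)

# Borderline group: cut = median test score of examinees judged borderline.
borderline_scores = [2, 3, 3, 4, 3, 2]
borderline_cut = median(borderline_scores)

# Contrasting groups: cut = midpoint between non-master and master means
# (other variants pick the score minimizing misclassification).
nonmaster_scores = [1, 2, 2, 3, 1]
master_scores = [4, 4, 5, 3, 5]
contrasting_cut = (mean(nonmaster_scores) + mean(master_scores)) / 2

print(f"Angoff: {angoff_cut:.2f}  Borderline group: {borderline_cut}  Contrasting groups: {contrasting_cut:.2f}")
```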
Peer reviewed
Livingston, Samuel A. – Journal of Educational Measurement, 1982
To set a standard on the "beardedness" test (see TM 507 062), the probability that a student with a specific score will be judged as bearded must be estimated for each test score. To obtain an unbiased estimate of that probability, a representative sample of students at each test score level must be chosen. (BW)
Descriptors: Cutting Scores, Evaluation Methods, Graduation Requirements, Minimum Competency Testing
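A minimal sketch of the estimate Livingston describes, assuming hypothetical judgment data: at each score level, the proportion of a representative sample judged "bearded" estimates the required probability.

```python
# Illustrative only: per-score-level estimates of P(judged bearded).
judgments_by_score = {
    # score: 1 = judged bearded, 0 = not, for sampled students at that score
    10: [0, 0, 0, 1],
    11: [0, 1, 0, 0, 1],
    12: [1, 0, 1, 1],
    13: [1, 1, 1, 0, 1],
    14: [1, 1, 1, 1],
}

for score in sorted(judgments_by_score):
    sample = judgments_by_score[score]
    p_bearded = sum(sample) / len(sample)   # unbiased only if the sample at this level is representative
    print(f"score {score}: estimated P(judged bearded) = {p_bearded:.2f}")
# A cut score could then be placed where the estimated probability crosses 0.5.
```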
Peer reviewed
Rowley, Glenn L. – Journal of Educational Measurement, 1982
Livingston's (TM 507 218) response to Rowley (TM 507 062) is compared with the original Zieky and Livingston formulation of the Contrasting Groups Method of setting standards. (BW)
Descriptors: Cutting Scores, Evaluation Methods, Graduation Requirements, Minimum Competency Testing
Peer reviewed
Beuk, Cees H. – Journal of Educational Measurement, 1984
A systematic method for compromise between absolute and relative examination standards is proposed. The passing score is assumed to be related to the expected pass rate through a simple linear function. The results define a function relating the percentage of successful candidates to the specified passing score. (Author/DWH)
Descriptors: Achievement Tests, Cutting Scores, Foreign Countries, Mathematical Models
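A sketch of one common formulation of the Beuk compromise described above (all judgments and scores are hypothetical): each judge supplies an absolute passing score and an expected pass rate, and the compromise cut is where a line through the mean judgments, with slope set by the ratio of the two standard deviations, meets the observed pass-rate function.

```python
# Illustrative sketch of a Beuk-style compromise; data are hypothetical.
from statistics import mean, stdev

judge_cuts = [60, 64, 62, 66, 63]                    # judged passing scores (absolute standard)
judge_pass_rates = [0.80, 0.70, 0.75, 0.65, 0.72]    # expected pass rates (relative standard)
examinee_scores = [45, 52, 55, 58, 60, 61, 63, 64, 66, 68, 70, 73, 75, 80, 85]

k_bar, p_bar = mean(judge_cuts), mean(judge_pass_rates)
slope = -stdev(judge_pass_rates) / stdev(judge_cuts)   # trade-off between the two standards

def pass_rate(cut):
    """Observed proportion of examinees at or above the cut."""
    return sum(s >= cut for s in examinee_scores) / len(examinee_scores)

# Pick the candidate cut where the observed pass rate is closest to the line
# through (k_bar, p_bar) with the computed slope.
candidates = range(min(examinee_scores), max(examinee_scores) + 1)
beuk_cut = min(candidates, key=lambda k: abs(pass_rate(k) - (p_bar + slope * (k - k_bar))))
print(f"Compromise cut score: {beuk_cut}")
```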
Peer reviewed
Gross, Leon J. – Journal of Educational Measurement, 1982
In response to Glass's argument (EJ 198 842) that a lack of interrater reliability is an inherent deficiency in the Nedelsky technique, poor rater training and the absence of a group decision procedure are presented as the actual standard-setting problems. (CM)
Descriptors: Academic Standards, Criterion Referenced Tests, Cutting Scores, Evaluation Criteria
Peer reviewed
Cross, Lawrence H.; And Others – Journal of Educational Measurement, 1984
Minimum standards were established for the National Teacher Examinations (NTE) by teacher educators instructed in the use of the Angoff, Nedelsky, or Jaeger procedures. The anticipated failure rates, the psychometric characteristics of the ratings, and other factors suggest the Angoff procedure yields the most defensible standards for the NTE area…
Descriptors: Analysis of Variance, Cutting Scores, Evaluation Methods, Occupational Tests
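Since the Nedelsky procedure figures in the Cross et al. comparison above, here is a minimal, hypothetical sketch of how a single judge's Nedelsky standard is computed; Angoff aggregation is shown in earlier sketches.

```python
# Illustrative only: for each multiple-choice item, the judge eliminates the
# distractors a borderline examinee could rule out, and the item's contribution
# to the cut score is 1 / (number of options remaining).
options_per_item = [4, 4, 5, 4, 5]          # options on each item
eliminated_by_judge = [2, 1, 3, 0, 2]       # distractors judged implausible to a borderline examinee

item_expectations = [
    1.0 / (total - eliminated)
    for total, eliminated in zip(options_per_item, eliminated_by_judge)
]
nedelsky_cut = sum(item_expectations)       # judge's expected score for a borderline examinee
print(f"Nedelsky cut score for this judge: {nedelsky_cut:.2f} of {len(options_per_item)}")
```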
Peer reviewed
Van der Linden, Wim J. – Journal of Educational Measurement, 1982
A neglected aspect of standard setting is explored: the possibility that Angoff or Nedelsky judges specify inconsistent probabilities (e.g., low probabilities for easy items but high probabilities for hard items). A latent trait method is proposed to estimate such misspecifications, and an index of consistency is defined. (Author/PN)
Descriptors: Cutting Scores, Latent Trait Theory, Mastery Tests, Mathematical Models
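A sketch in the spirit of the latent trait consistency check van der Linden proposes, though not his exact index: each judged probability is converted, under an assumed Rasch model, to the ability level it implies, and a consistent judge implies roughly the same ability across items. All numbers are hypothetical.

```python
# Illustrative only: spread of implied abilities as a rough consistency check.
import math
from statistics import mean, pstdev

item_difficulties = [-1.0, -0.3, 0.2, 0.9, 1.4]   # Rasch b parameters
judge_probs = [0.85, 0.70, 0.60, 0.40, 0.35]      # judged P(correct) for a borderline examinee

# Invert the Rasch item characteristic curve: theta = b + ln(p / (1 - p)).
implied_thetas = [b + math.log(p / (1 - p)) for b, p in zip(item_difficulties, judge_probs)]

# A small spread in implied abilities suggests internally consistent judgments.
print("Implied abilities per item:", [round(t, 2) for t in implied_thetas])
print(f"Mean implied ability: {mean(implied_thetas):.2f}, spread (SD): {pstdev(implied_thetas):.2f}")
```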
Pages: 1 | 2