Showing 1 to 15 of 35 results
Peer reviewed
Dogan, Nuri; Hambleton, Ronald K.; Yurtcu, Meltem; Yavuz, Sinan – Cypriot Journal of Educational Sciences, 2018
Validity is one of the psychometric properties of achievement tests. One way to examine validity is through item bias studies, which are based on differential item functioning (DIF) analyses and field experts' opinions. In this study, field experts were asked to estimate the DIF levels of the items to compare the estimations…
Descriptors: Test Bias, Comparative Analysis, Predictor Variables, Statistical Analysis
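The abstract above mentions DIF analyses without specifying a method. As an illustration only (not the authors' procedure), one widely used statistical index for flagging item bias is the Mantel-Haenszel common odds ratio, sketched here:

```python
# Minimal sketch of the Mantel-Haenszel common odds ratio, a standard
# DIF index. This is illustrative; the paper's own method is not given
# in the abstract.

def mantel_haenszel_alpha(strata):
    """Common odds ratio across score strata.

    Each stratum is a tuple (A, B, C, D):
      A = reference group correct,  B = reference group incorrect,
      C = focal group correct,      D = focal group incorrect.
    Values near 1.0 suggest little DIF; values far from 1.0 suggest
    the item favors one group.
    """
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den
```

For example, two strata in which the reference group answers correctly more often than ability-matched focal examinees yield a ratio above 1.0, flagging the item for expert review.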
Peer reviewed
Clauser, Jerome C.; Clauser, Brian E.; Hambleton, Ronald K. – Applied Measurement in Education, 2014
The purpose of the present study was to extend past work with the Angoff method for setting standards by examining judgments at the judge level rather than the panel level. The focus was on investigating the relationship between observed Angoff standard setting judgments and empirical conditional probabilities. This relationship has been used as a…
Descriptors: Standard Setting (Scoring), Validity, Reliability, Correlation
Peer reviewed
Hambleton, Ronald K.; Jaeger, Richard M.; Plake, Barbara S.; Mills, Craig – Applied Psychological Measurement, 2000
Reviews a number of promising methods for setting performance standards and discusses their strengths and weaknesses. Outlines some areas for future research that address the role of feedback to panelists and validation efforts for performance standards among other topics. (SLD)
Descriptors: Educational Assessment, Performance Based Assessment, Scoring, Standards
Peer reviewed
Smith, I. Leon; Hambleton, Ronald K. – Educational Measurement: Issues and Practice, 1990
Implementing measurement specialists' ideas about content validity with licensure examinations and the problem of court litigation are discussed. Validity issues surfacing when sponsors of national licensure examinations conduct validity investigations are considered. Issues include local versus national focus on content validity, job analysis,…
Descriptors: Classification, Content Validity, Court Litigation, Job Analysis
Hambleton, Ronald K.; Traub, Ross E. – 1970
The purpose of this study was to determine the efficiency of the estimates of ability provided by the one-parameter logistic model as compared to the estimates provided by the more general two- and three-parameter models. Several tests were simulated with item parameters meeting the assumptions of either the two- or three-parameter model. For each…
Descriptors: Ability Identification, Data Collection, Models, Scoring
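The one-, two-, and three-parameter logistic models compared in the study above differ only in which item parameters are free. A minimal sketch, using conventional IRT notation (theta for ability, b for difficulty, a for discrimination, c for the guessing asymptote; names are standard, not taken from the paper):

```python
import math

# Sketch of the logistic item response models the 1970 study compares.
# The 3PL is the general form; the 1PL is the special case with
# discrimination a = 1 and guessing c = 0.

def logistic_3pl(theta, b, a=1.0, c=0.0):
    """P(correct | ability theta) for an item with difficulty b,
    discrimination a, and lower asymptote (guessing) c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def logistic_1pl(theta, b):
    """One-parameter (Rasch-style) special case: a = 1, c = 0."""
    return logistic_3pl(theta, b)
```

An examinee whose ability equals the item's difficulty has probability 0.5 of success under the 1PL; under a 3PL with c = 0.2 that probability rises to 0.6, since guessing lifts the floor.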
Hambleton, Ronald K.; Bourque, Mary Lyn – 1991
The National Assessment of Educational Progress (NAEP) is a congressionally mandated survey of educational achievement of American students in a variety of curriculum areas and of changes in that achievement over time. The National Assessment Governing Board (NAGB) has established new standards for reporting the results that determined three…
Descriptors: Achievement Rating, Construct Validity, Elementary Secondary Education, Grade 12
Hambleton, Ronald K.; And Others – Journal of Educational Measurement, 1970
Descriptors: Comparative Analysis, Evaluation Methods, Multiple Choice Tests, Test Reliability
Peer reviewed
Hambleton, Ronald K. – Applied Psychological Measurement, 2000
Introduces the articles of this theme issue focusing on performance assessment methodology. Papers address: (1) merging item formats; (2) scoring models; (3) equating and linking; (4) generalizability theory; (5) standard setting methods; and (6) validity issues and methods. (SLD)
Descriptors: Equated Scores, Evaluation Methods, Generalizability Theory, Performance Based Assessment
Zenisky, April L.; Hambleton, Ronald K.; Sireci, Stephen G. – 2001
Measurement specialists routinely assume examinee responses to test items are independent of one another. However, previous research has shown that many contemporary tests contain item dependencies and not accounting for these dependencies leads to misleading estimates of item, test, and ability parameters. In this study, methods for detecting…
Descriptors: Ability, College Applicants, College Entrance Examinations, Higher Education
Hambleton, Ronald K.; Bollwark, John – 1991
The validity of results from international assessments depends on the correctness of the test translations. If the tests presented in one language are more or less difficult because of the manner in which they are translated, the validity of any interpretation of the results can be questioned. Many test translation methods exist in the literature,…
Descriptors: Cultural Differences, Educational Assessment, English, Foreign Countries
Peer reviewed
Hambleton, Ronald K. – Educational and Psychological Measurement, 1987
This paper presents an algorithm for determining the number of items to measure each objective in a criterion-referenced test when testing time is fixed and when the objectives vary in their levels of importance, reliability, and validity. Results of four special applications of the algorithm are presented. (BS)
Descriptors: Algorithms, Behavioral Objectives, Criterion Referenced Tests, Test Construction
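Hambleton's 1987 algorithm weighs importance, reliability, and validity together; as a simpler illustration of the allocation problem it addresses, here is a proportional split of a fixed test length across objectives by importance weight alone, using largest-remainder rounding (this is not the paper's algorithm):

```python
# Hedged sketch: allocate a fixed number of items across objectives in
# proportion to importance weights, rounding with the largest-remainder
# rule so the counts sum exactly to the test length.

def allocate_items(total_items, weights):
    """Return per-objective item counts summing to total_items."""
    total_weight = sum(weights)
    share = [total_items * w / total_weight for w in weights]
    counts = [int(s) for s in share]          # floor of each share
    remainders = [s - c for s, c in zip(share, counts)]
    # give the leftover items to the largest fractional remainders
    leftover = total_items - sum(counts)
    for i in sorted(range(len(weights)), key=lambda i: -remainders[i])[:leftover]:
        counts[i] += 1
    return counts
```

With 10 items and importance weights 3:2:1, the objectives receive 5, 3, and 2 items respectively; the fractional remainder rule ensures no item is lost to rounding.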
Hambleton, Ronald K.; Patsula, Liane – 2000
Whatever the purpose of test adaptation, questions arise concerning the validity of inferences from such adapted tests. This paper considers several advantages and disadvantages of adapting tests from one language and culture to another. The paper also reviews several sources of error or invalidity associated with adapting tests and suggests ways…
Descriptors: Cross Cultural Studies, Cultural Awareness, Quality of Life, Test Construction
Rovinelli, Richard J.; Hambleton, Ronald K. – 1976
Essential for an effective criterion-referenced testing program is a set of test items that are "valid" indicators of the objectives they have been designed to measure. Unfortunately, the complex matter of assessing item validity has received only limited attention from educational measurement specialists. One promising approach to the item…
Descriptors: Content Analysis, Criterion Referenced Tests, Data Collection, Evaluation Methods
Peer reviewed
Traub, Ross E.; Hambleton, Ronald K. – Educational and Psychological Measurement, 1973
Descriptors: Grade 8, Guessing (Tests), Multiple Choice Tests, Pacing
Peer reviewed
Hambleton, Ronald K.; And Others – Journal of Educational Measurement, 1983
A new method was developed to assist in the selection of a test length by utilizing computer simulation procedures and item response theory. A demonstration of the method presents results which address the influences of item pool heterogeneity matched to the objectives of interest and the method of item selection. (Author/PN)
Descriptors: Computer Programs, Criterion Referenced Tests, Item Banks, Latent Trait Theory