Huang, Hung-Yu – Educational and Psychological Measurement, 2023
The forced-choice (FC) item formats used for noncognitive tests typically present a set of response options that measure different traits and instruct respondents to judge among these options according to their preferences, in order to control the response biases commonly observed in normative tests. Diagnostic classification models (DCMs)…
Descriptors: Test Items, Classification, Bayesian Statistics, Decision Making
Clauser, Jerome C.; Hambleton, Ronald K.; Baldwin, Peter – Educational and Psychological Measurement, 2017
The Angoff standard setting method relies on content experts to review exam items and make judgments about the performance of the minimally proficient examinee. Unfortunately, at times content experts may have gaps in their understanding of specific exam content. These gaps are particularly likely to occur when the content domain is broad and/or…
Descriptors: Scores, Item Analysis, Classification, Decision Making
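The Angoff procedure the abstract describes has a simple computational core: each panelist estimates, for every item, the probability that a minimally proficient examinee would answer it correctly, and the cut score is the sum of the item-level mean ratings. A minimal sketch, with illustrative panelist names and ratings (not data from the study):

```python
# Hypothetical Angoff ratings: one probability per item per panelist.
# All names and numbers are illustrative.
ratings = {
    "panelist_1": [0.60, 0.75, 0.40, 0.85],
    "panelist_2": [0.55, 0.70, 0.50, 0.80],
    "panelist_3": [0.65, 0.80, 0.45, 0.90],
}

num_items = len(next(iter(ratings.values())))

# Mean rating per item across panelists.
item_means = [
    sum(r[i] for r in ratings.values()) / len(ratings)
    for i in range(num_items)
]

# The Angoff cut score: the expected raw score of a
# minimally proficient examinee.
cut_score = sum(item_means)
print(round(cut_score, 2))  # prints 2.65
```

The study's point is that this average is only as good as the panelists' content knowledge: a rating made over a content gap feeds directly into the cut score.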
Hauser, Carl; Thum, Yeow Meng; He, Wei; Ma, Lingling – Educational and Psychological Measurement, 2015
When conducting item reviews, analysts evaluate an array of statistical and graphical information to assess the fit of a field test (FT) item to an item response theory model. The process can be tedious, particularly when the number of human reviews (HR) to be completed is large. Furthermore, such a process leads to decisions that are susceptible…
Descriptors: Test Items, Item Response Theory, Research Methodology, Decision Making
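The human review the abstract describes typically compares observed proportions correct against model predictions across ability groups. A minimal sketch of one such residual-based fit check for a single field-test item under the Rasch model, using simulated data (the difficulty values and group layout are assumptions, not from the study):

```python
import math
import random

random.seed(0)

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

b_true = 0.2      # generating difficulty (assumed)
b_fitted = 0.2    # difficulty estimate under review (assumed)

# Simulate responses for examinees grouped by ability level.
groups = [-2.0, -1.0, 0.0, 1.0, 2.0]
n_per_group = 500

chi_sq = 0.0
for theta in groups:
    p = rasch_p(theta, b_true)
    correct = sum(random.random() < p for _ in range(n_per_group))
    observed = correct / n_per_group
    expected = rasch_p(theta, b_fitted)
    # Standardized residual for this ability group.
    se = math.sqrt(expected * (1 - expected) / n_per_group)
    z = (observed - expected) / se
    chi_sq += z * z

# Under a well-fitting model, chi_sq behaves roughly like a chi-square
# statistic with len(groups) degrees of freedom.
print(round(chi_sq, 2))
```

Automating a statistic like this is exactly the kind of step that reduces the tedium, and the reviewer-to-reviewer inconsistency, that the abstract raises.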

De Corte, Wilfried – Educational and Psychological Measurement, 1998
An analytic procedure is presented that estimates the expected benefits of personnel classification decisions under the assumptions that the available criterion estimates are equicorrelated and equally valid, that the jobs have equal quotas, and that all jobs are equally important. The numerical method developed for the estimation is…
Descriptors: Classification, Criteria, Data Analysis, Decision Making

Alley, William E.; Darby, Melody M. – Educational and Psychological Measurement, 1995
Because many applications of personnel selection and classification technology may require more than 10 assignment categories, a Monte Carlo simulation was conducted to extend the Brogden tables for estimating classification and selection benefits from 10 to 500 categories. The extended tables have greater utility. (SLD)
Descriptors: Classification, Cost Effectiveness, Decision Making, Estimation (Mathematics)
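A Monte Carlo approach to this problem can be sketched directly. Under the simplest Brogden-style setup (uncorrelated, equally valid standard-score criterion estimates, no quota constraints), the classification benefit for k jobs reduces to the expected maximum of k standard normal draws, which grows with k; the parameters below are illustrative, not the study's:

```python
import random

random.seed(1)

def mean_max_of_k(k, n_trials=20000):
    """Monte Carlo estimate of E[max of k independent N(0,1) draws]:
    the expected predicted-performance gain from assigning each person
    to the best of k jobs, versus random assignment."""
    total = 0.0
    for _ in range(n_trials):
        total += max(random.gauss(0.0, 1.0) for _ in range(k))
    return total / n_trials

# Benefit rises with the number of assignment categories,
# which is why tables beyond k = 10 are worth computing.
for k in (2, 10, 100):
    print(k, round(mean_max_of_k(k), 2))
```

The theoretical values are about 0.56 for k = 2 and 1.54 for k = 10; simulation makes the same quantity tractable for the larger k (up to 500) that the extended tables cover.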

Behuniak, Peter, Jr.; And Others – Educational and Psychological Measurement, 1982
The validity of using proficiency test scores to make specific educational decisions is addressed. Ninth grade mathematics proficiency test scores were analyzed in both continuous and categorized form. The nature of the decisions to be made was examined as a factor in determining the categorization technique. (Author/PN)
Descriptors: Academic Standards, Classification, Competency Based Education, Cutting Scores
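The continuous-versus-categorized contrast the abstract examines can be made concrete: the same scores support different decisions depending on whether they are kept continuous or collapsed into categories at cut scores. A minimal sketch with illustrative cut scores and data (not the study's):

```python
# Hypothetical proficiency cut scores; labels and ranges are illustrative.
scores = [38, 45, 52, 60, 67, 71, 80, 88]
cuts = [
    (0, 49, "below proficient"),
    (50, 69, "proficient"),
    (70, 100, "advanced"),
]

def categorize(score):
    """Collapse a continuous score into a decision category."""
    for low, high, label in cuts:
        if low <= score <= high:
            return label
    raise ValueError(f"score {score} outside all categories")

categories = [categorize(s) for s in scores]
# A 45 and a 38 become indistinguishable once categorized,
# though the continuous scores still separate them.
print(list(zip(scores, categories)))
```

The study's argument is that the choice of categorization technique should follow from the decision being made, since collapsing discards exactly the within-category score distinctions shown above.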