Showing all 4 results
Peer reviewed
Liu, Ren; Liu, Haiyan; Shi, Dexin; Jiang, Zhehan – Educational and Psychological Measurement, 2022
Assessments with a large number of small, similar, or often repetitive tasks are used in educational, neurocognitive, and psychological contexts. For example, respondents may be asked to recognize numbers or letters from a large pool, and the number of correct answers is a count variable. In 1960, George Rasch developed the Rasch…
Descriptors: Classification, Models, Statistical Distributions, Scores
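The entry above refers to Rasch's 1960 Poisson counts model for count outcomes such as the number of correctly recognized items. As a rough illustration only (the parameterization, function, and argument names below are assumptions, not taken from the article), the log-likelihood of a persons-by-tasks count matrix might be sketched as:

```python
import numpy as np
from scipy.stats import poisson

def rasch_poisson_loglik(counts, theta, difficulty):
    """Log-likelihood of observed counts under a Rasch Poisson counts model.

    counts[i, j]: number of correct responses of person i on task j.
    One common parameterization sets the expected count for person i on
    task j to exp(theta_i - difficulty_j).
    """
    rate = np.exp(np.subtract.outer(theta, difficulty))  # persons x tasks
    return float(poisson.logpmf(counts, rate).sum())

# Illustrative usage with made-up data:
# counts = np.array([[12, 7], [20, 15]])
# rasch_poisson_loglik(counts, theta=np.array([0.5, 1.2]),
#                      difficulty=np.array([-1.0, 0.0]))
```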
Peer reviewed
Bramley, Tom – Research in Mathematics Education, 2017
This study compared models of assessment structure for achieving differentiation across the range of examinee attainment in the General Certificate of Secondary Education (GCSE) examination taken by 16-year-olds in England. The focus was on the "adjacent levels" model, where papers are targeted at three specific non-overlapping ranges of…
Descriptors: Foreign Countries, Mathematics Education, Student Certification, Student Evaluation
Peer reviewed
Rudner, Lawrence M. – Practical Assessment, Research & Evaluation, 2001
Provides and illustrates a method to compute the expected number of misclassifications of examinees using three-parameter item response theory and two state classifications (mastery or nonmastery). The method uses the standard error and the expected examinee ability distribution. (SLD)
Descriptors: Ability, Classification, Computation, Error of Measurement
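As a hedged sketch of the kind of computation the Rudner (2001) abstract describes, and not the article's exact procedure (the 3PL information formula, the normal approximation for ability estimates, and all names below are assumptions), an expected misclassification rate for a single mastery cut score might be computed like this:

```python
import numpy as np
from scipy.stats import norm

def three_pl_info(theta, a, b, c):
    """Fisher information of one 3PL item at ability theta (standard formula, D = 1.7)."""
    p = c + (1 - c) / (1 + np.exp(-1.7 * a * (theta - b)))
    return (1.7 * a) ** 2 * ((1 - p) / p) * ((p - c) / (1 - c)) ** 2

def expected_misclassification(a, b, c, theta_cut, mean=0.0, sd=1.0,
                               grid=np.linspace(-4, 4, 401)):
    """Expected proportion of misclassified examinees for a mastery/nonmastery decision.

    At each true ability, the ability estimate is treated as normal with SD equal
    to the conditional standard error (1 / sqrt of test information); the resulting
    misclassification probabilities are weighted by a normal ability distribution.
    """
    info = sum(three_pl_info(grid, ai, bi, ci) for ai, bi, ci in zip(a, b, c))
    se = 1.0 / np.sqrt(info)
    p_below_cut = norm.cdf((theta_cut - grid) / se)
    miscls = np.where(grid >= theta_cut, p_below_cut, 1.0 - p_below_cut)
    weights = norm.pdf(grid, mean, sd)
    return float(np.sum(miscls * weights) / np.sum(weights))
```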
Wang, Tianyou; And Others – 1996
M. J. Kolen, B. A. Hanson, and R. L. Brennan (1992) presented a procedure for assessing the conditional standard error of measurement (CSEM) of scale scores using a strong true-score model. They also investigated ways of using nonlinear transformations from number-correct raw scores to scale scores to equalize the conditional standard error along…
Descriptors: Ability, Classification, Error of Measurement, Goodness of Fit
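For orientation only, a much-simplified stand-in for the CSEM of a scale score, using a plain binomial error model rather than the strong true-score model of Kolen, Hanson, and Brennan (1992) that the abstract cites, could look like the following; the function and argument names are illustrative assumptions:

```python
import numpy as np
from scipy.stats import binom

def csem_scale_score(p_true, n_items, scale):
    """Conditional SEM of a scale score at proportion-true-score p_true.

    Under a binomial error model, the raw score given p_true is
    Binomial(n_items, p_true); the CSEM of the scale score is the SD of the
    transformed conditional raw-score distribution. `scale` maps each raw
    score 0..n_items to its scale-score equivalent.
    """
    x = np.arange(n_items + 1)
    w = binom.pmf(x, n_items, p_true)
    s = np.array([scale(xi) for xi in x], dtype=float)
    mean_s = np.sum(w * s)
    return float(np.sqrt(np.sum(w * (s - mean_s) ** 2)))

# Illustrative usage with a made-up linear raw-to-scale conversion:
# csem_scale_score(0.7, n_items=40, scale=lambda x: 100 + 2 * x)
```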