Showing all 10 results
Peer reviewed
PDF on ERIC
Donoghue, John R.; McClellan, Catherine A.; Hess, Melinda R. – ETS Research Report Series, 2022
When constructed-response items are administered for a second time, it is necessary to evaluate whether the current Time B administration's raters have drifted from the scoring of the original administration at Time A. To study this, Time A papers are sampled and rescored by Time B scorers. Commonly the scores are compared using the proportion of…
Descriptors: Item Response Theory, Test Construction, Scoring, Testing
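The abstract truncates at "proportion of…", so the exact statistic is not shown; exact and adjacent agreement are the usual choices in rescore monitoring, and a minimal sketch of both (illustrative only, with hypothetical scores) looks like this:

```python
def agreement_rates(time_a, time_b):
    """Return (exact, adjacent) agreement proportions for paired scores."""
    if len(time_a) != len(time_b):
        raise ValueError("score lists must be paired")
    n = len(time_a)
    exact = sum(a == b for a, b in zip(time_a, time_b)) / n
    adjacent = sum(abs(a - b) <= 1 for a, b in zip(time_a, time_b)) / n
    return exact, adjacent

# Hypothetical rescore sample: Time A originals vs. Time B rescores.
a_scores = [3, 2, 4, 1, 3, 2, 4, 3]
b_scores = [3, 2, 3, 1, 4, 2, 4, 2]
print(agreement_rates(a_scores, b_scores))
```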
Peer reviewed
Direct link
Giada Spaccapanico Proietti; Mariagiulia Matteucci; Stefania Mignani; Bernard P. Veldkamp – Journal of Educational and Behavioral Statistics, 2024
Classical automated test assembly (ATA) methods assume fixed and known coefficients for the constraints and the objective function. This assumption does not hold for estimates of item response theory parameters, which are crucial elements of classical test assembly models. To account for uncertainty in ATA, we propose a chance-constrained…
Descriptors: Automation, Computer Assisted Testing, Ambiguity (Context), Item Response Theory
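The chance-constrained formulation itself is not reproduced in the truncated abstract; the core idea of guarding assembly against parameter uncertainty can be sketched with a simple (not the authors') heuristic that selects items by a pessimistic information bound, discounting each discrimination estimate by its standard error:

```python
import math

def info_2pl(a, b, theta):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def robust_select(pool, n_items, theta=0.0, z=1.64):
    """Greedily pick items by a lower-confidence-bound on information,
    penalizing noisily estimated discriminations (illustrative heuristic)."""
    scored = []
    for item in pool:
        a_low = max(item["a"] - z * item["se_a"], 0.05)
        scored.append((info_2pl(a_low, item["b"], theta), item["id"]))
    scored.sort(reverse=True)
    return [item_id for _, item_id in scored[:n_items]]

# Hypothetical pool: i1 looks best on point estimates but is noisy.
pool = [
    {"id": "i1", "a": 1.8, "se_a": 0.60, "b": 0.0},
    {"id": "i2", "a": 1.4, "se_a": 0.10, "b": 0.1},
    {"id": "i3", "a": 1.0, "se_a": 0.05, "b": -0.2},
]
print(robust_select(pool, 2))
```

Under this bound the precisely estimated moderate items beat the noisy high-discrimination item, which is the kind of reversal uncertainty-aware ATA is designed to capture.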
Peer reviewed
PDF on ERIC
Silva, R. M.; Guan, Y.; Swartz, T. B. – Journal on Efficiency and Responsibility in Education and Science, 2017
This paper attempts to bridge the gap between classical test theory and item response theory. It is demonstrated that the familiar and popular statistics used in classical test theory can be translated into a Bayesian framework where all of the advantages of the Bayesian paradigm can be realized. In particular, prior opinion can be introduced and…
Descriptors: Item Response Theory, Bayesian Statistics, Test Construction, Markov Processes
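One standard way a CTT quantity acquires a Bayesian treatment (illustrative; the paper's exact formulation is not reproduced here) is the beta-binomial model: an examinee's true proportion-correct gets a Beta prior, the number-correct score is binomial, and conjugacy gives the posterior in closed form:

```python
def beta_binomial_posterior(x, n, alpha=1.0, beta=1.0):
    """Posterior (alpha', beta') and posterior mean for true
    proportion-correct after observing x correct out of n items,
    under a Beta(alpha, beta) prior."""
    a_post, b_post = alpha + x, beta + (n - x)
    return a_post, b_post, a_post / (a_post + b_post)

# Weak prior Beta(2, 2) centered at 0.5; examinee answers 18 of 25 items.
a_post, b_post, mean = beta_binomial_posterior(18, 25, alpha=2.0, beta=2.0)
print(a_post, b_post, round(mean, 3))
```

The prior is where the "prior opinion" mentioned in the abstract enters: stronger priors shrink the estimate toward the prior mean, exactly as a Kelley-style CTT regressed score does.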
Peer reviewed
PDF on ERIC
Abed, Eman Rasmi; Al-Absi, Mohammad Mustafa; Abu shindi, Yousef Abdelqader – International Education Studies, 2016
The purpose of the present study is to develop a test measuring the numerical ability of students of education. The sample of the study consisted of 504 students from 8 universities in Jordan. The final draft of the test contains 45 items distributed among 5 dimensions. The results revealed acceptable psychometric properties of the test;…
Descriptors: Foreign Countries, Item Response Theory, Numeracy, Reliability
Peer reviewed
Direct link
Bramley, Tom – Research in Mathematics Education, 2017
This study compared models of assessment structure for achieving differentiation across the range of examinee attainment in the General Certificate of Secondary Education (GCSE) examination taken by 16-year-olds in England. The focus was on the "adjacent levels" model, where papers are targeted at three specific non-overlapping ranges of…
Descriptors: Foreign Countries, Mathematics Education, Student Certification, Student Evaluation
Peer reviewed
Direct link
Kirnan, Jean Powell; Edler, Erin; Carpenter, Allison – International Journal of Testing, 2007
The range of response options has been shown to influence the answers given in self-report instruments that measure behaviors ranging from television viewing to sexual partners. The current research extends this line of inquiry to 36 quantitative items extracted from a biographical inventory used in personnel selection. A total of 92…
Descriptors: Personnel Selection, Biographical Inventories, Testing, Self Disclosure (Individuals)
Peer reviewed
Emons, Wilco H. M.; Meijer, Rob R.; Sijtsma, Klaas – Applied Psychological Measurement, 2002
Studied whether the theoretical sampling distribution of the U3 person-fit statistic is in agreement with the simulated sampling distribution under different item response theory models and varying item and test characteristics. Simulation results suggest that the use of standard normal deviates for the standardized version of the U3 statistic may…
Descriptors: Item Response Theory, Sampling, Simulation, Statistical Distributions
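The nonparametric U3 statistic studied here (van der Flier's formulation) can be computed directly from item-score patterns and item popularities; a minimal sketch, using hypothetical proportions-correct:

```python
import math

def u3(x, pi):
    """Nonparametric U3 person-fit: 0 = perfect Guttman pattern,
    1 = fully reversed. x: 0/1 item scores; pi: item proportions
    correct in the sample (0 < pi < 1)."""
    logits = [math.log(p / (1.0 - p)) for p in pi]
    k = sum(x)
    if k == 0 or k == len(x):
        return 0.0  # U3 undefined for all-0/all-1 patterns; treat as fitting
    order = sorted(logits, reverse=True)
    w = sum(xi * li for xi, li in zip(x, logits))
    w_max = sum(order[:k])    # k easiest items answered correctly
    w_min = sum(order[-k:])   # k hardest items answered correctly
    return (w_max - w) / (w_max - w_min)

# Items ordered easy -> hard; a Guttman-consistent and a reversed pattern.
props = [0.9, 0.8, 0.6, 0.4, 0.2]
print(u3([1, 1, 1, 0, 0], props))  # conforming pattern
print(u3([0, 0, 1, 1, 1], props))  # misfitting (reversed) pattern
```

The sampling-distribution question the article raises concerns the standardized version of this statistic, which this sketch does not compute.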
Peer reviewed
van der Linden, Wim J.; Luecht, Richard M. – Psychometrika, 1998
Derives a set of linear conditions on item-response functions that guarantees identical observed-score distributions on two test forms. The conditions can be added as constraints to a linear programming model for test assembly. An example illustrates the use of the model for an item pool from the Law School Admission Test (LSAT). (SLD)
Descriptors: Equated Scores, Item Banks, Item Response Theory, Linear Programming
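The conditional observed-score distribution that such constraints control is conventionally computed with the Lord-Wingersky recursion; a self-contained sketch with hypothetical 2PL items (the paper's linear-programming constraints themselves are not reproduced):

```python
import math

def p_2pl(a, b, theta):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def observed_score_dist(items, theta):
    """Lord-Wingersky recursion: distribution of the number-correct score
    at fixed theta for locally independent dichotomous items."""
    dist = [1.0]  # P(score = 0) before any item is added
    for a, b in items:
        p = p_2pl(a, b, theta)
        new = [0.0] * (len(dist) + 1)
        for score, prob in enumerate(dist):
            new[score] += prob * (1.0 - p)      # item answered incorrectly
            new[score + 1] += prob * p          # item answered correctly
        dist = new
    return dist

# Hypothetical three-item form evaluated at theta = 0.
items = [(1.0, -0.5), (1.2, 0.0), (0.8, 0.5)]
dist = observed_score_dist(items, theta=0.0)
print([round(q, 4) for q in dist])
```

Two forms whose item-response functions satisfy the paper's linear conditions would yield matching distributions from this recursion at every theta.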
van der Linden, Wim J.; Luecht, Richard M. – 1994
An optimization model is presented that allows test assemblers to control the shape of the observed-score distribution on a test for a population with a known ability distribution. An obvious application is for item response theory-based test assembly in programs where observed scores are reported and operational test forms are required to produce…
Descriptors: Ability, Foreign Countries, Heuristics, Item Response Theory
Peer reviewed
Camilli, Gregory – Applied Psychological Measurement, 1992
A mathematical model is proposed to describe how group differences in distributions of abilities, which are distinct from the target ability, influence the probability of a correct item response. In the multidimensional approach, differential item functioning is considered a function of the educational histories of the examinees. (SLD)
Descriptors: Ability, Comparative Analysis, Equations (Mathematics), Factor Analysis