Showing all 6 results
Peer reviewed
Direct link
Giada Spaccapanico Proietti; Mariagiulia Matteucci; Stefania Mignani; Bernard P. Veldkamp – Journal of Educational and Behavioral Statistics, 2024
Classical automated test assembly (ATA) methods assume fixed and known coefficients for the constraints and the objective function. This assumption does not hold for estimates of item response theory parameters, which are crucial elements of classical test assembly models. To account for uncertainty in ATA, we propose a chance-constrained…
Descriptors: Automation, Computer Assisted Testing, Ambiguity (Context), Item Response Theory
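Classical ATA, as the entry above notes, treats item parameters as fixed and known. Purely as a hedged illustration of that classical setup (not the authors' chance-constrained model), the sketch below poses a generic binary assembly problem in Python with PuLP: maximize estimated 2PL information at a single ability point, subject to a test-length constraint and one illustrative content constraint. The item parameters, constraint bounds, and solver are all assumptions; a chance-constrained variant would, roughly, replace the fixed information coefficients with a quantile guarantee computed over resampled parameter estimates.

```python
# Generic binary test assembly sketch (illustrative assumptions throughout).
import numpy as np
import pulp

rng = np.random.default_rng(0)
n_items, n_select = 50, 10
a = rng.lognormal(0.0, 0.3, n_items)       # simulated discrimination estimates
b = rng.normal(0.0, 1.0, n_items)          # simulated difficulty estimates
content = rng.integers(0, 2, n_items)      # illustrative 0/1 content-area flag

def info_2pl(a, b, theta=0.0):
    """2PL item information at a single ability point."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

info = info_2pl(a, b)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n_items)]
prob = pulp.LpProblem("ATA_sketch", pulp.LpMaximize)
prob += pulp.lpSum(info[i] * x[i] for i in range(n_items))            # objective
prob += pulp.lpSum(x) == n_select                                     # test length
prob += pulp.lpSum(content[i] * x[i] for i in range(n_items)) >= 4    # content floor
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("selected items:", [i for i in range(n_items) if x[i].value() == 1])
```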
Peer reviewed
PDF on ERIC: Download full text
Samsudin, Mohd Ali; Chut, Thodsaphorn Som; Ismail, Mohd Erfy; Ahmad, Nur Jahan – EURASIA Journal of Mathematics, Science and Technology Education, 2020
Current assessment practice demands a more personalised and less time-consuming testing environment. Computer Adaptive Testing (CAT) is seen as a more effective alternative to conventional testing in meeting current assessment standards. This research reports on the calibration of the released Grade 8 Science…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Science Tests
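Here, calibration means estimating item parameters from response data before the items enter a CAT pool. The following is a minimal sketch under stated assumptions (simulated responses, a Rasch model, joint maximum likelihood), not the study's actual procedure; operational calibration would usually rely on marginal estimation in dedicated IRT software.

```python
# Generic Rasch calibration sketch on simulated data (illustrative only).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_persons, n_items = 200, 10
theta_true = rng.normal(0, 1, n_persons)          # simulated abilities
b_true = rng.normal(0, 1, n_items)                # simulated difficulties
p_true = 1 / (1 + np.exp(-(theta_true[:, None] - b_true[None, :])))
X = (rng.random((n_persons, n_items)) < p_true).astype(float)  # 0/1 responses

def neg_loglik(params):
    """Joint negative log-likelihood of the Rasch model."""
    theta = params[:n_persons]
    b = params[n_persons:]
    eta = theta[:, None] - b[None, :]
    return -(X * eta - np.log1p(np.exp(eta))).sum()

res = minimize(neg_loglik, np.zeros(n_persons + n_items), method="L-BFGS-B")
b_hat = res.x[n_persons:]
b_hat -= b_hat.mean()          # fix the scale by centering difficulties
print(np.round(b_hat, 2))      # estimated item difficulties
```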
Peer reviewed
Direct link
Hamhuis, Eva; Glas, Cees; Meelissen, Martina – British Journal of Educational Technology, 2020
Over the last two decades, the educational use of digital devices, including digital assessments, has become a regular feature of teaching in primary education in the Netherlands. However, researchers have not reached a consensus about the so-called "mode effect," which refers to the possible impact of using computer-based tests (CBT)…
Descriptors: Handheld Devices, Elementary School Students, Grade 4, Foreign Countries
Peer reviewed
Direct link
Hauser, Carl; Thum, Yeow Meng; He, Wei; Ma, Lingling – Educational and Psychological Measurement, 2015
When conducting item reviews, analysts evaluate an array of statistical and graphical information to assess the fit of a field test (FT) item to an item response theory model. The process can be tedious, particularly when the number of human reviews (HR) to be completed is large. Furthermore, such a process leads to decisions that are susceptible…
Descriptors: Test Items, Item Response Theory, Research Methodology, Decision Making
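One concrete example of the statistical information such a review weighs is an item-fit statistic. The sketch below is a generic illustration rather than the authors' method: it computes a Yen's Q1-style chi-square for a single dichotomous item under a 2PL model, comparing observed and model-expected proportions correct within ability-ordered groups; the parameter values and grouping scheme are assumptions.

```python
# Generic item-fit check for one dichotomous 2PL item (illustrative only).
import numpy as np

def q1_item_fit(theta, responses, a, b, n_groups=10):
    """Chi-square comparing observed vs. expected proportion correct by ability group."""
    order = np.argsort(theta)
    theta, responses = theta[order], responses[order]
    chi2 = 0.0
    for g in np.array_split(np.arange(len(theta)), n_groups):
        p_obs = responses[g].mean()                                   # observed
        p_exp = (1.0 / (1.0 + np.exp(-a * (theta[g] - b)))).mean()    # model-expected
        chi2 += len(g) * (p_obs - p_exp) ** 2 / (p_exp * (1.0 - p_exp))
    return chi2   # compare against a chi-square reference distribution

# usage on simulated responses
rng = np.random.default_rng(2)
theta = rng.normal(0, 1, 1000)
y = (rng.random(1000) < 1.0 / (1.0 + np.exp(-1.2 * (theta - 0.3)))).astype(float)
print(q1_item_fit(theta, y, a=1.2, b=0.3))
```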
Peer reviewed
Direct link
Claesgens, Jennifer; Scalise, Kathleen; Wilson, Mark; Stacy, Angelica – Science Education, 2009
Preliminary pilot studies and a field study show how a generalizable conceptual framework calibrated with item response modeling can be used to describe the development of student conceptual understanding in chemistry. ChemQuery is an assessment system that uses a framework of the key ideas in the discipline, called the Perspectives of Chemists,…
Descriptors: Scoring Rubrics, Chemistry, Item Response Theory, Comprehension
Liu, Xiufeng – 1994
Problems of validity and reliability of concept mapping are addressed by using item response theory (IRT) models for scoring. In this study, the overall structure of students' concept maps is defined by the number of links, the number of hierarchies, the number of cross-links, and the number of examples. The study was conducted with 92 students…
Descriptors: Alternative Assessment, Computer Assisted Testing, Concept Mapping, Correlation
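The four structural counts named above (links, hierarchies, cross-links, examples) can be read as features extracted from a concept map and then treated as item scores in an IRT analysis. The sketch below is an assumption-laden illustration, not the study's rubric: it represents a map as a directed graph with networkx, takes hierarchy depth from shortest paths out of a root concept, and uses a deliberately crude cross-link proxy (an edge joining two concepts at the same depth); the example map and node labels are invented.

```python
# Extracting the four structural counts from a toy concept map (illustrative only).
import networkx as nx

def concept_map_features(edges, root, example_nodes):
    """edges: (parent, child) propositions; root: central concept;
    example_nodes: concepts the student marked as concrete examples."""
    g = nx.DiGraph(edges)
    depth = nx.single_source_shortest_path_length(g, root)   # hierarchy level per node
    n_links = g.number_of_edges()
    n_hierarchies = max(depth.values()) if depth else 0      # deepest level reached
    # crude cross-link proxy: an edge joining two concepts at the same depth
    n_cross = sum(1 for u, v in g.edges()
                  if depth.get(u) == depth.get(v) and depth.get(u, 0) > 0)
    n_examples = sum(1 for n in g.nodes() if n in set(example_nodes))
    return {"links": n_links, "hierarchies": n_hierarchies,
            "cross_links": n_cross, "examples": n_examples}

print(concept_map_features(
    edges=[("matter", "solid"), ("matter", "liquid"), ("solid", "ice"),
           ("liquid", "water"), ("ice", "water")],
    root="matter",
    example_nodes=["ice", "water"],
))   # counts like these would enter the IRT scoring as polytomous item scores
```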