Showing 1 to 15 of 59 results
Peer reviewed | PDF on ERIC
Stemler, Steven E.; Naples, Adam – Practical Assessment, Research & Evaluation, 2021
When students receive the same score on a test, does that mean they know the same amount about the topic? The answer to this question is more complex than it may first appear. This paper compares classical and modern test theories in terms of how they estimate student ability. Crucial distinctions between the aims of Rasch Measurement and IRT are…
Descriptors: Item Response Theory, Test Theory, Ability, Computation
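The distinction this entry raises, that identical raw scores need not imply identical ability estimates once items differ in discrimination, can be sketched numerically. The item parameters and grid-search estimator below are hypothetical illustrations, not taken from the paper:

```python
import math

def ability_mle(responses, a, b):
    """Grid-search maximum-likelihood ability estimate under a 2PL model.
    responses: 0/1 item scores; a: discriminations; b: difficulties."""
    grid = [x / 100 for x in range(-400, 401)]  # theta from -4 to 4

    def loglik(theta):
        ll = 0.0
        for x, ai, bi in zip(responses, a, b):
            p = 1 / (1 + math.exp(-ai * (theta - bi)))
            ll += x * math.log(p) + (1 - x) * math.log(1 - p)
        return ll

    return max(grid, key=loglik)

# Two students with the same raw score (2 of 3) but different patterns.
a = [0.5, 1.0, 2.0]   # hypothetical discriminations
b = [-1.0, 0.0, 1.0]  # hypothetical difficulties
s1 = ability_mle([1, 1, 0], a, b)  # missed the most discriminating item
s2 = ability_mle([0, 1, 1], a, b)  # missed the least discriminating item
# Under 2PL the two estimates differ; under Rasch (all a equal),
# equal raw scores always yield equal ability estimates.
```

With equal discriminations the likelihood depends on theta only through the raw score, which is why Rasch measurement treats equal sum scores as equal ability while 2PL IRT does not.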
Peer reviewed | PDF on ERIC
Eray Selçuk; Ergül Demir – International Journal of Assessment Tools in Education, 2024
This research aims to compare the ability and item parameter estimations of Item Response Theory under maximum likelihood and Bayesian approaches in different Monte Carlo simulation conditions. For this purpose, varying the prior distribution type, sample size, test length, and logistic model, the ability and item…
Descriptors: Item Response Theory, Item Analysis, Test Items, Simulation
Peer reviewed | PDF on ERIC
Bruno D. Zumbo – International Journal of Assessment Tools in Education, 2023
In line with the journal volume's theme, this essay considers lessons from the past and visions for the future of test validity. In the first part of the essay, a description of historical trends in test validity since the early 1900s leads to the natural question of whether the discipline has progressed in its definition and description of test…
Descriptors: Test Theory, Test Validity, True Scores, Definitions
Peer reviewed | PDF on ERIC
Basman, Munevver – International Journal of Assessment Tools in Education, 2023
Ensuring the validity of a test requires checking that all items yield similar results across different groups of individuals. However, differential item functioning (DIF) occurs when the results of individuals with equal ability levels from different groups differ from each other on the same test item. Based on Item Response Theory and Classical Test…
Descriptors: Test Bias, Test Items, Test Validity, Item Response Theory
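One common non-IRT way to screen an item for the DIF described in this entry is the Mantel-Haenszel common odds ratio, computed across strata of matched ability. This is a generic sketch with invented counts, not the analysis from the paper:

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel common odds ratio for one item.
    Each stratum is (ref_correct, ref_wrong, focal_correct, focal_wrong)
    for examinees matched on total score; values near 1 suggest no DIF."""
    num = den = 0.0
    for rc, rw, fc, fw in strata:
        n = rc + rw + fc + fw
        num += rc * fw / n
        den += rw * fc / n
    return num / den

# Hypothetical counts at three matched score levels.
strata = [(30, 20, 28, 22), (40, 10, 38, 12), (45, 5, 44, 6)]
alpha_mh = mantel_haenszel_or(strata)  # close to 1: little evidence of DIF
```

Stratifying on total score before comparing groups is what separates DIF from a simple impact comparison: only examinees of equal observed ability are contrasted.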
Peer reviewed | PDF on ERIC
Kirya, Kent Robert; Mashood, Kalarattu Kandiyi; Yadav, Lakhan Lal – Journal of Turkish Science Education, 2022
In this study, we administered and evaluated circular motion concept question items with a view to developing an inventory suitable for the Ugandan context. Before administering the circular motion concept items, six physics experts and ten undergraduate physics students carried out the face and content validation. One hundred eighteen undergraduate…
Descriptors: Motion, Scientific Concepts, Test Construction, Test Items
Peer reviewed | PDF on ERIC
Kaya Uyanik, Gulden; Demirtas Tolaman, Tugba; Gur Erdogan, Duygu – International Journal of Assessment Tools in Education, 2021
This paper aims to examine and assess the questions included in the "Turkish Common Exam" for sixth graders, held in the first semester of 2018, one of the common exams administered by the Measurement and Evaluation Centers, in terms of question structure, quality, and taxonomic value. To this end, the test questions were examined…
Descriptors: Foreign Countries, Grade 6, Standardized Tests, Test Items
Peer reviewed | Direct link
Ibrahim Kasujja; Hugo Melgar-Quinonez; Joweria Nambooze – SAGE Open, 2023
Background: Evaluating school feeding programs requires measuring food insecurity, a more objective indicator, within schools in low-income countries. The Global Child Nutrition Foundation (GCNF) uses subjective indicators to report school feeding coverage rates across many countries that participate in the global survey of school meal…
Descriptors: Hunger, Food, Program Effectiveness, Psychometrics
Peer reviewed | Direct link
Azevedo, Jose Manuel; Oliveira, Ema P.; Beites, Patrícia Damas – International Journal of Information and Learning Technology, 2019
Purpose: The purpose of this paper is to find appropriate forms of analysis of multiple-choice questions (MCQ) in order to obtain an assessment method that is as fair as possible for students. The authors intend to ascertain whether it is possible to control the quality of the MCQ contained in a bank of questions implemented in Moodle, presenting some evidence…
Descriptors: Learning Analytics, Multiple Choice Tests, Test Theory, Item Response Theory
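A standard classical-test-theory index for the kind of MCQ quality control this entry describes is the point-biserial correlation between an item and the total score; weakly or negatively correlated items are flagged for review. The data and threshold below are hypothetical, not the authors' method:

```python
import math

def point_biserial(item, totals):
    """Point-biserial correlation between a 0/1 item and total scores,
    a common discrimination index for flagging weak MCQ items."""
    n = len(item)
    mean_t = sum(totals) / n
    sd_t = math.sqrt(sum((t - mean_t) ** 2 for t in totals) / n)
    p = sum(item) / n                      # item difficulty (proportion correct)
    mean_correct = sum(t for x, t in zip(item, totals) if x) / sum(item)
    return (mean_correct - mean_t) / sd_t * math.sqrt(p / (1 - p))

item = [1, 1, 0, 1, 0, 0]        # hypothetical responses to one item
totals = [9, 8, 4, 7, 5, 3]      # hypothetical test totals
r = point_biserial(item, totals)  # high value: the item discriminates well
```

A bank-maintenance rule might, for example, review items with r below roughly 0.2, though any cutoff is a policy choice rather than a statistical fact.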
Peer reviewed | Direct link
Raykov, Tenko; Dimitrov, Dimiter M.; Marcoulides, George A.; Harrison, Michael – Educational and Psychological Measurement, 2019
Building on prior research on the relationships between key concepts in item response theory and classical test theory, this note contributes to highlighting their important and useful links. A readily and widely applicable latent variable modeling procedure is discussed that can be used for point and interval estimation of the individual person…
Descriptors: True Scores, Item Response Theory, Test Items, Test Theory
Albano, Anthony D.; McConnell, Scott R.; Lease, Erin M.; Cai, Liuhan – Grantee Submission, 2020
Research has shown that the context of practice tasks can have a significant impact on learning, with long-term retention and transfer improving when tasks of different types are mixed by interleaving (abcabcabc) compared with grouping together in blocks (aaabbbccc). This study examines the influence of context via interleaving from a psychometric…
Descriptors: Context Effect, Test Items, Preschool Children, Computer Assisted Testing
Peer reviewed | Direct link
Alqarni, Abdulelah Mohammed – Journal on Educational Psychology, 2019
This study compares the psychometric properties of reliability in Classical Test Theory (CTT), item information in Item Response Theory (IRT), and validation from the perspective of modern validity theory for the purpose of bringing attention to potential issues that might exist when testing organizations use both test theories in the same testing…
Descriptors: Test Theory, Item Response Theory, Test Construction, Scoring
Peer reviewed | Direct link
Eaton, Philip; Johnson, Keith; Barrett, Frank; Willoughby, Shannon – Physical Review Physics Education Research, 2019
For proper assessment selection, understanding the statistical similarities among assessments that measure the same, or very similar, topics is imperative. This study seeks to extend the comparative analysis between the brief electricity and magnetism assessment (BEMA) and the conceptual survey of electricity and magnetism (CSEM) presented by…
Descriptors: Test Theory, Item Response Theory, Comparative Analysis, Energy
Peer reviewed | PDF on ERIC
Sari, Halil Ibrahim; Karaman, Mehmet Akif – International Journal of Assessment Tools in Education, 2018
The current study shows the applications of both classical test theory (CTT) and item response theory (IRT) to psychology data. The study discusses item-level analyses of the General Mattering Scale produced by the two theories, as well as the strengths and weaknesses of both measurement approaches. The survey consisted of a total of five Likert-type…
Descriptors: Measures (Individuals), Test Theory, Item Response Theory, Likert Scales
Peer reviewed | PDF on ERIC
Ilhan, Mustafa; Guler, Nese – Eurasian Journal of Educational Research, 2018
Purpose: This study aimed to compare difficulty indices calculated for open-ended items in accordance with the classical test theory (CTT) and the Many-Facet Rasch Model (MFRM). Although theoretical differences between CTT and MFRM occupy much space in the literature, the number of studies empirically comparing the two theories is quite limited.…
Descriptors: Difficulty Level, Test Items, Test Theory, Item Response Theory
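The classical-test-theory side of the comparison this entry describes reduces, for a polytomous open-ended item, to a simple difficulty index: the mean observed score divided by the maximum possible score. The scores below are invented for illustration:

```python
def difficulty_index(scores, max_score):
    """Classical difficulty index for an open-ended item:
    mean observed score divided by the maximum possible score.
    Higher values indicate an easier item."""
    return sum(scores) / (len(scores) * max_score)

# Hypothetical scores for one open-ended item rated 0-4.
p = difficulty_index([3, 4, 2, 4, 3], 4)  # 16/20 = 0.8, a fairly easy item
```

Unlike a Many-Facet Rasch Model estimate, this index is sample-dependent and makes no adjustment for rater severity, which is one reason the two approaches can rank items differently.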
Peer reviewed | PDF on ERIC
Bazvand, Ali Darabi; Kheirzadeh, Shiela; Ahmadi, Alireza – International Journal of Assessment Tools in Education, 2019
The findings of previous research into the compatibility of stakeholders' perceptions with statistical estimations of item difficulty are seemingly inconsistent. Furthermore, most research shows that teachers' estimation of item difficulty is not reliable, since they tend to overestimate the difficulty of easy items and underestimate the…
Descriptors: Foreign Countries, High Stakes Tests, Test Items, Difficulty Level