Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 5
Since 2016 (last 10 years) | 13
Since 2006 (last 20 years) | 21
Descriptor
Difficulty Level | 25
Item Response Theory | 25
Testing | 25
Test Items | 24
Comparative Analysis | 8
Computer Assisted Testing | 6
Models | 6
Test Construction | 5
Test Format | 5
Foreign Countries | 4
Item Analysis | 4
Author
Herrmann-Abell, Cari F. | 2
Ali, Usama | 1
Alonzo, Julie | 1
Aryadoust, Vahid | 1
Baghaei, Purya | 1
Bolsinova, Maria | 1
Bramley, Tom | 1
Brinkhuis, Matthieu J. S. | 1
Brown, Terran | 1
Chen, Jianshen | 1
Costanzo, Kate | 1
Publication Type
Reports - Research | 17
Journal Articles | 16
Reports - Evaluative | 5
Speeches/Meeting Papers | 5
Reports - Descriptive | 2
Dissertations/Theses -… | 1
Numerical/Quantitative Data | 1
Assessments and Surveys
Advanced Placement… | 1
International English… | 1
Lang, Joseph B. – Journal of Educational and Behavioral Statistics, 2023
This article is concerned with the statistical detection of copying on multiple-choice exams. As an alternative to existing permutation- and model-based copy-detection approaches, a simple randomization p-value (RP) test is proposed. The RP test, which is based on an intuitive match-score statistic, makes no assumptions about the distribution of…
Descriptors: Identification, Cheating, Multiple Choice Tests, Item Response Theory
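The match-score idea behind such a randomization test can be sketched in a few lines. This is an illustrative permutation version under simplifying assumptions, not the article's exact RP procedure; the function names and the toy answer strings are invented for the example:

```python
import random

def match_score(a, b):
    """Number of items on which two answer strings agree."""
    return sum(x == y for x, y in zip(a, b))

def randomization_p_value(source, copier, n_perm=10000, seed=0):
    """Approximate p-value: the probability that a random reordering of
    the suspected copier's answers matches the source at least as often
    as actually observed. Illustrative only."""
    rng = random.Random(seed)
    observed = match_score(source, copier)
    perm = list(copier)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(perm)
        if match_score(source, perm) >= observed:
            count += 1
    # Add-one correction keeps the estimate strictly between 0 and 1
    return (count + 1) / (n_perm + 1)

# Toy data: 20 multiple-choice items, answers A-D; the pair agrees on 19 of 20
source = "ABCDABCDABCDABCDABCD"
copier = "ABCDABCDABCDABCDABCA"
p = randomization_p_value(source, copier)
```

A small p-value indicates that the observed agreement is unlikely to arise from the copier's answer pattern alone.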
Ross, Linette P. – ProQuest LLC, 2022
One of the most serious forms of cheating occurs when examinees have item preknowledge: prior access to secure test material before taking an exam, used to obtain an inflated test score. Examinees who cheat and have prior knowledge of test content may have an unfair advantage over examinees who do not cheat. Item…
Descriptors: Testing, Deception, Cheating, Identification
Lozano, José H.; Revuelta, Javier – Applied Measurement in Education, 2021
The present study proposes a Bayesian approach for estimating and testing the operation-specific learning model, a variant of the linear logistic test model that allows for the measurement of the learning that occurs during a test as a result of the repeated use of the operations involved in the items. The advantages of using a Bayesian framework…
Descriptors: Bayesian Statistics, Computation, Learning, Testing
Peabody, Michael R.; Wind, Stefanie A. – Measurement: Interdisciplinary Research and Perspectives, 2019
Differential Item Functioning (DIF) detection procedures provide validity evidence for proposed interpretations of test scores that can help researchers and practitioners ensure that test scores are free from potential bias, and that individual items do not create an advantage for any subgroup of examinees over another. In this study, we use the…
Descriptors: Item Response Theory, Test Items, Scores, Testing
Lu, Ru; Guo, Hongwen; Dorans, Neil J. – ETS Research Report Series, 2021
Two families of analysis methods can be used for differential item functioning (DIF) analysis. One family is DIF analysis based on observed scores, such as the Mantel-Haenszel (MH) and the standardized proportion-correct metric for DIF procedures; the other is analysis based on latent ability, in which the statistic is a measure of departure from…
Descriptors: Robustness (Statistics), Weighted Scores, Test Items, Item Analysis
Xue, Kang; Huggins-Manley, Anne Corinne; Leite, Walter – Educational and Psychological Measurement, 2022
In data collected from virtual learning environments (VLEs), item response theory (IRT) models can be used to guide the ongoing measurement of student ability. However, such applications of IRT rely on unbiased item parameter estimates associated with test items in the VLE. Without formal piloting of the items, one can expect a large amount of…
Descriptors: Virtual Classrooms, Artificial Intelligence, Item Response Theory, Item Analysis
Bramley, Tom; Crisp, Victoria – Assessment in Education: Principles, Policy & Practice, 2019
For many years, question choice has been used in some UK public examinations, with students free to choose which questions they answer from a selection (within certain parameters). There has been little published research on choice of exam questions in recent years in the UK. In this article we distinguish different scenarios in which choice…
Descriptors: Test Items, Test Construction, Difficulty Level, Foreign Countries
Hofman, Abe D.; Brinkhuis, Matthieu J. S.; Bolsinova, Maria; Klaiber, Jonathan; Maris, Gunter; van der Maas, Han L. J. – Journal of Intelligence, 2020
One of the highest ambitions in educational technology is the move towards personalized learning. To this end, computerized adaptive learning (CAL) systems are developed. A popular method to track the development of student ability and item difficulty, in CAL systems, is the Elo Rating System (ERS). The ERS allows for dynamic model parameters by…
Descriptors: Teaching Methods, Computer Assisted Instruction, Difficulty Level, Individualized Instruction
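The Elo Rating System update used in such CAL systems can be sketched in a few lines. This assumes a Rasch-type expected score and an illustrative constant step size k (real systems tune or adapt k, as the article discusses):

```python
import math

def elo_update(theta, beta, correct, k=0.4):
    """One Elo Rating System step for adaptive learning.
    theta: student ability, beta: item difficulty (both on a logit scale).
    correct: 1 if the response was right, 0 otherwise.
    The expected score is the Rasch probability of a correct response;
    ability and difficulty move in opposite directions by
    k * (observed - expected). k = 0.4 is illustrative, not recommended."""
    expected = 1.0 / (1.0 + math.exp(-(theta - beta)))
    theta_new = theta + k * (correct - expected)
    beta_new = beta - k * (correct - expected)
    return theta_new, beta_new

# A correct response on a perfectly matched item (theta == beta):
# expected = 0.5, so ability rises and difficulty falls by k * 0.5 = 0.2
theta, beta = elo_update(0.0, 0.0, 1)
```

Because both parameters update after every response, the system tracks changing student ability and item difficulty without formal re-calibration, which is exactly the dynamic behavior the abstract highlights.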
Ojerinde, Dibu; Popoola, Omokunmi; Onyeneho, Patrick; Egberongbe, Aminat – Perspectives in Education, 2016
The statistical procedure used to adjust for differences in difficulty across test forms is known as "equating". Equating makes it possible for various test forms to be used interchangeably. In terms of where the equating method fits in the assessment cycle, there are pre-equating and post-equating methods. The major benefits of pre-equating, when…
Descriptors: Measurement, Comparative Analysis, High Stakes Tests, Pretests Posttests
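One textbook way to place scores from two forms on a common scale is linear equating, shown here as a minimal sketch. The article compares pre- and post-equating designs rather than this specific formula, and the score lists are invented for the example:

```python
import statistics

def linear_equate(score_x, form_x_scores, form_y_scores):
    """Linear equating: map a score from Form X onto the Form Y scale by
    matching the means and standard deviations of the two groups."""
    mu_x = statistics.mean(form_x_scores)
    mu_y = statistics.mean(form_y_scores)
    sd_x = statistics.stdev(form_x_scores)
    sd_y = statistics.stdev(form_y_scores)
    return mu_y + (sd_y / sd_x) * (score_x - mu_x)

# Form Y ran 5 points easier on average, with the same spread:
# a 50 on Form X is equivalent to a 55 on Form Y
form_x = [40, 50, 60]
form_y = [45, 55, 65]
equated = linear_equate(50, form_x, form_y)
```

In a pre-equating design these conversion parameters are estimated before operational administration; in post-equating they are estimated from operational data.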
Volov, Vyacheslav T.; Gilev, Alexander A. – International Journal of Environmental and Science Education, 2016
In today's item response theory (IRT), the response to a test item is treated as a probabilistic event that depends on the student's ability and the difficulty of the item. There is, however, little agreement in the scientific literature about how to determine the factors affecting item difficulty. It is suggested that the difficulty of the…
Descriptors: Item Response Theory, Test Items, Difficulty Level, Science Tests
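The probabilistic relationship between ability and difficulty that these IRT-based entries rely on can be written as a one-line model. A minimal sketch of the two-parameter logistic (2PL) model, which reduces to the Rasch model when discrimination a = 1:

```python
import math

def irt_2pl(theta, a, b):
    """Two-parameter logistic IRT model: probability of a correct
    response given ability theta, item discrimination a, and item
    difficulty b. At theta == b the probability is exactly 0.5."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

Higher ability raises the probability of success, higher difficulty lowers it, and discrimination controls how sharply the curve rises around theta = b.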
Zhang, Jinming; Li, Jie – Journal of Educational Measurement, 2016
An IRT-based sequential procedure is developed to monitor items for enhancing test security. The procedure uses a series of statistical hypothesis tests to examine whether the statistical characteristics of each item under inspection have changed significantly during CAT administration. This procedure is compared with a previously developed…
Descriptors: Computer Assisted Testing, Test Items, Difficulty Level, Item Response Theory
Baghaei, Purya; Aryadoust, Vahid – International Journal of Testing, 2015
Research shows that test method can exert a significant impact on test takers' performance and thereby contaminate test scores. We argue that common test method can exert the same effect as common stimuli and violate the conditional independence assumption of item response theory models because, in general, subsets of items which have a shared…
Descriptors: Test Format, Item Response Theory, Models, Test Items
Liu, Junhui; Brown, Terran; Chen, Jianshen; Ali, Usama; Hou, Likun; Costanzo, Kate – Partnership for Assessment of Readiness for College and Careers, 2016
The Partnership for Assessment of Readiness for College and Careers (PARCC) is a state-led consortium working to develop next-generation assessments that more accurately, compared to previous assessments, measure student progress toward college and career readiness. The PARCC assessments include both English Language Arts/Literacy (ELA/L) and…
Descriptors: Testing, Achievement Tests, Test Items, Test Bias
Jin, Kuan-Yu; Wang, Wen-Chung – Journal of Educational Measurement, 2014
Sometimes, test-takers may not be able to attempt all items to the best of their ability (with full effort) due to personal factors (e.g., low motivation) or testing conditions (e.g., time limit), resulting in poor performances on certain items, especially those located toward the end of a test. Standard item response theory (IRT) models fail to…
Descriptors: Student Evaluation, Item Response Theory, Models, Simulation
Mitchell, Alison M.; Truckenmiller, Adrea; Petscher, Yaacov – Communique, 2015
As part of the Race to the Top initiative, the United States Department of Education made nearly 1 billion dollars available in State Educational Technology grants with the goal of ramping up school technology. One result of this effort is that states, districts, and schools across the country are using computerized assessments to measure their…
Descriptors: Computer Assisted Testing, Educational Technology, Testing, Efficiency