Man, Kaiwen; Harring, Jeffery R.; Ouyang, Yunbo; Thomas, Sarah L. – International Journal of Testing, 2018
Many important high-stakes decisions (college admission, academic performance evaluation, and even job promotion) depend on accurate and reliable scores from valid large-scale assessments. However, examinees sometimes cheat by copying answers from other test-takers or practicing with test items ahead of time, which can undermine the effectiveness…
Descriptors: Reaction Time, High Stakes Tests, Test Wiseness, Cheating
Duong, Minh Q.; von Davier, Alina A. – International Journal of Testing, 2012
Test equating is a statistical procedure for adjusting for test form differences in difficulty in a standardized assessment. Equating results are supposed to hold for a specified target population (Kolen & Brennan, 2004; von Davier, Holland, & Thayer, 2004) and to be (relatively) independent of the subpopulations from the target population (see…
Descriptors: Ability Grouping, Difficulty Level, Psychometrics, Statistical Analysis
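The population-invariance idea in the abstract above can be sketched numerically. The snippet below uses simple mean-sigma linear equating (a standard textbook method, not necessarily the procedure studied in the article) and compares the equating function estimated from the whole target population against functions estimated from two subpopulations; all scores and score points are simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_equate(x, x_ref, y_ref):
    """Mean-sigma linear equating: map scores x onto form Y's scale,
    using moments estimated from reference samples x_ref and y_ref."""
    return y_ref.mean() + y_ref.std() * (x - x_ref.mean()) / x_ref.std()

# Hypothetical raw scores on forms X and Y for two subpopulations.
x_a, y_a = rng.normal(50, 10, 500), rng.normal(52, 9, 500)
x_b, y_b = rng.normal(48, 11, 500), rng.normal(51, 9, 500)

x_all = np.concatenate([x_a, x_b])
y_all = np.concatenate([y_a, y_b])

grid = np.linspace(20, 80, 7)  # score points at which to compare functions

# Equating function for the full target population vs. each subpopulation.
e_pop = linear_equate(grid, x_all, y_all)
e_a = linear_equate(grid, x_a, y_a)
e_b = linear_equate(grid, x_b, y_b)

# Root mean square difference summarizes how far each subgroup's
# equating function drifts from the population function; values near
# zero are consistent with population invariance.
rmsd_a = np.sqrt(np.mean((e_a - e_pop) ** 2))
rmsd_b = np.sqrt(np.mean((e_b - e_pop) ** 2))
```

In practice one would evaluate the RMSD against a tolerance tied to the score scale (e.g., a fraction of a score point) rather than expecting exact agreement.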
Gattamorta, Karina A.; Penfield, Randall D.; Myers, Nicholas D. – International Journal of Testing, 2012
Measurement invariance is a common consideration in the evaluation of the validity and fairness of test scores when the tested population contains distinct groups of examinees, such as examinees receiving different forms of a translated test. Measurement invariance in polytomous items has traditionally been evaluated at the item-level,…
Descriptors: Foreign Countries, Psychometrics, Test Bias, Test Items
Wells, Craig S.; Cohen, Allan S.; Patton, Jeffrey – International Journal of Testing, 2009
A primary concern with testing differential item functioning (DIF) using a traditional point-null hypothesis is that a statistically significant result does not imply that the magnitude of DIF is of practical interest. Similarly, for a given sample size, a non-significant result does not allow the researcher to conclude the item is free of DIF. To…
Descriptors: Test Bias, Test Items, Statistical Analysis, Hypothesis Testing
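The gap between statistical significance and practical magnitude of DIF noted above can be illustrated with the Mantel-Haenszel procedure (a common DIF statistic, not necessarily the test developed in the article): its common odds ratio maps onto the ETS delta scale, where small absolute delta values are classified as negligible DIF regardless of sample size. The 2x2 tables below are invented for illustration.

```python
import numpy as np

def mantel_haenszel_odds(strata):
    """Mantel-Haenszel common odds ratio for one dichotomous item.
    Each stratum is a tuple (a, b, c, d): reference-group correct/incorrect
    and focal-group correct/incorrect counts at one matched score level."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Hypothetical counts at three total-score levels, with only a slight
# advantage for the reference group within each stratum.
strata = [(40, 60, 38, 62), (70, 30, 68, 32), (90, 10, 88, 12)]
alpha_mh = mantel_haenszel_odds(strata)

# ETS delta metric: |delta| below 1 is conventionally treated as
# negligible DIF, even when a significance test would reject the null
# in a large enough sample.
delta_mh = -2.35 * np.log(alpha_mh)
```

Here the odds ratio sits just above 1, so the delta value falls well inside the negligible range, which is exactly the situation where a point-null significance test alone can mislead.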