Publication Date
  In 2025 | 0
  Since 2024 | 1
  Since 2021 (last 5 years) | 5
  Since 2016 (last 10 years) | 27
  Since 2006 (last 20 years) | 88
Descriptor
  Difficulty Level | 130
  Test Bias | 130
  Test Items | 130
  Item Response Theory | 42
  Item Analysis | 35
  Foreign Countries | 30
  Comparative Analysis | 26
  Mathematics Tests | 25
  Test Construction | 22
  Correlation | 21
  Scores | 21
Author
  Dorans, Neil J. | 3
  Magis, David | 3
  Santelices, Maria Veronica | 3
  Wilson, Mark | 3
  Baghaei, Purya | 2
  Baird, Jo-Anne | 2
  Camilli, Gregory | 2
  De Boeck, Paul | 2
  DeMars, Christine E. | 2
  Facon, Bruno | 2
  Gonzalez-Tamayo, Eulogio | 2
Audience
  Researchers | 2
Location
  Germany | 4
  Turkey | 4
  United States | 4
  Belgium | 3
  California | 2
  Florida | 2
  Hong Kong | 2
  Algeria | 1
  Austria | 1
  Canada | 1
  Denmark | 1
Bolt, Daniel M.; Liao, Xiangyi – Journal of Educational Measurement, 2021
We revisit the empirically observed positive correlation between DIF and difficulty, studied by Freedle and commonly seen in tests of verbal proficiency when comparing populations with different mean latent proficiency levels. It is shown that a positive correlation between DIF and difficulty estimates is actually an expected result (absent any true…
Descriptors: Test Bias, Difficulty Level, Correlation, Verbal Tests
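To make the mechanism concrete, here is a minimal simulation sketch (ours, not Bolt and Liao's analysis): when responses involve guessing but are summarized with a guessing-free, Rasch-style logit, the apparent between-group gap shrinks on hard items, which reads as DIF favoring the lower-scoring group on precisely the most difficult items. All parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_items, n_per_group = 40, 5000
b = np.linspace(-2, 2, n_items)   # true item difficulties (assumed)
c = 0.2                           # common guessing floor (3PL-like, assumed)

def proportion_correct(mean_theta):
    theta = rng.normal(mean_theta, 1.0, size=(n_per_group, 1))
    p = c + (1 - c) / (1 + np.exp(-(theta - b)))   # 3PL with discrimination 1
    return (rng.random(p.shape) < p).mean(axis=0)  # observed per-item p-values

p_ref = proportion_correct(0.0)    # reference group: higher mean proficiency
p_foc = proportion_correct(-1.0)   # focal group: lower mean proficiency

logit = lambda p: np.log(p / (1 - p))
gap = logit(p_ref) - logit(p_foc)  # group gap under a guessing-free summary
# The gap narrows on hard items (both groups approach the guessing floor), so
# hard items appear to "favor" the focal group: a DIF-difficulty correlation.
print(np.corrcoef(b, gap)[0, 1])   # strongly negative correlation of gap with b
```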
Xue, Kang; Huggins-Manley, Anne Corinne; Leite, Walter – Grantee Submission, 2020
In data collected from virtual learning environments (VLEs), item response theory (IRT) models can be used to guide the ongoing measurement of student ability. However, such applications of IRT rely on unbiased item parameter estimates associated with test items in the VLE. Without formal piloting of the items, one can expect a large amount of…
Descriptors: Virtual Classrooms, Item Response Theory, Test Bias, Test Items
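As background on how such VLE applications typically score ability once item parameters are fixed, here is a minimal 2PL sketch (a generic illustration under assumed parameter values, not code from the study):

```python
import numpy as np
from scipy.optimize import minimize_scalar

a = np.array([1.2, 0.8, 1.5, 1.0])   # discriminations (hypothetical)
b = np.array([-0.5, 0.0, 0.7, 1.2])  # difficulties (hypothetical)
x = np.array([1, 1, 0, 0])           # one student's scored responses

def neg_log_likelihood(theta):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # 2PL success probabilities
    return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

theta_hat = minimize_scalar(neg_log_likelihood, bounds=(-4, 4), method="bounded").x
print(f"estimated ability: {theta_hat:.2f}")
# Biased a/b estimates (e.g., from unpiloted VLE items) propagate directly into theta_hat.
```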
Rodriguez, Rebekah M.; Silvia, Paul J.; Kaufman, James C.; Reiter-Palmon, Roni; Puryear, Jeb S. – Creativity Research Journal, 2023
The original 90-item Creative Behavior Inventory (CBI) was a landmark self-report scale in creativity research, and the 28-item brief form developed nearly 20 years ago continues to be a popular measure of everyday creativity. Relatively little is known, however, about the psychometric properties of this widely used scale. In the current research,…
Descriptors: Creativity Tests, Creativity, Creative Thinking, Psychometrics
Huggins-Manley, Anne Corinne; Qiu, Yuxi; Penfield, Randall D. – International Journal of Testing, 2018
Score equity assessment (SEA) refers to an examination of population invariance of equating across two or more subpopulations of test examinees. Previous SEA studies have shown that score equity may be present for examinees scoring at particular test score ranges but absent for examinees scoring at other score ranges. No studies to date have…
Descriptors: Equated Scores, Test Bias, Test Items, Difficulty Level
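The core SEA comparison can be sketched roughly as follows (a simplified illustration with synthetic scores, not the authors' procedure): compute an equipercentile equating in the full population and in a subgroup, then inspect where along the score scale the two functions diverge.

```python
import numpy as np

def equipercentile(x_scores, y_scores, grid):
    # Map each form-X score to the form-Y score with the same percentile rank.
    ranks = np.array([np.mean(x_scores <= g) for g in grid])
    return np.quantile(y_scores, ranks)

rng = np.random.default_rng(0)
grid = np.arange(0, 41)                  # 40-item number-correct scale (assumed)
x_all = rng.binomial(40, 0.60, 8000)     # form X scores, full population
y_all = rng.binomial(40, 0.55, 8000)     # form Y scores, full population
sub = rng.random(8000) < 0.3             # hypothetical subgroup indicator

diff = equipercentile(x_all, y_all, grid) - equipercentile(x_all[sub], y_all[sub], grid)
print(np.round(diff, 2))  # near zero mid-scale but large in the tails = equity fails there
```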
Tim Jacobbe; Bob delMas; Brad Hartlaub; Jeff Haberstroh; Catherine Case; Steven Foti; Douglas Whitaker – Numeracy, 2023
The development of assessments as part of the funded LOCUS project is described. The assessments measure students' conceptual understanding of statistics as outlined in the GAISE PreK-12 Framework. Results are reported from a large-scale administration to 3,430 students in grades 6 through 12 in the United States. Items were designed to assess…
Descriptors: Statistics Education, Common Core State Standards, Student Evaluation, Elementary School Students
Qi Huang; Daniel M. Bolt; Weicong Lyu – Large-scale Assessments in Education, 2024
Large scale international assessments depend on invariance of measurement across countries. An important consideration when observing cross-national differential item functioning (DIF) is whether the DIF actually reflects a source of bias, or might instead be a methodological artifact reflecting item response theory (IRT) model misspecification.…
Descriptors: Test Items, Item Response Theory, Test Bias, Test Validity
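For reference, one standard observed-score DIF check in this literature is the Mantel-Haenszel odds ratio; the sketch below is a generic implementation (not necessarily the authors' method) that conditions on proficiency via the rest score:

```python
import numpy as np

def mantel_haenszel(resp, item, group):
    # resp: n x J matrix of 0/1 scores; group: 1 = reference, 0 = focal
    rest = resp.sum(axis=1) - resp[:, item]   # matching score excluding the item
    num = den = 0.0
    for s in np.unique(rest):
        m = rest == s
        t = m.sum()
        a_ = np.sum(m & (group == 1) & (resp[:, item] == 1))  # reference correct
        b_ = np.sum(m & (group == 1) & (resp[:, item] == 0))  # reference incorrect
        c_ = np.sum(m & (group == 0) & (resp[:, item] == 1))  # focal correct
        d_ = np.sum(m & (group == 0) & (resp[:, item] == 0))  # focal incorrect
        num += a_ * d_ / t
        den += b_ * c_ / t
    return num / den if den else float("nan")  # ~1.0 suggests no DIF on this item
```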
Kuang, Huan; Sahin, Fusun – Large-scale Assessments in Education, 2023
Background: Examinees may not make enough effort when responding to test items if the assessment has no consequence for them. These disengaged responses can be problematic in low-stakes, large-scale assessments because they can bias item parameter estimates. However, the amount of bias, and whether this bias is similar across administrations, is…
Descriptors: Test Items, Comparative Analysis, Mathematics Tests, Reaction Time
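A common way to operationalize disengaged responding is response-time flagging; the sketch below (our illustration, with an assumed fixed threshold rather than the paper's criterion) contrasts item p-values computed with and without rapid responses:

```python
import numpy as np

def engaged_vs_naive_pvalues(resp, rt, threshold_s=5.0):
    # resp, rt: n_examinees x n_items arrays of 0/1 scores and response times (seconds)
    engaged = rt >= threshold_s                  # responses slower than the threshold
    p_naive = resp.mean(axis=0)                  # p-values using all responses
    p_engaged = np.nanmean(np.where(engaged, resp, np.nan), axis=0)
    return p_naive, p_engaged                    # gaps signal engagement-driven bias
```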
Parry, James R. – Online Submission, 2020
This paper presents research on, and provides a method for, ensuring that parallel assessments generated from a large test-item database maintain equitable difficulty and content coverage each time the assessment is presented. To maintain fairness and validity, it is important that all instances of an assessment that is intended to test the…
Descriptors: Culture Fair Tests, Difficulty Level, Test Items, Test Validity
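One plausible realization of such a method (a sketch under our own assumptions, not Parry's algorithm) is to sample items per content area from the bank and accept a form only when its mean difficulty lands within a tolerance band:

```python
import random

def build_form(bank, blueprint, target_p, tol=0.05, max_tries=1000):
    # bank: list of dicts like {"id": 17, "area": "algebra", "p": 0.62}
    # blueprint: items required per content area, e.g. {"algebra": 10, "geometry": 8}
    by_area = {}
    for item in bank:
        by_area.setdefault(item["area"], []).append(item)
    for _ in range(max_tries):
        form = [it for area, k in blueprint.items()
                for it in random.sample(by_area[area], k)]
        mean_p = sum(it["p"] for it in form) / len(form)
        if abs(mean_p - target_p) <= tol:
            return form   # content coverage and average difficulty both satisfied
    raise RuntimeError("no candidate form met the difficulty tolerance")
```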
Bundsgaard, Jeppe – Large-scale Assessments in Education, 2019
International large-scale assessments like the International Computer and Information Literacy Study (ICILS) (Fraillon et al., International Association for the Evaluation of Educational Achievement (IEA), 2015) provide important, empirically based knowledge, through their proficiency scales, of what characterizes tasks at different difficulty levels,…
Descriptors: Test Bias, International Assessment, Test Items, Difficulty Level
Rutkowski, Leslie; Rutkowski, David; Liaw, Yuan-Ling – Assessment in Education: Principles, Policy & Practice, 2019
Modern international studies of educational achievement have grown in terms of participating educational systems. Accompanying this development is an increase in heterogeneity, as more and different kinds of educational systems take part. This growth has been particularly pronounced among low-performing, less economically developed systems.…
Descriptors: International Assessment, Secondary School Students, Foreign Countries, Achievement Tests
Andrich, David; Marais, Ida – Journal of Educational Measurement, 2018
Even though guessing biases difficulty estimates as a function of item difficulty in the dichotomous Rasch model, assessment programs whose tests include multiple-choice items often construct scales using this model. Research has shown that when all items are multiple-choice, this bias can largely be eliminated. However, many assessments have…
Descriptors: Multiple Choice Tests, Test Items, Guessing (Tests), Test Bias
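The source of the bias can be written down directly (our worked illustration in standard 3PL/Rasch notation, not the authors' derivation):

```latex
% Illustrative derivation, not the authors' notation: 3PL response function and
% the difficulty a guessing-free (Rasch) summary would imply at ability theta.
\[
  P(X = 1 \mid \theta) \;=\; c + (1 - c)\,\frac{e^{\theta - b}}{1 + e^{\theta - b}},
  \qquad
  \tilde{b} \;=\; \theta - \operatorname{logit} P(X = 1 \mid \theta).
\]
```

For easy items (b well below theta), P approaches 1 and the implied difficulty tracks b; for hard items, P approaches the guessing floor c > 0, so the implied difficulty stays bounded while the true b grows. Difficulty is thus increasingly underestimated as items get harder, which is exactly the difficulty-dependent bias the abstract describes.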
Retnawati, Heri – International Journal of Assessment Tools in Education, 2018
This study identified the load, type, and significance of differential item functioning (DIF) in constructed-response items using the partial credit model (PCM). The data in the study were the students' instruments and their responses to PISA-like test items completed by 386 ninth-grade students and 460…
Descriptors: Test Bias, Test Items, Responses, Grade 9
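For reference, the partial credit model the study applies has the standard form below (generic PCM notation; the paper's parameterization may differ):

```latex
\[
  P(X_{ij} = x \mid \theta_j) \;=\;
  \frac{\exp\!\sum_{k=0}^{x}\,(\theta_j - \delta_{ik})}
       {\sum_{h=0}^{m_i}\exp\!\sum_{k=0}^{h}\,(\theta_j - \delta_{ik})},
  \qquad x = 0, 1, \dots, m_i,
\]
```

with the usual convention that the k = 0 term is identically zero. DIF in item i is then assessed by testing whether the step difficulties delta_ik differ between groups after accounting for theta.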
Rutkowski, David; Rutkowski, Leslie; Liaw, Yuan-Ling – Educational Measurement: Issues and Practice, 2018
Participation in international large-scale assessments has grown over time, with the largest, the Programme for International Student Assessment (PISA), including more than 70 economically and educationally diverse education systems. To help accommodate large achievement differences among participants, in 2009 PISA offered…
Descriptors: Educational Assessment, Foreign Countries, Achievement Tests, Secondary School Students
Matlock, Ki Lynn; Turner, Ronna – Educational and Psychological Measurement, 2016
When constructing multiple test forms, test developers often make the number of items and the total test difficulty equivalent across forms. Not all test developers match the number of items and/or average item difficulty within subcontent areas. In this simulation study, six test forms were constructed having an equal number of items and average item difficulty overall.…
Descriptors: Item Response Theory, Computation, Test Items, Difficulty Level
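A small sketch of why within-subarea matching matters (our illustration under assumed 2PL parameters, not the study's simulation): two forms with identical average difficulty can still have different test information functions when difficulty is spread differently across subcontent areas.

```python
import numpy as np

def test_information(theta, a, b):
    # 2PL Fisher information summed over items: I(theta) = sum a^2 * p * (1 - p)
    p = 1.0 / (1.0 + np.exp(-a[:, None] * (theta[None, :] - b[:, None])))
    return np.sum(a[:, None] ** 2 * p * (1 - p), axis=0)

theta = np.linspace(-3, 3, 61)
a = np.ones(20)                          # equal discriminations (assumed)
b_form_a = np.repeat([-1.0, 1.0], 10)    # difficulty balanced within subareas
b_form_b = np.repeat([-2.0, 2.0], 10)    # same mean difficulty, wider spread
delta = test_information(theta, a, b_form_a) - test_information(theta, a, b_form_b)
print(np.round(delta, 2))  # equal overall difficulty, unequal precision across theta
```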
Setari, Anthony Philip – ProQuest LLC, 2016
The purpose of this study was to construct a holistic education school evaluation tool using Montessori Erdkinder principles, and begin the validation process of examining the proposed tool. This study addresses a vital need in the holistic education community for a school evaluation tool. The tool construction process included using Erdkinder…
Descriptors: Holistic Approach, Montessori Method, Test Bias, Item Response Theory