Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 1
  Since 2016 (last 10 years): 2
  Since 2006 (last 20 years): 4
Descriptor
  Test Items: 5
  Testing Programs: 5
  Item Response Theory: 4
  Computation: 2
  Difficulty Level: 2
  Academic Ability: 1
  Artificial Intelligence: 1
  Beliefs: 1
  Computer Software: 1
  Criterion Referenced Tests: 1
  Cutting Scores: 1
Source
  Educational and Psychological Measurement: 5
Author
  Wyse, Adam E.: 2
  Babcock, Ben: 1
  Chen, Hui-Fang: 1
  Fan, Xitao: 1
  Huggins-Manley, Anne Corinne: 1
  Jin, Kuan-Yu: 1
  Leite, Walter: 1
  Wang, Wen-Chung: 1
  Xue, Kang: 1
Publication Type
  Journal Articles: 5
  Reports - Research: 5
Education Level
  Elementary Education: 1
  Grade 11: 1
  Grade 5: 1
  Grade 8: 1
  Junior High Schools: 1
  Middle Schools: 1
  Secondary Education: 1
Location
  Florida: 1
  Hong Kong: 1
  United States: 1
Assessments and Surveys
  Program for International Student Assessment: 1
  Trends in International Mathematics and Science Study: 1
Xue, Kang; Huggins-Manley, Anne Corinne; Leite, Walter – Educational and Psychological Measurement, 2022
In data collected from virtual learning environments (VLEs), item response theory (IRT) models can be used to guide the ongoing measurement of student ability. However, such applications of IRT rely on unbiased item parameter estimates associated with test items in the VLE. Without formal piloting of the items, one can expect a large amount of…
Descriptors: Virtual Classrooms, Artificial Intelligence, Item Response Theory, Item Analysis
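The point of the abstract is that IRT ability estimates inherit whatever bias sits in the item parameters. As an illustration only (not code from the article), a minimal 2PL sketch in Python; the 0.4-logit difficulty bias is an arbitrary assumption standing in for un-piloted VLE items:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def p_2pl(theta, a, b):
    """2PL item response function: probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def estimate_theta(responses, a, b):
    """Maximum-likelihood ability estimate with item parameters held fixed."""
    def neg_loglik(theta):
        p = p_2pl(theta, a, b)
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    return minimize_scalar(neg_loglik, bounds=(-4, 4), method="bounded").x

rng = np.random.default_rng(0)
a_true = np.ones(20)
b_true = rng.normal(0.0, 1.0, 20)
responses = (rng.random(20) < p_2pl(0.5, a_true, b_true)).astype(int)

# The same response pattern scored with biased difficulties shifts the
# ability estimate by roughly the size of the parameter bias.
b_biased = b_true + 0.4
print(estimate_theta(responses, a_true, b_true))   # near the true theta of 0.5
print(estimate_theta(responses, a_true, b_biased)) # shifted upward
```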
Wyse, Adam E.; Babcock, Ben – Educational and Psychological Measurement, 2016
Continuously administered examination programs, particularly credentialing programs that require graduation from educational programs, often experience seasonality, where distributions of examinee ability may differ over time. Such seasonality may affect the quality of important statistical processes, such as item response theory (IRT) item…
Descriptors: Test Items, Item Response Theory, Computation, Licensing Examinations (Professions)
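Seasonality matters because observed item statistics confound item properties with the ability distribution of whoever tests in a given window. A hypothetical simulation (not the authors' analysis) makes the point: the item below never changes, yet its observed proportion correct drifts with the cohort:

```python
import numpy as np

rng = np.random.default_rng(1)

def p_2pl(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

a, b = 1.2, 0.0  # the item's parameters are held constant
for label, mu in [("high-ability season", 0.0), ("low-ability season", -0.5)]:
    theta = rng.normal(mu, 1.0, 5000)
    x = (rng.random(5000) < p_2pl(theta, a, b)).astype(int)
    print(f"{label}: observed proportion correct = {x.mean():.3f}")

# Calibrating items on a single seasonal window therefore risks biased
# parameter estimates unless the ability shift is modeled.
```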
Wang, Wen-Chung; Chen, Hui-Fang; Jin, Kuan-Yu – Educational and Psychological Measurement, 2015
Many scales contain both positively and negatively worded items. Reverse recoding of negatively worded items might not be enough for them to function as positively worded items do. In this study, we commented on the drawbacks of existing approaches to wording effects in mixed-format scales and used bi-factor item response theory (IRT) models to…
Descriptors: Item Response Theory, Test Format, Language Usage, Test Items
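As a sketch of the modeling idea, assuming dichotomous items for brevity (the paper's exact parameterization may differ): a bi-factor 2PL in which every item loads on the general trait and negatively worded items additionally load on a wording factor:

```python
import numpy as np

def p_bifactor(theta_g, theta_w, a_g, a_w, b, negatively_worded):
    """Bi-factor 2PL sketch: all items load on the general trait theta_g;
    negatively worded items also load on a wording factor theta_w."""
    logit = a_g * (theta_g - b)
    if negatively_worded:
        logit += a_w * theta_w
    return 1.0 / (1.0 + np.exp(-logit))

# Two respondents with identical trait levels but different wording-factor
# levels respond differently to a reverse-keyed item -- the part of the
# response that simple reverse recoding cannot remove.
print(p_bifactor(0.0,  1.0, a_g=1.0, a_w=0.8, b=0.0, negatively_worded=True))
print(p_bifactor(0.0, -1.0, a_g=1.0, a_w=0.8, b=0.0, negatively_worded=True))
```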
Wyse, Adam E. – Educational and Psychological Measurement, 2011
Standard setting is a method used to set cut scores on large-scale assessments. One of the most popular standard setting methods is the Bookmark method, in which panelists are asked to envision a response probability (RP) criterion and to move through a booklet of ordered items on the basis of that criterion. This study investigates whether…
Descriptors: Testing Programs, Standard Setting (Scoring), Cutting Scores, Probability
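Under a Rasch model the cut score implied by a bookmark placement has a closed form, which makes the role of the envisioned RP criterion concrete; the item difficulties below are hypothetical:

```python
import math

def bookmark_cut_score(b_bookmark, rp=0.67):
    """Theta at which the probability of success on the bookmarked item
    equals rp; solves rp = 1 / (1 + exp(-(theta - b))) for theta."""
    return b_bookmark + math.log(rp / (1 - rp))

# Ordered item booklet (Rasch difficulties), bookmark placed on item 4.
difficulties = [-1.2, -0.5, 0.1, 0.6, 1.3]
print(bookmark_cut_score(difficulties[3], rp=0.67))  # ~1.31
print(bookmark_cut_score(difficulties[3], rp=0.50))  # 0.60: the envisioned RP moves the cut
```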

Fan, Xitao – Educational and Psychological Measurement, 1998
This study empirically examined the behavior of item and person statistics derived from item response theory and classical test theory, using data from a large-scale statewide assessment. Findings show that the person and item statistics from the two measurement frameworks are quite comparable. (SLD)
Descriptors: Item Response Theory, State Programs, Statistical Analysis, Test Items
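The comparability finding is easy to reproduce in miniature with a hypothetical Rasch simulation (not Fan's dataset): classical item p-values and number-correct scores track the generating IRT parameters almost perfectly:

```python
import numpy as np

rng = np.random.default_rng(2)
n_persons, n_items = 2000, 30
theta = rng.normal(0.0, 1.0, n_persons)   # person abilities
b = rng.normal(0.0, 1.0, n_items)         # Rasch item difficulties

# Simulate a dichotomous response matrix under the Rasch model.
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
x = (rng.random((n_persons, n_items)) < p).astype(int)

item_pvalues = x.mean(axis=0)   # CTT item difficulty (proportion correct)
total_scores = x.sum(axis=1)    # CTT person statistic (number correct)

print(np.corrcoef(item_pvalues, b)[0, 1])      # strongly negative
print(np.corrcoef(total_scores, theta)[0, 1])  # strongly positive
```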