Zhang, Jinming; Li, Jie – Journal of Educational Measurement, 2016
An IRT-based sequential procedure is developed to monitor items in order to enhance test security. The procedure uses a series of statistical hypothesis tests to examine whether the statistical characteristics of each item under inspection have changed significantly during computerized adaptive testing (CAT) administration. This procedure is compared with a previously developed…
Descriptors: Computer Assisted Testing, Test Items, Difficulty Level, Item Response Theory
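The sketch below illustrates the general idea of sequentially monitoring an item during CAT: each time the item is administered, the observed response is compared with the probability predicted by the item's calibrated parameters, and a CUSUM-type statistic accumulates evidence of drift. This is a minimal sketch of the monitoring concept, not the authors' procedure; the 3PL parameterization, the drift allowance, and the control limit are illustrative assumptions.

```python
import math

def p_correct(theta, a, b, c=0.0):
    # 3PL item response function: guessing floor c, slope a, difficulty b
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def cusum_item_monitor(responses, thetas, a, b, c=0.0,
                       drift_allowance=0.5, control_limit=5.0):
    # Accumulate standardized residuals over successive administrations.
    # A run of examinees doing better than their abilities predict
    # (e.g., after item exposure) pushes the statistic upward.
    # Returns the administration index at which the statistic first
    # crosses the control limit, or None if it never does.
    s = 0.0
    for i, (x, theta) in enumerate(zip(responses, thetas), start=1):
        p = p_correct(theta, a, b, c)
        z = (x - p) / math.sqrt(p * (1.0 - p))
        s = max(0.0, s + z - drift_allowance)
        if s > control_limit:
            return i
    return None
```

In a real procedure the control limit would be calibrated to a target false-alarm rate; the fixed value here is only for demonstration.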
Jin, Kuan-Yu; Wang, Wen-Chung – Journal of Educational Measurement, 2014
Sometimes, test-takers may not be able to attempt all items to the best of their ability (with full effort) due to personal factors (e.g., low motivation) or testing conditions (e.g., time limit), resulting in poor performance on certain items, especially those located toward the end of a test. Standard item response theory (IRT) models fail to…
Descriptors: Student Evaluation, Item Response Theory, Models, Simulation
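As a rough illustration of modeling position-related performance decline, the sketch below adds a person-specific decline term to a Rasch-type model, so that effective ability decreases over the course of the test. The linear decline form and the parameter names are assumptions made for illustration, not necessarily the specific model proposed in the article.

```python
import math

def p_decline(theta, b, position, n_items, delta):
    # Rasch-type model with an effort-decline term: effective ability
    # drops linearly with item position; delta = 0 recovers plain Rasch
    effective_theta = theta - delta * (position - 1) / (n_items - 1)
    return 1.0 / (1.0 + math.exp(-(effective_theta - b)))

# The same item is harder for a fatiguing examinee when it comes late:
print(p_decline(0.5, 0.0, position=1,  n_items=40, delta=1.5))   # ~0.62
print(p_decline(0.5, 0.0, position=40, n_items=40, delta=1.5))   # ~0.27
```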

Albanese, Mark A. – Journal of Educational Measurement, 1988
Estimates of the effects of use of formula scoring on the individual examinee's score are presented. Results for easy, moderate, and hard tests are examined. Using test characteristics from several studies shows that some examinees would increase scores substantially if they were to answer items omitted under formula directions. (SLD)
Descriptors: Difficulty Level, Guessing (Tests), Scores, Scoring Formulas
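For reference, classical formula scoring corrects number-right for guessing as S = R − W/(k − 1), where k is the number of options per item and omitted items score zero. The toy numbers below are hypothetical, not from the study, but they show the effect the abstract describes: an examinee who answers omitted items at better-than-chance accuracy raises the formula score.

```python
def formula_score(rights, wrongs, k):
    # S = R - W/(k - 1); omissions contribute nothing
    return rights - wrongs / (k - 1)

# 50-item, 4-option test: 30 right, 10 wrong, 10 omitted
print(formula_score(30, 10, k=4))   # 26.67 with omissions left blank
# Answering the 10 omitted items at better-than-chance accuracy
# (5 right, 5 wrong, vs. 2.5 right expected from pure guessing):
print(formula_score(35, 15, k=4))   # 30.0 -- a substantial gain
```

Pure guessing leaves the expected formula score unchanged (the expected gain per guessed item is 1/k − (1/k) = 0), which is why only better-than-chance answering produces the gains noted in the abstract.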

Slinde, Jeffrey A.; Linn, Robert L. – Journal of Educational Measurement, 1978
Use of the Rasch model for vertical equating of tests is discussed. Although use of the model is promising, empirical results raise questions about the adequacy of the Rasch model. Latent trait models with more parameters may be necessary. (JKS)
Descriptors: Achievement Tests, Difficulty Level, Equated Scores, Higher Education
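For context, the Rasch model gives P(correct) = exp(θ − b)/(1 + exp(θ − b)), and its appeal for vertical equating is that, if the model fits, an examinee's ability estimate should not depend on which test level was taken. The sketch below uses illustrative difficulties and responses, not data from the study, to estimate θ by Newton-Raphson from two forms of different difficulty.

```python
import math

def p_rasch(theta, b):
    # Rasch item response function
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def mle_theta(responses, difficulties, iterations=25):
    # Newton-Raphson maximum-likelihood ability estimate; under a
    # fitting Rasch model an easy form and a hard form should yield
    # comparable theta estimates for the same examinee
    theta = 0.0
    for _ in range(iterations):
        probs = [p_rasch(theta, b) for b in difficulties]
        gradient = sum(x - p for x, p in zip(responses, probs))
        information = sum(p * (1.0 - p) for p in probs)
        theta += gradient / information
    return theta

easy_form = [-1.5, -1.0, -0.5, 0.0, 0.5]
hard_form = [0.5, 1.0, 1.5, 2.0, 2.5]
print(mle_theta([1, 1, 1, 1, 0], easy_form))
print(mle_theta([1, 1, 0, 0, 0], hard_form))
```

The empirical question the article raises is precisely whether this invariance holds in practice; when it does not, models with more parameters may be needed.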

Lord, Frederic M. – Journal of Educational Measurement, 1971
Modifications of administration and item arrangement of a conventional test can force a match between item difficulty levels and the ability level of the examinee. Although different examinees take different sets of items, the scoring method provides comparable scores for all. Furthermore, the test is self-scoring. These advantages are obtained…
Descriptors: Academic Ability, Difficulty Level, Measurement Techniques, Models
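The design Lord describes is the flexilevel test: items are ordered by difficulty, the examinee starts in the middle, and branches to the next unused harder item after a correct answer or the next unused easier item after an incorrect one. The sketch below shows only this branching logic; the self-scoring answer-sheet mechanics and the exact scoring rule are omitted, and the function names are hypothetical.

```python
def flexilevel_path(n_items, is_correct, length=None):
    # Items are indexed 0..n_items-1 from easiest to hardest.
    # A flexilevel test administers roughly half of them.
    length = length or (n_items + 1) // 2
    mid = n_items // 2
    next_harder, next_easier = mid + 1, mid - 1
    pos, taken = mid, []
    for _ in range(length):
        taken.append(pos)
        if is_correct(pos):                 # right: branch upward
            pos, next_harder = next_harder, next_harder + 1
        else:                               # wrong: branch downward
            pos, next_easier = next_easier, next_easier - 1
    return taken

# An examinee who can answer everything below difficulty rank 6:
print(flexilevel_path(9, lambda rank: rank < 6))   # [4, 5, 6, 3, 7]
```

Because the administered set converges on the examinee's ability level, examinees taking different item sets can still receive comparable scores, which is the property the abstract highlights.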

Huck, Schuyler W. – Journal of Educational Measurement, 1978
Providing examinees with advance knowledge of the difficulty of an item led to an increase in test performance with no loss of reliability. This finding was consistent across several test formats. (Author/JKS)
Descriptors: Difficulty Level, Feedback, Higher Education, Item Analysis