Schroeders, Ulrich; Robitzsch, Alexander; Schipolowski, Stefan – Journal of Educational Measurement, 2014
C-tests are a specific variant of cloze tests that are considered time-efficient, valid indicators of general language proficiency. They are commonly analyzed with models of item response theory assuming local item independence. In this article we estimated local interdependencies for 12 C-tests and compared the changes in item difficulties,…
Descriptors: Comparative Analysis, Psychometrics, Cloze Procedure, Language Tests
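For readers who do not work with IRT: local item independence, the assumption the authors probe, says that conditional on ability the joint response distribution factorizes. A minimal statement in standard notation (ours, not the article's), with b_i the difficulty of item i under a Rasch model:

```latex
% Local item independence: conditional on ability \theta, responses factorize
P(X_1 = x_1, \dots, X_k = x_k \mid \theta) = \prod_{i=1}^{k} P(X_i = x_i \mid \theta)
% with each Rasch item characteristic curve given by
P(X_i = 1 \mid \theta) = \frac{e^{\theta - b_i}}{1 + e^{\theta - b_i}}
```

Dependence among the gaps within a C-test passage breaks this factorization, which is why the authors re-estimate item difficulties under models that allow for it.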

Jin, Kuan-Yu; Wang, Wen-Chung – Journal of Educational Measurement, 2014
Sometimes, test-takers may not be able to attempt all items to the best of their ability (with full effort) due to personal factors (e.g., low motivation) or testing conditions (e.g., time limit), resulting in poor performances on certain items, especially those located toward the end of a test. Standard item response theory (IRT) models fail to…
Descriptors: Student Evaluation, Item Response Theory, Models, Simulation
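The failure mode described here is easy to reproduce. Below is a minimal simulation of the data pattern, not the authors' model; all parameter values are arbitrary. Some examinees guess at random on end-of-test items, and those items' difficulty statistics are distorted as a result:

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 2000, 40
theta = rng.normal(0, 1, n_persons)              # latent abilities
b = np.linspace(-2, 2, n_items)                  # item difficulties

# Full-effort responses under a Rasch model.
prob = 1 / (1 + np.exp(-(theta[:, None] - b)))
resp = (rng.random((n_persons, n_items)) < prob).astype(int)

# Low effort: 30% of examinees answer the last 10 items at random
# (25% chance correct, as on a four-option item).
lazy = rng.random(n_persons) < 0.30
guesses = (rng.random((n_persons, 10)) < 0.25).astype(int)
resp[lazy, -10:] = guesses[lazy]

# The end-of-test items now look harder than their true difficulties warrant.
print("observed p-values, last 10 items:", resp[:, -10:].mean(axis=0).round(2))
```

A standard calibration would read these depressed p-values as genuinely harder items rather than as an effort artifact.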

Terwilliger, James S.; Lele, Kaustubh – Journal of Educational Measurement, 1979
Different indices for the internal consistency, reproducibility, or homogeneity of a test are based upon highly similar conceptual frameworks. Illustrations are presented to demonstrate how the maximum and minimum values of KR20 are influenced by test difficulty and the shape of the distribution of test scores. (Author/CTM)
Descriptors: Difficulty Level, Item Analysis, Mathematical Formulas, Statistical Analysis
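The statistic at issue is KR-20, whose formula makes the dependence on difficulty visible: each item contributes variance p_i * q_i, which peaks at p_i = 0.5. A straightforward implementation (ours, for illustration):

```python
import numpy as np

def kr20(resp):
    """KR-20 = (k/(k-1)) * (1 - sum(p_i * q_i) / var(total score)),
    for a persons-by-items matrix of 0/1 responses."""
    k = resp.shape[1]
    p = resp.mean(axis=0)                        # item p-values (difficulty)
    item_var = (p * (1 - p)).sum()               # sum of p_i * q_i
    score_var = resp.sum(axis=1).var(ddof=1)     # total-score variance
    return (k / (k - 1)) * (1 - item_var / score_var)
```

Because the subtracted term couples item difficulties to the total-score variance, the attainable maximum and minimum of KR-20 shift with test difficulty and with the shape of the score distribution, which is the dependence the article illustrates.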

Garg, Rashmi; And Others – Journal of Educational Measurement, 1986
For the purpose of obtaining data to use in test development, multiple matrix sampling plans were compared to examinee sampling plans. Data were simulated for examinees, sampled from a population with a normal distribution of ability, responding to items selected from an item universe. (Author/LMO)
Descriptors: Difficulty Level, Monte Carlo Methods, Sampling, Statistical Studies
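A toy version of the contrast (our construction; the study's actual designs and numbers differ): hold the total number of collected item responses fixed, then estimate item p-values either from a small examinee sample answering every item or from a large examinee sample each answering a small random item subset.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pool, n_items, subset = 10_000, 60, 10
theta = rng.normal(0, 1, n_pool)                 # normally distributed ability
b = np.linspace(-1.5, 1.5, n_items)
prob = 1 / (1 + np.exp(-(theta[:, None] - b)))
resp = (rng.random((n_pool, n_items)) < prob).astype(int)
pop_p = resp.mean(axis=0)                        # target item difficulties

budget = 12_000                                  # total item responses we can afford

# Examinee sampling: 200 examinees answer all 60 items.
ex = rng.choice(n_pool, budget // n_items, replace=False)
p_examinee = resp[ex].mean(axis=0)

# Matrix sampling: 1,200 examinees each answer a random 10-item subset.
rows = rng.choice(n_pool, budget // subset, replace=False)
seen = np.zeros(n_items)
right = np.zeros(n_items)
for r in rows:
    items = rng.choice(n_items, subset, replace=False)
    seen[items] += 1
    right[items] += resp[r, items]
p_matrix = right / seen

for name, est in [("examinee sampling", p_examinee), ("matrix sampling", p_matrix)]:
    print(name, "RMSE:", np.sqrt(((est - pop_p) ** 2).mean()).round(4))
```

Matrix sampling spreads the same budget across many more examinees, tightening population-level item statistics at the cost of sparse per-person data, the kind of trade-off such comparisons are about.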

May, Kim; Nicewander, W. Alan – Journal of Educational Measurement, 1994
Reliabilities and information functions for percentile ranks and number-right scores were compared using item response theory models of standardized achievement tests. Results demonstrate that situations exist in which the percentage of items known by examinees can be accurately estimated, but the percentage of persons falling below a given score…
Descriptors: Achievement Tests, Difficulty Level, Equations (Mathematics), Estimation (Mathematics)
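The information functions referred to are the standard IRT quantities; in the usual notation (ours, not the article's), for a two-parameter logistic item with discrimination a_i and difficulty b_i:

```latex
P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}, \qquad
I_i(\theta) = a_i^{2}\, P_i(\theta)\bigl(1 - P_i(\theta)\bigr)
% Test information sums over items; precision follows directly:
I(\theta) = \sum_{i=1}^{k} I_i(\theta), \qquad
\mathrm{SE}(\hat{\theta}) = \frac{1}{\sqrt{I(\theta)}}
```

The comparison in the article is between how these precision measures behave for number-right scores versus percentile ranks derived from them.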

Huck, Schuyler W. – Journal of Educational Measurement, 1978
Providing examinees with advance knowledge of the difficulty of an item led to an increase in test performance with no loss of reliability. This finding was consistent across several test formats. (Author/JKS)
Descriptors: Difficulty Level, Feedback, Higher Education, Item Analysis

Forsyth, Robert A.; Spratt, Kevin F. – Journal of Educational Measurement, 1980
The effects of two item formats on item difficulty and item discrimination indices for mathematics problem-solving multiple-choice tests were investigated. One format required identifying the proper "set-up" for the item; the other format required completely solving the item. (Author/JKS)
Descriptors: Difficulty Level, Junior High Schools, Multiple Choice Tests, Problem Solving
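The indices being compared are presumably the classical ones (our reading; the definitions below are the textbook versions): difficulty as proportion correct, discrimination as the item-rest point-biserial correlation.

```python
import numpy as np

def classical_item_indices(resp):
    """Difficulty (proportion correct) and discrimination (item-rest
    point-biserial) for a persons-by-items 0/1 response matrix."""
    p = resp.mean(axis=0)
    total = resp.sum(axis=1)
    disc = np.array([
        np.corrcoef(resp[:, i], total - resp[:, i])[0, 1]   # exclude item i
        for i in range(resp.shape[1])
    ])
    return p, disc
```

Comparing the two formats then amounts to computing these indices separately for set-up items and full-solution items and examining the shifts.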

Frisbie, David A.; Sweeney, Daryl C. – Journal of Educational Measurement, 1982
A 100-item five-choice multiple-choice (MC) biology final exam was converted to multiple true-false (MTF) form to yield two content-parallel test forms composed of the two item types. Students found the MTF items easier and preferred MTF over MC; the MTF subtests were more reliable. (Author/GK)
Descriptors: Biology, College Science, Comparative Analysis, Difficulty Level
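When two forms yield different numbers of scoreable responses, as an MC form and its MTF conversion do, reliability comparisons are often put on an equal footing with the Spearman-Brown formula; whether the authors applied it is not stated in this excerpt:

```latex
\rho_k = \frac{k\,\rho_1}{1 + (k - 1)\,\rho_1}
```

Here rho_1 is the reliability of the original-length form and k is the factor by which the form is lengthened or shortened.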