Ziying Li; A. Corinne Huggins-Manley; Walter L. Leite; M. David Miller; Eric A. Wright – Educational and Psychological Measurement, 2022
The unstructured multiple-attempt (MA) item response data in virtual learning environments (VLEs) are often from student-selected assessment data sets, which include missing data, single-attempt responses, multiple-attempt responses, and unknown growth ability across attempts, leading to a complicated scenario for using this kind of…
Descriptors: Sequential Approach, Item Response Theory, Data, Simulation
Lee, Yi-Hsuan; Zhang, Jinming – International Journal of Testing, 2017
Simulations were conducted to examine the effect of differential item functioning (DIF) on measurement consequences such as total scores, item response theory (IRT) ability estimates, and test reliability in terms of the ratio of true-score variance to observed-score variance and the standard error of estimation for the IRT ability parameter. The…
Descriptors: Test Bias, Test Reliability, Performance, Scores
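The reliability definition this abstract uses — the ratio of true-score variance to observed-score variance — can be sketched numerically. The sample sizes and score distributions below are hypothetical illustrations, not values from the study:

```python
# Hedged sketch: reliability as var(true) / var(observed).
# All simulated values here are made up for illustration.
import random

random.seed(0)
n = 10_000
true_scores = [random.gauss(50, 10) for _ in range(n)]   # hypothetical true scores
errors = [random.gauss(0, 5) for _ in range(n)]          # hypothetical measurement error
observed = [t + e for t, e in zip(true_scores, errors)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

reliability = variance(true_scores) / variance(observed)
# With SD_true = 10 and SD_error = 5, reliability is near 100 / (100 + 25) = 0.8
```

Under these assumed variances the ratio lands near 0.8; adding DIF-contaminated items would inflate observed-score variance and push the ratio down, which is the mechanism the study examines.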
Atalay Kabasakal, Kübra; Arsan, Nihan; Gök, Bilge; Kelecioglu, Hülya – Educational Sciences: Theory and Practice, 2014
This simulation study compared the performances (Type I error and power) of Mantel-Haenszel (MH), SIBTEST, and item response theory-likelihood ratio (IRT-LR) methods under certain conditions. Manipulated factors were sample size, ability differences between groups, test length, the percentage of differential item functioning (DIF), and underlying…
Descriptors: Comparative Analysis, Item Response Theory, Statistical Analysis, Test Bias
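Of the three DIF methods compared here, Mantel-Haenszel is the simplest to sketch: a common odds ratio pooled over total-score strata, often reported on the ETS delta metric. The 2x2 counts per stratum below are invented for illustration:

```python
# Hedged sketch of the Mantel-Haenszel common odds ratio for DIF.
# Stratum counts are hypothetical, not from the study.
from math import log

# Each stratum: (ref_correct, ref_incorrect, focal_correct, focal_incorrect)
strata = [
    (30, 20, 25, 25),
    (40, 10, 35, 15),
    (45, 5, 42, 8),
]

num = den = 0.0
for a, b, c, d in strata:
    t = a + b + c + d
    num += a * d / t   # reference-correct x focal-incorrect
    den += b * c / t   # reference-incorrect x focal-correct

alpha_mh = num / den              # common odds ratio; 1.0 means no DIF
delta_mh = -2.35 * log(alpha_mh)  # ETS delta metric; negative favors reference group
```

A full MH test would add the chi-square statistic with continuity correction; this fragment only shows the effect-size side that the Type I error and power comparisons rest on.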
Wang, Wei – ProQuest LLC, 2013
Mixed-format tests containing both multiple-choice (MC) items and constructed-response (CR) items are now widely used in many testing programs. Mixed-format tests often are considered to be superior to tests containing only MC items although the use of multiple item formats leads to measurement challenges in the context of equating conducted under…
Descriptors: Equated Scores, Test Format, Test Items, Test Length
Lee, Yi-Hsuan; Zhang, Jinming – ETS Research Report Series, 2008
The method of maximum-likelihood is typically applied to item response theory (IRT) models when the ability parameter is estimated while conditioning on the true item parameters. In practice, the item parameters are unknown and need to be estimated first from a calibration sample. Lewis (1985) and Zhang and Lu (2007) proposed the expected response…
Descriptors: Item Response Theory, Comparative Analysis, Computation, Ability
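The maximum-likelihood setup this report starts from — estimating the ability parameter while conditioning on item parameters treated as known — can be sketched with a Newton-Raphson solver under a 2PL model. The item parameters and response pattern are hypothetical:

```python
# Hedged sketch: ML estimation of ability (theta) under a 2PL IRT model,
# conditioning on known item parameters as the abstract describes.
# Items and responses below are made up for illustration.
from math import exp

items = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.2)]  # (a_i, b_i)
responses = [1, 1, 0, 1]                                    # scored 0/1

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + exp(-a * (theta - b)))

theta = 0.0
for _ in range(50):  # Newton-Raphson on the log-likelihood
    grad = hess = 0.0
    for (a, b), u in zip(items, responses):
        p = p_correct(theta, a, b)
        grad += a * (u - p)            # score function
        hess -= a * a * p * (1 - p)    # second derivative (always negative)
    theta -= grad / hess
```

In practice the item parameters are themselves estimates from a calibration sample, which is exactly the error source the expected-response-function approach cited here is meant to handle.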
Yi, Qing; Wang, Tianyou; Ban, Jae-Chun – 2000
Error indices (bias, standard error of estimation, and root mean square error) obtained on different scales of measurement under different test termination rules in a computerized adaptive test (CAT) context were examined. Four ability estimation methods were studied: (1) maximum likelihood estimation (MLE); (2) weighted likelihood estimation…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Error of Measurement
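The three error indices named in this abstract decompose neatly: squared RMSE equals squared bias plus squared standard error. A minimal sketch, with made-up ability estimates for a single true theta:

```python
# Hedged sketch of bias, standard error of estimation, and RMSE
# for replicated ability estimates. Values are hypothetical.
from math import sqrt

true_theta = 0.0
estimates = [0.12, -0.05, 0.30, -0.10, 0.08]  # made-up replications

n = len(estimates)
bias = sum(e - true_theta for e in estimates) / n
mean_est = sum(estimates) / n
se = sqrt(sum((e - mean_est) ** 2 for e in estimates) / n)
rmse = sqrt(sum((e - true_theta) ** 2 for e in estimates) / n)
# Decomposition of mean squared error: rmse**2 == bias**2 + se**2
```

Because the indices depend on the scale of measurement, reporting them on different scales (as the study does across termination rules) can change which estimation method looks best.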

Bergstrom, Betty A.; And Others – Applied Measurement in Education, 1992
Effects of altering test difficulty on examinee ability measures and test length in a computer adaptive test were studied for 225 medical technology students in 3 test difficulty conditions. Results suggest that, with an item pool of sufficient depth and breadth, acceptable targeting to test difficulty is possible. (SLD)
Descriptors: Ability, Adaptive Testing, Change, College Students