Brennan, Robert L.; Kim, Stella Y.; Lee, Won-Chan – Educational and Psychological Measurement, 2022
This article extends multivariate generalizability theory (MGT) to tests with different random-effects designs for each level of a fixed facet. There are numerous situations in which the design of a test and the resulting data structure are not definable by a single design. One example is mixed-format tests that are composed of multiple-choice and…
Descriptors: Multivariate Analysis, Generalizability Theory, Multiple Choice Tests, Test Construction
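As context for the abstract above, a minimal univariate sketch of the building block MGT generalizes: variance components for a fully crossed persons x items (p x i) random-effects design, estimated from expected mean squares. The data are synthetic and the formulas are standard G-theory, not the article's multivariate extension.

```python
import numpy as np

# Synthetic 0/1 response matrix with genuine person variance:
# rows = persons, columns = items (hypothetical data).
rng = np.random.default_rng(0)
ability = rng.normal(size=50)
X = (rng.random((50, 10)) < 1 / (1 + np.exp(-ability[:, None]))).astype(float)

n_p, n_i = X.shape
grand = X.mean()
p_means = X.mean(axis=1)   # person means
i_means = X.mean(axis=0)   # item means

ss_p = n_i * ((p_means - grand) ** 2).sum()
ss_i = n_p * ((i_means - grand) ** 2).sum()
ss_res = ((X - p_means[:, None] - i_means[None, :] + grand) ** 2).sum()

ms_p = ss_p / (n_p - 1)
ms_i = ss_i / (n_i - 1)
ms_res = ss_res / ((n_p - 1) * (n_i - 1))

# Solve the expected mean squares of the fully crossed random design.
var_res = ms_res
var_p = (ms_p - ms_res) / n_i   # universe-score (person) variance
var_i = (ms_i - ms_res) / n_p   # item variance

# Generalizability coefficient for relative decisions on an n_i-item form.
g_coef = var_p / (var_p + var_res / n_i)
print(f"var_p={var_p:.4f}  var_i={var_i:.4f}  var_res={var_res:.4f}  Erho2={g_coef:.3f}")
```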
Lee, Chansoon; Qian, Hong – Educational and Psychological Measurement, 2022
Using classical test theory and item response theory, this study applied sequential procedures to a real operational item pool in variable-length computerized adaptive testing (CAT) to detect items whose security may be compromised. Moreover, this study proposed a hybrid threshold approach to improve the detection power of the sequential…
Descriptors: Computer Assisted Testing, Adaptive Testing, Licensing Examinations (Professions), Item Response Theory
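To picture the sequential flavor of such detection procedures, here is a hedged sketch of one generic option: a one-sided CUSUM on the gap between observed correctness and an item's IRT-predicted probability. The 3PL parameters, drift k, and threshold h are invented for illustration; this is not the article's hybrid procedure.

```python
import math

def p3pl(theta, a=1.2, b=0.0, c=0.2):
    """3PL probability of a correct response (illustrative parameters)."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def cusum_flag(responses, thetas, k=0.05, h=3.0):
    """Return the administration index at which the one-sided CUSUM
    crosses h (item answered correctly more often than the model
    predicts), else None."""
    s = 0.0
    for t, (u, theta) in enumerate(zip(responses, thetas)):
        s = max(0.0, s + (u - p3pl(theta)) - k)
        if s > h:
            return t
    return None

# An item that abruptly becomes "too easy" halfway through the stream.
thetas = [0.0] * 200
responses = [i % 2 for i in range(100)] + [1] * 100
print(cusum_flag(responses, thetas))  # flags shortly after the change point
```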
Kalinowski, Steven T. – Educational and Psychological Measurement, 2019
Item response theory (IRT) is a statistical paradigm for developing educational tests and assessing students. IRT, however, currently lacks an established graphical method for examining model fit for the three-parameter logistic model, the most flexible and popular IRT model in educational testing. A method is presented here to do this. The graph,…
Descriptors: Item Response Theory, Educational Assessment, Goodness of Fit, Probability
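The family of graphs this work builds on can be sketched as follows: bin examinees by ability, then overlay the observed proportion correct per bin on the fitted 3PL item characteristic curve. This is the generic empirical-vs-model plot, not necessarily the specific graph the article proposes; data and parameters are simulated.

```python
import numpy as np
import matplotlib.pyplot as plt

def p3pl(theta, a, b, c):
    """3PL item characteristic curve."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

rng = np.random.default_rng(1)
theta = rng.normal(size=5000)
u = rng.random(5000) < p3pl(theta, a=1.0, b=0.3, c=0.15)

# Observed proportion correct in each ability bin.
edges = np.linspace(-3, 3, 13)
mids = (edges[:-1] + edges[1:]) / 2
obs = [u[(theta >= lo) & (theta < hi)].mean() for lo, hi in zip(edges[:-1], edges[1:])]

grid = np.linspace(-3, 3, 200)
plt.plot(grid, p3pl(grid, 1.0, 0.3, 0.15), label="fitted 3PL ICC")
plt.scatter(mids, obs, label="observed proportion correct")
plt.xlabel("theta"); plt.ylabel("P(correct)"); plt.legend()
plt.savefig("icc_fit.png")
```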
Socha, Alan; DeMars, Christine E. – Educational and Psychological Measurement, 2013
Modeling multidimensional test data with a unidimensional model can result in serious statistical errors, such as bias in item parameter estimates. Many methods exist for assessing the dimensionality of a test. The current study focused on DIMTEST. Using simulated data, the effects of sample size splitting for use with the ATFIND procedure for…
Descriptors: Sample Size, Test Length, Correlation, Test Format
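The sample-splitting step the abstract refers to is easy to picture; a minimal sketch with a hypothetical response matrix and an illustrative 50/50 split (the study itself examines how such splitting choices affect the procedure):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(1000, 40))  # hypothetical 0/1 response matrix

# One subsample drives the exploratory search for the assessment
# subtest (ATFIND); the held-out subsample feeds the confirmatory
# DIMTEST statistic, keeping the two stages independent.
idx = rng.permutation(X.shape[0])
cut = X.shape[0] // 2
atfind_sample, dimtest_sample = X[idx[:cut]], X[idx[cut:]]
print(atfind_sample.shape, dimtest_sample.shape)
```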
Kim, Eun Sook; Yoon, Myeongsun; Lee, Taehun – Educational and Psychological Measurement, 2012
Multiple-indicators multiple-causes (MIMIC) modeling is often used to test a latent group mean difference while assuming the equivalence of factor loadings and intercepts over groups. However, this study demonstrated that MIMIC was insensitive to the presence of factor loading noninvariance, which implies that factor loading invariance should be…
Descriptors: Test Items, Simulation, Testing, Statistical Analysis
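A compact way to see the limitation the abstract reports (notation ours, not the article's): in a MIMIC model the grouping covariate z shifts the factor mean and, through direct effects, the item intercepts, but no parameter lets the loadings depend on z.

```latex
\eta_i = \gamma z_i + \zeta_i, \qquad
y_{ij} = \nu_j + \lambda_j \eta_i + \beta_j z_i + \varepsilon_{ij}
```

The latent mean difference test concerns \gamma, and intercept noninvariance surfaces through \beta_j; loading noninvariance would require \lambda_j to vary with z_i, which this specification cannot represent, so that form of misspecification can pass undetected.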
Hauser, Carl; Thum, Yeow Meng; He, Wei; Ma, Lingling – Educational and Psychological Measurement, 2015
When conducting item reviews, analysts evaluate an array of statistical and graphical information to assess the fit of a field test (FT) item to an item response theory model. The process can be tedious, particularly when the number of human reviews (HR) to be completed is large. Furthermore, such a process leads to decisions that are susceptible…
Descriptors: Test Items, Item Response Theory, Research Methodology, Decision Making
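One check that lends itself to the kind of automation the abstract motivates is a chi-square-style comparison of observed and model-implied proportions correct across ability strata. A hedged sketch with simulated data; the article's actual statistics and decision rules may differ.

```python
import numpy as np

def p3pl(theta, a, b, c):
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

def item_fit_stat(theta, u, a, b, c, n_bins=10):
    """Sum over ability strata of n * (observed - expected)^2 / (p(1-p)),
    comparing empirical proportions correct with the fitted 3PL."""
    edges = np.quantile(theta, np.linspace(0, 1, n_bins + 1))
    groups = np.clip(np.digitize(theta, edges[1:-1]), 0, n_bins - 1)
    stat = 0.0
    for g in range(n_bins):
        m = groups == g
        if not m.any():
            continue
        obs = u[m].mean()
        exp = p3pl(theta[m], a, b, c).mean()
        stat += m.sum() * (obs - exp) ** 2 / (exp * (1 - exp))
    return stat

rng = np.random.default_rng(3)
theta = rng.normal(size=2000)
u = (rng.random(2000) < p3pl(theta, 1.0, 0.0, 0.2)).astype(float)
print(item_fit_stat(theta, u, 1.0, 0.0, 0.2))  # small when the item fits
```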
Yoo, Jin Eun – Educational and Psychological Measurement, 2009
This Monte Carlo study investigates the beneficial effect of including auxiliary variables during estimation of confirmatory factor analysis models with multiple imputation. Specifically, it examines the influence of sample size, missing data rates, missingness mechanism combinations, missingness types (linear or convex), and the absence or presence…
Descriptors: Monte Carlo Methods, Research Methodology, Test Validity, Factor Analysis
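The auxiliary-variable idea can be illustrated with a single-imputation stand-in: include a column that predicts missingness in the imputation model even though the analysis model would not use it. A hedged sketch with invented variable names; the study used multiple imputation within CFA, whereas scikit-learn's IterativeImputer shown here produces one completed data set (repeating it with sample_posterior=True approximates multiple imputation).

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(4)
n = 500
aux = rng.normal(size=n)                        # auxiliary variable
y1 = 0.8 * aux + rng.normal(scale=0.6, size=n)  # analysis variables
y2 = 0.5 * aux + rng.normal(scale=0.8, size=n)

df = pd.DataFrame({"y1": y1, "y2": y2, "aux": aux})
df.loc[aux > 0.5, "y1"] = np.nan                # MAR: missingness driven by aux

# Because aux sits in the imputation model, the MAR mechanism is
# recoverable; dropping it would bias the imputations.
completed = IterativeImputer(random_state=0).fit_transform(df)
print(pd.DataFrame(completed, columns=df.columns).describe().round(2))
```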

Green, Kathy – Educational and Psychological Measurement, 1984
Two factors, language difficulty and option set convergence, were experimentally manipulated and their effects on item difficulty assessed. Option convergence had a significant effect on item difficulty, while the effect of language difficulty was not significant. (Author/BW)
Descriptors: Difficulty Level, Error Patterns, Higher Education, Multiple Choice Tests
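The kind of effect the study tests can be expressed as a difference in classical item difficulty (proportion correct) between manipulated item versions; a minimal sketch with invented counts, not the study's data:

```python
from scipy.stats import chi2_contingency

# Rows: correct / incorrect; columns: convergent vs. nonconvergent
# option sets for the same stem (counts are hypothetical).
table = [[120, 150],
         [ 80,  50]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")
```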