Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 1
  Since 2016 (last 10 years): 2
  Since 2006 (last 20 years): 4
Descriptor
  Difficulty Level: 9
  Error of Measurement: 9
  Monte Carlo Methods: 9
  Test Items: 8
  Item Response Theory: 4
  Mathematical Models: 4
  Correlation: 3
  Factor Analysis: 3
  Statistical Bias: 3
  Accuracy: 2
  Comparative Analysis: 2
Source
  Educational and Psychological…: 2
  Applied Measurement in…: 1
  Applied Psychological…: 1
  Journal of Educational…: 1
Author
  Finch, Holmes: 2
  Ahn, Soyeon: 1
  Carlson, James E.: 1
  Chiu, Christopher W. T.: 1
  French, Brian F.: 1
  Huck, Schuyler W.: 1
  Jiao, Hong: 1
  Jin, Ying: 1
  Jones, Patricia B.: 1
  Kamata, Akihito: 1
  Park, Sung Eun: 1
Publication Type
  Reports - Research: 9
  Journal Articles: 5
  Speeches/Meeting Papers: 3
Audience
  Researchers: 2
Assessments and Surveys
  ACT Assessment: 1
Park, Sung Eun; Ahn, Soyeon; Zopluoglu, Cengiz – Educational and Psychological Measurement, 2021
This study presents a new approach to synthesizing differential item functioning (DIF) effect sizes: First, using correlation matrices from each study, we perform a multigroup confirmatory factor analysis (MGCFA) that examines measurement invariance of a test item between two subgroups (i.e., focal and reference groups). Then we synthesize, across…
Descriptors: Item Analysis, Effect Size, Difficulty Level, Monte Carlo Methods
Finch, Holmes; French, Brian F. – Applied Measurement in Education, 2019
The usefulness of item response theory (IRT) models depends, in large part, on the accuracy of item and person parameter estimates. For the standard 3-parameter logistic (3PL) model, for example, these parameters include the item parameters of difficulty, discrimination, and pseudo-chance, as well as the person ability parameter. Several factors impact…
Descriptors: Item Response Theory, Accuracy, Test Items, Difficulty Level
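The 3-parameter logistic model this abstract refers to has a simple closed form; a minimal sketch follows, where the function name and the example parameter values are illustrative, not drawn from the study:

```python
import math

def p_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL model:
    P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b))),
    where a is discrimination, b is difficulty, and c is the
    pseudo-chance (lower-asymptote) parameter."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# When ability equals difficulty (theta == b), the logistic term is 0.5,
# so the probability is exactly halfway between c and 1.
print(p_3pl(theta=0.0, a=1.0, b=0.0, c=0.2))  # ≈ 0.6
```

Monte Carlo recovery studies like this one typically simulate responses from such a model and then compare the estimated a, b, and c values against the generating ones.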
Jiao, Hong; Kamata, Akihito; Wang, Shudong; Jin, Ying – Journal of Educational Measurement, 2012
The applications of item response theory (IRT) models assume local item independence and that examinees are independent of each other. When a representative sample for psychometric analysis is selected using a cluster sampling method in a testlet-based assessment, both local item dependence and local person dependence are likely to be induced.…
Descriptors: Item Response Theory, Test Items, Markov Processes, Monte Carlo Methods
Finch, Holmes – Applied Psychological Measurement, 2010
The accuracy of item parameter estimates in the multidimensional item response theory (MIRT) model context has not been researched in great detail. This study examines the ability of two confirmatory factor analysis models specifically for dichotomous data to properly estimate item parameters using common formulae for converting factor…
Descriptors: Item Response Theory, Computation, Factor Analysis, Models
Chiu, Christopher W. T. – 2000
A procedure was developed to analyze data with missing observations by extracting data from a sparsely filled data matrix into analyzable smaller subsets of data. This subdividing method, based on the conceptual framework of meta-analysis, was accomplished by creating data sets that exhibit structural designs and then pooling variance components…
Descriptors: Difficulty Level, Error of Measurement, Generalizability Theory, Interrater Reliability

Huck, Schuyler W.; And Others – Educational and Psychological Measurement, 1981
Believing that examinee-by-item interaction should be conceptualized as true score variability rather than as a result of errors of measurement, Lu proposed a modification of Hoyt's analysis of variance reliability procedure. Via a computer simulation study, it is shown that Lu's approach does not separate interaction from error. (Author/RL)
Descriptors: Analysis of Variance, Comparative Analysis, Computer Programs, Difficulty Level
Jones, Patricia B.; And Others – 1987
In order to determine the effectiveness of multidimensional scaling (MDS) in recovering the dimensionality of a set of dichotomously scored items, data were simulated in one, two, and three dimensions for a variety of correlations with the underlying latent trait. Similarity matrices were constructed from these data using three margin-sensitive…
Descriptors: Cluster Analysis, Correlation, Difficulty Level, Error of Measurement
Patience, Wayne M.; Reckase, Mark D. – 1979
An experiment was performed with computer-generated data to investigate some of the operational characteristics of tailored testing as they are related to various provisions of the computer program and item pool. With respect to the computer program, two characteristics were varied: the size of the step of increase or decrease in item difficulty…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Error of Measurement
Carlson, James E.; Spray, Judith A. – 1986
This paper discusses methods currently under study for use with multiple-response data. Besides using Bonferroni inequality methods to control the Type I error rate over a set of inferences involving multiple-response data, a recently proposed methodology of plotting the p-values resulting from multiple significance tests was explored. Proficiency…
Descriptors: Cutting Scores, Data Analysis, Difficulty Level, Error of Measurement
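The Bonferroni inequality method mentioned in this abstract amounts to dividing the nominal significance level by the number of simultaneous tests; a minimal sketch, with an illustrative function name and made-up p-values not taken from the paper:

```python
def bonferroni_reject(p_values, alpha=0.05):
    """Reject the i-th null hypothesis when p_i < alpha / m, which
    bounds the familywise Type I error rate at alpha across all
    m tests (by the Bonferroni inequality)."""
    m = len(p_values)
    threshold = alpha / m
    return [p < threshold for p in p_values]

# Three tests at alpha = 0.05 give a per-test threshold of 0.0166...,
# so only the first p-value below leads to rejection.
print(bonferroni_reject([0.001, 0.02, 0.04]))  # → [True, False, False]
```

The trade-off the paper's comparison turns on is that this control is conservative: each individual test becomes harder to reject as the number of inferences grows.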