Showing 1 to 15 of 19 results
Wang, Chun; Chen, Ping; Jiang, Shengyu – Grantee Submission, 2019
Many large-scale educational surveys have moved from linear form design to multistage testing (MST) design. One advantage of MST is that it can provide more accurate latent trait [theta] estimates using fewer items than required by linear tests. However, MST generates incomplete response data by design; hence questions remain as to how to…
Descriptors: Adaptive Testing, Test Items, Item Response Theory, Maximum Likelihood Statistics
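For readers unfamiliar with how incomplete-by-design data enter the estimation, the sketch below shows maximum likelihood ability estimation that uses only the items an examinee was actually routed to. The 2PL model, item parameters, and response pattern are illustrative and are not taken from the article.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def neg_log_lik_observed(theta, responses, a, b):
        """2PL negative log-likelihood over only the items the examinee saw;
        items not administered on the MST route are coded as np.nan."""
        seen = ~np.isnan(responses)
        p = 1.0 / (1.0 + np.exp(-a[seen] * (theta - b[seen])))
        x = responses[seen]
        return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

    # Illustrative pool of 6 items; this examinee was routed past items 3 and 4.
    a = np.array([1.0, 1.3, 0.9, 1.1, 1.6, 0.7])
    b = np.array([-1.0, -0.3, 0.0, 0.4, 0.9, 1.4])
    x = np.array([1, 1, 0, np.nan, np.nan, 0])

    res = minimize_scalar(neg_log_lik_observed, bounds=(-4, 4), args=(x, a, b),
                          method="bounded")
    print("theta estimate from the administered items only:", res.x)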
Chengyu Cui; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Multidimensional item response theory (MIRT) models have generated increasing interest in the psychometrics literature. Efficient approaches for estimating MIRT models with dichotomous responses have been developed, but constructing an equally efficient and robust algorithm for polytomous models has received limited attention. To address this gap,…
Descriptors: Item Response Theory, Accuracy, Simulation, Psychometrics
Kilic, Abdullah Faruk; Dogan, Nuri – International Journal of Assessment Tools in Education, 2021
Weighted least squares (WLS), weighted least squares mean-and-variance-adjusted (WLSMV), unweighted least squares mean-and-variance-adjusted (ULSMV), maximum likelihood (ML), robust maximum likelihood (MLR) and Bayesian estimation methods were compared in mixed item response type data via Monte Carlo simulation. The percentage of polytomous items,…
Descriptors: Factor Analysis, Computation, Least Squares Statistics, Maximum Likelihood Statistics
Finch, Holmes; French, Brian F. – Applied Measurement in Education, 2019
The usefulness of item response theory (IRT) models depends, in large part, on the accuracy of item and person parameter estimates. For the standard 3 parameter logistic model, for example, these parameters include the item parameters of difficulty, discrimination, and pseudo-chance, as well as the person ability parameter. Several factors impact…
Descriptors: Item Response Theory, Accuracy, Test Items, Difficulty Level
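For reference, the item parameters named in the Finch and French abstract correspond to the three-parameter logistic (3PL) item response function; a minimal sketch with illustrative values, not the article's data:

    import numpy as np

    def p_3pl(theta, a, b, c):
        """Probability of a correct response under the 3PL model:
        P(X = 1 | theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))."""
        return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

    # Example: an item with discrimination a = 1.2, difficulty b = 0.5,
    # pseudo-chance c = 0.2, evaluated for an examinee of average ability.
    print(p_3pl(theta=0.0, a=1.2, b=0.5, c=0.2))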
Cetin-Berber, Dee Duygu; Sari, Halil Ibrahim; Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2019
Routing examinees to modules based on their ability level is a very important aspect in computerized adaptive multistage testing. However, the presence of missing responses may complicate estimation of examinee ability, which may result in misrouting of individuals. Therefore, missing responses should be handled carefully. This study investigated…
Descriptors: Computer Assisted Testing, Adaptive Testing, Error of Measurement, Research Problems
Lee, Woo-yeol; Cho, Sun-Joo – Journal of Educational Measurement, 2017
Cross-level invariance in a multilevel item response model can be investigated by testing whether the within-level item discriminations are equal to the between-level item discriminations. Testing the cross-level invariance assumption is important to understand constructs in multilevel data. However, in most multilevel item response model…
Descriptors: Test Items, Item Response Theory, Item Analysis, Simulation
Culpepper, Steven Andrew – Multivariate Behavioral Research, 2009
This study linked nonlinear profile analysis (NPA) of dichotomous responses with an existing family of item response theory models and generalized latent variable models (GLVM). The NPA method offers several benefits over previous internal profile analysis methods: (a) NPA is estimated with maximum likelihood in a GLVM framework rather than…
Descriptors: Profiles, Item Response Theory, Models, Maximum Likelihood Statistics
Lee, Yi-Hsuan; Zhang, Jinming – ETS Research Report Series, 2008
The method of maximum-likelihood is typically applied to item response theory (IRT) models when the ability parameter is estimated while conditioning on the true item parameters. In practice, the item parameters are unknown and need to be estimated first from a calibration sample. Lewis (1985) and Zhang and Lu (2007) proposed the expected response…
Descriptors: Item Response Theory, Comparative Analysis, Computation, Ability
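The issue the Lee and Zhang report addresses can be illustrated with a small sketch: maximum likelihood ability estimation conditions on fixed item parameters, so the estimate changes when calibration-sample estimates are substituted for the true values. The 2PL model, perturbation scheme, and numbers below are illustrative only and do not reproduce the expected-response-function methods of Lewis (1985) or Zhang and Lu (2007).

    import numpy as np
    from scipy.optimize import minimize_scalar

    def neg_log_lik(theta, x, a, b):
        """2PL negative log-likelihood with the item parameters held fixed."""
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return -np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

    def ml_theta(x, a, b):
        return minimize_scalar(neg_log_lik, bounds=(-4, 4), args=(x, a, b),
                               method="bounded").x

    true_a = np.array([1.0, 1.4, 0.9, 1.2])
    true_b = np.array([-0.5, 0.0, 0.5, 1.0])
    x = np.array([1, 1, 0, 0])

    # Calibration error: an operational run uses estimated, not true, parameters.
    rng = np.random.default_rng(0)
    est_a = true_a + rng.normal(0, 0.15, size=4)
    est_b = true_b + rng.normal(0, 0.15, size=4)

    print("theta | true item parameters      :", ml_theta(x, true_a, true_b))
    print("theta | calibrated item parameters:", ml_theta(x, est_a, est_b))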
Li, Yuan H.; Lissitz, Robert W. – 2000
The analytically derived expected asymptotic standard errors (SEs) of maximum likelihood (ML) item estimates can be predicted by a mathematical function without examinees' responses to test items. The empirically determined SEs of marginal maximum likelihood estimation/Bayesian item estimates can be obtained when the same set of items is…
Descriptors: Error of Measurement, Estimation (Mathematics), Item Response Theory, Maximum Likelihood Statistics
Ban, Jae-Chun; Hanson, Bradley A.; Yi, Qing; Harris, Deborah J. – Journal of Educational Measurement, 2002
Compared three online pretest calibration scaling methods through simulation: (1) marginal maximum likelihood with one expectation maximization (EM) cycle (OEM) method; (2) marginal maximum likelihood with multiple EM cycles (MEM); and (3) M. Stocking's method B. MEM produced the smallest average total error in parameter estimation; OEM yielded…
Descriptors: Computer Assisted Testing, Error of Measurement, Maximum Likelihood Statistics, Online Systems
Camilli, Gregory; And Others – Applied Psychological Measurement, 1993
Three potential causes of scale shrinkage (measurement error, restriction of range, and multidimensionality) in item response theory vertical equating are discussed, and a more comprehensive model-based approach to establishing vertical scales is described. Test data from the National Assessment of Educational Progress are used to illustrate the…
Descriptors: Equated Scores, Error of Measurement, Item Response Theory, Maximum Likelihood Statistics
Enders, Craig K. – Educational and Psychological Measurement, 2004
A method for incorporating maximum likelihood (ML) estimation into reliability analyses with item-level missing data is outlined. An ML estimate of the covariance matrix is first obtained using the expectation maximization (EM) algorithm, and coefficient alpha is subsequently computed using standard formulae. A simulation study demonstrated that…
Descriptors: Intervals, Simulation, Test Reliability, Computation
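The "standard formulae" step in the Enders abstract is coefficient alpha computed from an item covariance matrix; a minimal sketch, assuming the EM-based ML covariance estimate has already been obtained (the EM step itself is not shown, and the matrix below is illustrative):

    import numpy as np

    def coefficient_alpha(cov):
        """Cronbach's alpha from a k x k item covariance matrix:
        alpha = (k / (k - 1)) * (1 - trace(cov) / sum(cov))."""
        cov = np.asarray(cov)
        k = cov.shape[0]
        return (k / (k - 1)) * (1.0 - np.trace(cov) / np.sum(cov))

    # Illustrative 3-item covariance matrix (e.g., an EM-based ML estimate).
    sigma = np.array([[1.00, 0.45, 0.40],
                      [0.45, 1.10, 0.50],
                      [0.40, 0.50, 0.95]])
    print(coefficient_alpha(sigma))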
Kim, Seock-Ho; And Others – Applied Psychological Measurement, 1994
Type I error rates of F. M. Lord's chi square test for differential item functioning were investigated using Monte Carlo simulations with marginal maximum likelihood estimation and marginal Bayesian estimation algorithms. Lord's chi square did not provide useful Type I error control for the three-parameter logistic model at these sample sizes.…
Descriptors: Algorithms, Bayesian Statistics, Chi Square, Error of Measurement
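As background for the Kim et al. entry, Lord's chi-square compares an item's parameter estimates across the reference and focal groups, weighting the difference by the summed sampling covariance matrices. A minimal sketch with illustrative numbers, not the simulation conditions of the study:

    import numpy as np
    from scipy.stats import chi2

    def lord_chi_square(v_ref, v_foc, cov_ref, cov_foc):
        """Lord's chi-square for DIF on one item: the difference between the
        reference- and focal-group item parameter estimates, weighted by the
        inverse of the summed sampling covariance matrices."""
        d = np.asarray(v_ref) - np.asarray(v_foc)
        stat = d @ np.linalg.inv(np.asarray(cov_ref) + np.asarray(cov_foc)) @ d
        p_value = chi2.sf(stat, df=d.size)
        return stat, p_value

    # Illustrative (a, b) estimates and covariance matrices for a single item.
    stat, p = lord_chi_square([1.2, 0.3], [1.0, 0.6],
                              [[0.04, 0.00], [0.00, 0.09]],
                              [[0.05, 0.00], [0.00, 0.10]])
    print("chi-square:", stat, "p-value:", p)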
Ban, Jae-Chun; Hanson, Bradley A.; Yi, Qing; Harris, Deborah J. – 2002
The purpose of this study was to compare and evaluate three online pretest item calibration/scaling methods in terms of item parameter recovery when the item responses to the pretest items in the pool would be sparse. The three methods considered were the marginal maximum likelihood estimate with one EM cycle (OEM) method, the marginal maximum…
Descriptors: Adaptive Testing, Computer Assisted Testing, Data Analysis, Error of Measurement
De Ayala, R. J.; And Others – 1991
The robustness of a partial credit (PC) model-based computerized adaptive test's (CAT's) ability estimation to items that did not fit the PC model was investigated. A CAT program was written based on the PC model. The program used maximum likelihood estimation of ability. Item selection was on the basis of information. The simulation terminated…
Descriptors: Adaptive Testing, Computer Assisted Testing, Equations (Mathematics), Error of Measurement
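The "item selection on the basis of information" step in the De Ayala et al. abstract can be sketched as maximum-information selection at the provisional ability estimate. The sketch below uses the simpler 2PL information function rather than the partial credit model of the study, and all values are illustrative:

    import numpy as np

    def item_information_2pl(theta, a, b):
        """Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p)."""
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return a ** 2 * p * (1.0 - p)

    # Maximum-information selection: given a provisional ability estimate,
    # administer the unused item that is most informative at that point.
    a = np.array([0.8, 1.4, 1.1, 1.9])
    b = np.array([-1.0, 0.2, 0.7, 1.5])
    administered = {0}
    theta_hat = 0.3

    info = item_information_2pl(theta_hat, a, b)
    info[list(administered)] = -np.inf          # never reselect an item
    print("next item:", int(np.argmax(info)))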