Peer reviewed
Joo, Seang-Hwane; Ferron, John M.; Moeyaert, Mariola; Beretvas, S. Natasha; Van den Noortgate, Wim – Journal of Experimental Education, 2019
Multilevel modeling has been used to combine single-case experimental design (SCED) data under the assumption of simple level-1 error structures. The purpose of this study is to compare various multilevel analysis approaches for handling potential complexity in the level-1 error structure within SCED data, including approaches assuming simple and complex…
Descriptors: Hierarchical Linear Modeling, Synthesis, Data Analysis, Accuracy
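As a generic illustration of the kind of model being compared (not the authors' analysis), the sketch below fits a two-level model, with measurement occasions nested within cases, using statsmodels; the simulated data, variable names, and the i.i.d. level-1 error assumption are all illustrative.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated SCED-style data: 6 cases, 20 sessions each,
    # 10 baseline (phase=0) and 10 treatment (phase=1) sessions.
    rng = np.random.default_rng(0)
    cases, sessions = 6, 20
    data = pd.DataFrame({
        "case": np.repeat(np.arange(cases), sessions),
        "phase": np.tile(np.r_[np.zeros(10), np.ones(10)], cases),
    })
    data["y"] = (2.0 + 3.0 * data["phase"]                  # fixed effects
                 + rng.normal(0, 1, cases)[data["case"]]    # case-level (level-2) effect
                 + rng.normal(0, 1, len(data)))             # simple i.i.d. level-1 error

    # Random intercept per case; level-1 errors assumed independent and normal.
    model = smf.mixedlm("y ~ phase", data, groups=data["case"])
    print(model.fit().summary())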
Peer reviewed
Park, Sung Eun; Ahn, Soyeon; Zopluoglu, Cengiz – Educational and Psychological Measurement, 2021
This study presents a new approach to synthesizing differential item functioning (DIF) effect size: First, using correlation matrices from each study, we perform a multigroup confirmatory factor analysis (MGCFA) that examines measurement invariance of a test item between two subgroups (i.e., focal and reference groups). Then we synthesize, across…
Descriptors: Item Analysis, Effect Size, Difficulty Level, Monte Carlo Methods
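The synthesis step can be illustrated generically with an inverse-variance weighted average of per-study effect sizes; this is standard fixed-effect meta-analytic pooling, not the authors' specific MGCFA-based procedure, and the values below are invented.

    import numpy as np

    # Hypothetical per-study DIF effect sizes and their standard errors.
    effects = np.array([0.12, 0.30, 0.05, 0.22])
    ses = np.array([0.08, 0.10, 0.06, 0.12])

    # Fixed-effect inverse-variance pooling across studies.
    weights = 1.0 / ses**2
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    print(f"pooled DIF effect = {pooled:.3f} (SE = {pooled_se:.3f})")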
Peer reviewed
PDF on ERIC
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook Henry – ETS Research Report Series, 2015
The purpose of this inquiry was to investigate the effectiveness of item response theory (IRT) proficiency estimators in terms of estimation bias and error under multistage testing (MST). We chose a 2-stage MST design in which 1 adaptation to the examinees' ability levels takes place. It includes 4 modules (1 at Stage 1, 3 at Stage 2) and 3 paths…
Descriptors: Item Response Theory, Computation, Statistical Bias, Error of Measurement
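A minimal sketch of the kind of 2-stage design described (one routing module, three second-stage modules, three paths); the 2PL response model, module difficulties, and routing cutoffs are illustrative assumptions, not the report's actual specification.

    import numpy as np

    rng = np.random.default_rng(1)

    def p_correct(theta, a, b):
        # 2PL item response probability
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def administer(theta, difficulties, a=1.0):
        # Number-correct score on a module of items
        return (rng.random(len(difficulties)) < p_correct(theta, a, np.array(difficulties))).sum()

    stage1 = [-1.0, -0.5, 0.0, 0.5, 1.0]                       # routing module
    stage2 = {"easy": [-1.5] * 10, "medium": [0.0] * 10, "hard": [1.5] * 10}

    theta = rng.normal()                                       # simulated examinee ability
    score1 = administer(theta, stage1)
    # Route to one of the three Stage 2 modules based on the Stage 1 score.
    path = "easy" if score1 <= 1 else "hard" if score1 >= 4 else "medium"
    score2 = administer(theta, stage2[path])
    print(f"theta={theta:.2f}, stage-1 score={score1}, path={path}, total={score1 + score2}")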
Peer reviewed
Paek, Insu; Cai, Li – Educational and Psychological Measurement, 2014
The present study was motivated by the recognition that standard errors (SEs) of item response theory (IRT) model parameters are often of immediate interest to practitioners and that there is currently a lack of comparative research on different SE (or error variance-covariance matrix) estimation procedures. The present study investigated item…
Descriptors: Item Response Theory, Comparative Analysis, Error of Measurement, Computation
Peer reviewed
Liu, Jinghua; Sinharay, Sandip; Holland, Paul; Feigenbaum, Miriam; Curley, Edward – Educational and Psychological Measurement, 2011
Two different types of anchors are investigated in this study: a mini-version anchor and an anchor with a smaller spread of difficulty than the tests to be equated. The latter is referred to as a midi anchor. The impact of these two types of anchors on observed score equating is evaluated and compared with respect to systematic error…
Descriptors: Equated Scores, Test Items, Difficulty Level, Statistical Bias
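The distinction between the two anchor types can be sketched as follows: both anchors match the total test's mean difficulty, but the midi anchor has a deliberately smaller difficulty spread. This is a generic illustration of the idea, not the study's actual test-construction procedure.

    import numpy as np

    rng = np.random.default_rng(3)
    pool = rng.normal(0.0, 1.0, 200)       # item difficulties of the total test
    target_mean = pool.mean()

    # Mini anchor: a small random sample, so its difficulty spread
    # mirrors that of the full test.
    mini = rng.choice(pool, size=20, replace=False)

    # Midi anchor: the 20 items closest to the test's mean difficulty,
    # giving a much smaller spread.
    midi = pool[np.argsort(np.abs(pool - target_mean))[:20]]

    print(f"test spread={pool.std():.2f}, mini spread={mini.std():.2f}, midi spread={midi.std():.2f}")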
Peer reviewed
Jiao, Hong; Kamata, Akihito; Wang, Shudong; Jin, Ying – Journal of Educational Measurement, 2012
The applications of item response theory (IRT) models assume local item independence and that examinees are independent of each other. When a representative sample for psychometric analysis is selected using a cluster sampling method in a testlet-based assessment, both local item dependence and local person dependence are likely to be induced.…
Descriptors: Item Response Theory, Test Items, Markov Processes, Monte Carlo Methods
Peer reviewed
DeMars, Christine E. – Structural Equation Modeling: A Multidisciplinary Journal, 2012
In structural equation modeling software, either limited-information (bivariate proportions) or full-information item parameter estimation routines could be used for the 2-parameter item response theory (IRT) model. Limited-information methods assume the continuous variable underlying an item response is normally distributed. For skewed and…
Descriptors: Item Response Theory, Structural Equation Models, Computation, Computer Software
Peer reviewed
Finch, Holmes – Applied Psychological Measurement, 2011
Estimation of multidimensional item response theory (MIRT) model parameters can be carried out for the normal ogive model using unweighted least squares estimation, as implemented in the normal-ogive harmonic analysis robust method (NOHARM) software. Previous simulation research has demonstrated that this approach yields accurate and efficient estimates of item…
Descriptors: Item Response Theory, Computation, Test Items, Simulation
Peer reviewed
Camilli, Gregory – Journal of Educational Statistics, 1988
The phenomenon of scale shrinkage is examined. Focus is on the pattern of decreasing variances in item response theory scale scores from fall to spring within a grade. It is demonstrated that questions concerning population distributions of true ability can be addressed with empirical Bayes techniques. (TJH)
Descriptors: Academic Ability, Achievement Tests, Bayesian Statistics, Difficulty Level
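The empirical Bayes idea can be illustrated with Kelley's classic true-score estimator, which shrinks each observed score toward the group mean in proportion to reliability; this is a textbook illustration of shrinkage, not Camilli's analysis.

    import numpy as np

    rng = np.random.default_rng(4)
    true_ability = rng.normal(0.0, 1.0, 1000)
    observed = true_ability + rng.normal(0.0, 0.6, 1000)   # add measurement error

    # Kelley's estimator: shrink each score toward the mean by the reliability.
    reliability = true_ability.var() / observed.var()      # known here by construction
    shrunken = reliability * observed + (1 - reliability) * observed.mean()

    # The shrunken estimates have a smaller variance than the observed scores,
    # the same direction of effect as the scale shrinkage discussed above.
    print(f"var(observed)={observed.var():.2f}, var(shrunken)={shrunken.var():.2f}")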
Patience, Wayne M.; Reckase, Mark D. – 1979
An experiment was performed with computer-generated data to investigate some of the operational characteristics of tailored testing as they relate to various provisions of the computer program and item pool. With respect to the computer program, two characteristics were varied: the size of the step of increase or decrease in item difficulty…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Error of Measurement
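The step-size provision can be illustrated with a simple up-and-down (staircase) tailored testing rule: item difficulty moves up by a fixed step after a correct answer and down after an incorrect one. A minimal sketch under a Rasch-type response model; the step size and starting difficulty are illustrative.

    import numpy as np

    rng = np.random.default_rng(5)

    def prob_correct(theta, b):
        # Rasch model response probability
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    theta, step, b = 1.0, 0.5, 0.0      # true ability, step size, starting difficulty
    for item in range(15):
        correct = rng.random() < prob_correct(theta, b)
        print(f"item {item + 1:2d}: difficulty={b:+.2f}, correct={correct}")
        # Up-and-down rule: step harder after a correct response, easier after a miss.
        b += step if correct else -step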