Showing all 10 results
Peer reviewed
Bergner, Yoav; von Davier, Alina A. – Journal of Educational and Behavioral Statistics, 2019
This article reviews how the National Assessment of Educational Progress (NAEP) has come to collect and analyze data about cognitive and behavioral processes (process data) in its transition to digital assessment technologies over the past two decades. An ordered five-level structure is proposed for describing the uses of process data. The levels in…
Descriptors: National Competency Tests, Data Collection, Data Analysis, Cognitive Processes
Peer reviewed
von Davier, Matthias; Khorramdel, Lale; He, Qiwei; Shin, Hyo Jeong; Chen, Haiwen – Journal of Educational and Behavioral Statistics, 2019
International large-scale assessments (ILSAs) have transitioned from paper-based assessments to computer-based assessments (CBAs), facilitating the use of new item types and more effective data collection tools. This allows the implementation of more complex test designs and the collection of process and response time (RT) data. These new data types can be used to…
Descriptors: International Assessment, Computer Assisted Testing, Psychometrics, Item Response Theory
Peer reviewed
Thissen, David – Journal of Educational and Behavioral Statistics, 2016
David Thissen, a professor in the Department of Psychology and Neuroscience, Quantitative Program at the University of North Carolina, has consulted and served on technical advisory committees for assessment programs that use item response theory (IRT) over the past couple of decades. He has come to the conclusion that there are usually two purposes…
Descriptors: Item Response Theory, Test Construction, Testing Problems, Student Evaluation
Peer reviewed
Wainer, Howard – Journal of Educational and Behavioral Statistics, 2010
In this essay, the author tries to look forward into the 21st century to divine three things: (i) What skills will researchers in the future need to solve the most pressing problems? (ii) What are some of the most likely candidates to be those problems? and (iii) What are some current areas of research that seem mined out and should not distract…
Descriptors: Research Skills, Researchers, Internet, Access to Information
Peer reviewed
van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R. – Journal of Educational and Behavioral Statistics, 2001
Proposed person-fit statistics that are designed for use in a computerized adaptive test (CAT) and derived critical values for these statistics using cumulative sum (CUSUM) procedures so that item-score patterns can be classified as fitting or misfitting. Compared nominal Type I errors with empirical Type I errors through simulation studies. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Simulation, Test Construction
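The CUSUM idea summarized in this abstract can be sketched in a few lines. The following is an illustrative implementation, not the authors' exact statistics: it accumulates standardized item-score residuals under a Rasch model and flags a response pattern when either one-sided sum crosses a critical value. The reference value `k` and threshold `h` are placeholders; in practice they would be derived from simulated null distributions, as the article describes.

```python
import math

def rasch_prob(theta, b):
    """Rasch model probability of a correct response given ability theta
    and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def cusum_person_fit(responses, thetas, difficulties, k=0.1, h=1.0):
    """Two-sided CUSUM over standardized item-score residuals in a CAT.

    Classifies an item-score pattern as misfitting when the upper or
    lower cumulative sum crosses the critical value h. Values of k and h
    are illustrative only.
    """
    c_plus, c_minus = 0.0, 0.0
    for x, theta, b in zip(responses, thetas, difficulties):
        p = rasch_prob(theta, b)
        resid = (x - p) / math.sqrt(p * (1.0 - p))  # standardized residual
        c_plus = max(0.0, c_plus + resid - k)
        c_minus = min(0.0, c_minus + resid + k)
        if c_plus > h or c_minus < -h:
            return "misfitting"
    return "fitting"
```

A low-ability examinee answering very hard items correctly produces large positive residuals and is flagged quickly, while a pattern consistent with the ability estimates stays within the bounds.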
Peer reviewed
Wainer, Howard – Journal of Educational and Behavioral Statistics, 2000
Suggests that because of the nonlinear relationship between item usage and item security, the problems of test security posed by continuous administration of standardized tests cannot be resolved merely by increasing the size of the item pool. Offers alternative strategies to overcome these problems, distributing test items so as to avoid the…
Descriptors: Computer Assisted Testing, Standardized Tests, Test Items, Testing Problems
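The nonlinearity the abstract refers to can be made concrete with a toy model (my assumption, not the article's analysis): if each administration independently leaks an item with some small probability, the chance the item is compromised saturates rather than growing linearly, so halving exposure counts by doubling the pool barely reduces the risk for heavily used items.

```python
def compromise_probability(exposures, p_leak=0.01):
    """Probability an item has been compromised after a given number of
    exposures, assuming each administration independently leaks the item
    with probability p_leak (an illustrative assumption).

    Risk = 1 - (1 - p_leak) ** exposures, which saturates toward 1.
    """
    return 1.0 - (1.0 - p_leak) ** exposures
```

Under these made-up numbers, an item seen 500 times is almost certainly compromised, and cutting its exposures to 250 (e.g., by doubling the pool) still leaves the risk above 90 percent — illustrating why pool size alone cannot fix continuous-administration security.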
Peer reviewed
Veerkamp, Wim J. J. – Journal of Educational and Behavioral Statistics, 2000
Showed how Taylor approximation can be used to generate a linear approximation to a logistic item characteristic curve and a linear ability estimator. Demonstrated how, for a specific simulation, this could result in the special case of a Robbins-Monro item selection procedure for adaptive testing. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Selection
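The first step the abstract describes — a Taylor linearization of a logistic item characteristic curve — can be sketched directly. This is a generic first-order expansion of the 2PL ICC around a provisional ability estimate, not the article's full derivation or its Robbins-Monro selection procedure.

```python
import math

def logistic_icc(theta, a, b):
    """2PL item characteristic curve with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def linearized_icc(theta, theta0, a, b):
    """First-order Taylor approximation of the 2PL ICC around theta0:

        P(theta) ~ P(theta0) + a * P(theta0) * (1 - P(theta0)) * (theta - theta0)

    using the identity P'(theta) = a * P * (1 - P) for the logistic curve.
    """
    p0 = logistic_icc(theta0, a, b)
    slope = a * p0 * (1.0 - p0)
    return p0 + slope * (theta - theta0)
```

Near the expansion point the linear form tracks the logistic curve closely, which is what makes a linear ability estimator tractable.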
Peer reviewed
Yan, Duanli; Lewis, Charles; Stocking, Martha – Journal of Educational and Behavioral Statistics, 2004
It is unrealistic to suppose that standard item response theory (IRT) models will be appropriate for all the new and currently considered computer-based tests. In addition to developing new models, we also need to give attention to the possibility of constructing and analyzing new tests without the aid of strong models. Computerized adaptive…
Descriptors: Nonparametric Statistics, Regression (Statistics), Adaptive Testing, Computer Assisted Testing
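One way to adapt a test "without the aid of strong models," in the spirit of this abstract, is tree-style routing on observed scores. The sketch below is an invented, deliberately simple heuristic — examinees with a higher running proportion correct are routed to harder items from a difficulty-ordered bank — and should not be read as the authors' procedure.

```python
def route_next_item(num_correct, items_administered, bank_by_difficulty):
    """Model-free adaptive routing: map the running proportion correct
    onto a position in a bank sorted from easiest to hardest.

    The linear score-to-position mapping is an illustrative choice;
    no IRT ability estimate is involved.
    """
    frac = num_correct / max(1, items_administered)
    idx = min(len(bank_by_difficulty) - 1, int(frac * len(bank_by_difficulty)))
    return bank_by_difficulty[idx]
```

An examinee who has answered everything correctly is sent to the hardest available item, and one who has answered nothing correctly to the easiest, with no parametric model in between.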
Peer reviewed
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 1999
Proposes an algorithm that minimizes the asymptotic variance of the maximum-likelihood (ML) estimator of a linear combination of abilities of interest. The criterion results in a closed-form expression that is easy to evaluate. Also shows how the algorithm can be modified if the interest is in a test with a "simple ability structure."…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
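For the "simple ability structure" case the abstract mentions — each item measuring exactly one dimension — the variance of a linear combination of abilities reduces to a sum of per-dimension terms, and a greedy selection rule is easy to write down. This sketch is my simplification, not the article's closed-form criterion: it picks the dimension whose next item most reduces Var(Σ w_d θ̂_d) = Σ w_d² / I_d, where I_d is the accumulated information on dimension d.

```python
def pick_dimension(weights, infos, item_info=1.0):
    """Greedy item selection under a simple ability structure.

    weights: coefficients w_d of the linear ability combination.
    infos: current test information I_d per dimension.
    item_info: information a new item adds to its dimension (assumed
    constant here for illustration).

    Returns the dimension whose next item minimizes the asymptotic
    variance sum(w_d**2 / I_d) of the combination's ML estimator.
    """
    best_d, best_var = None, float("inf")
    for d in range(len(weights)):
        new_infos = list(infos)
        new_infos[d] += item_info
        var = sum(w * w / i for w, i in zip(weights, new_infos))
        if var < best_var:
            best_d, best_var = d, var
    return best_d
```

With equal current information, the rule invests in the most heavily weighted dimension first, since its term dominates the variance.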
Peer reviewed
Segall, Daniel O. – Journal of Educational and Behavioral Statistics, 2004
A new sharing item response theory (SIRT) model is presented that explicitly models the effects of sharing item content between informants and test takers. This model is used to construct adaptive item selection and scoring rules that provide increased precision and reduced score gains in instances where sharing occurs. The adaptive item selection…
Descriptors: Scoring, Item Analysis, Item Response Theory, Adaptive Testing