Mark L. Davison; David J. Weiss; Joseph N. DeWeese; Ozge Ersan; Gina Biancarosa; Patrick C. Kennedy – Journal of Educational and Behavioral Statistics, 2023
A tree model for diagnostic educational testing is described along with Monte Carlo simulations designed to evaluate measurement accuracy based on the model. The model is implemented in an assessment of inferential reading comprehension, the Multiple-Choice Online Causal Comprehension Assessment (MOCCA), through a sequential, multidimensional,…
Descriptors: Cognitive Processes, Diagnostic Tests, Measurement, Accuracy
von Davier, Matthias; Khorramdel, Lale; He, Qiwei; Shin, Hyo Jeong; Chen, Haiwen – Journal of Educational and Behavioral Statistics, 2019
International large-scale assessments (ILSAs) have transitioned from paper-based assessments to computer-based assessments (CBAs), facilitating the use of new item types and more effective data collection tools. This allows the implementation of more complex test designs and the collection of process and response time (RT) data. These new data types can be used to…
Descriptors: International Assessment, Computer Assisted Testing, Psychometrics, Item Response Theory
Wang, Shiyu; Yang, Yan; Culpepper, Steven Andrew; Douglas, Jeffrey A. – Journal of Educational and Behavioral Statistics, 2018
A family of learning models that integrates a cognitive diagnostic model and a higher-order, hidden Markov model in one framework is proposed. This new framework includes covariates to model skill transition in the learning environment. A Bayesian formulation is adopted to estimate the parameters of the learning model. The developed methods are…
Descriptors: Skill Development, Cognitive Measurement, Cognitive Processes, Markov Processes
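The Wang et al. abstract describes modeling skill transitions with covariates inside a hidden Markov framework. As an illustration only, the sketch below simulates a single skill's mastery trajectory with a logistic link on learner covariates; the parameterization, function names, and the no-forgetting assumption are my own, not the paper's exact model.

```python
import math
import random

def transition_prob(covariates, weights, intercept):
    """P(non-mastery -> mastery) between learning occasions,
    modeled with a logistic link on learner covariates
    (an illustrative parameterization, not the paper's model)."""
    z = intercept + sum(w * x for w, x in zip(weights, covariates))
    return 1.0 / (1.0 + math.exp(-z))

def simulate_skill_path(n_steps, covariates, weights, intercept, rng):
    """Simulate one skill's mastery states over n_steps occasions.
    Once mastered, the skill is retained (a common simplifying
    'no-forgetting' assumption in learning models)."""
    state, path = 0, []
    for _ in range(n_steps):
        if state == 0 and rng.random() < transition_prob(
                covariates, weights, intercept):
            state = 1
        path.append(state)
    return path
```

Covariates that raise the linear predictor (e.g., practice opportunities) increase the per-occasion chance of transitioning to mastery.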
Marianti, Sukaesi; Fox, Jean-Paul; Avetisyan, Marianna; Veldkamp, Bernard P.; Tijmstra, Jesper – Journal of Educational and Behavioral Statistics, 2014
Many standardized tests are now administered via computer rather than paper-and-pencil format. In a computer-based testing environment, it is possible to record not only the test taker's response to each question (item) but also the amount of time spent by the test taker in considering and answering each item. Response times (RTs) provide…
Descriptors: Reaction Time, Response Style (Tests), Computer Assisted Testing, Bayesian Statistics
Nydick, Steven W. – Journal of Educational and Behavioral Statistics, 2014
The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…
Descriptors: Probability, Item Response Theory, Models, Classification
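The Nydick abstract states the SPRT's core mechanism: compare an accumulated log-likelihood ratio at the classification bound against prespecified critical values to decide whether to stop. A minimal sketch of that termination rule, using the standard Wald critical values (the error rates `alpha`/`beta` and the return strings are assumptions for illustration):

```python
import math

def sprt_decision(loglik_ratio, alpha=0.05, beta=0.05):
    """Wald SPRT termination rule for a two-category
    classification test.

    loglik_ratio: log L(theta_1) - log L(theta_0), accumulated
    over administered items, where theta_0 and theta_1 bracket
    the classification bound (cut score).
    """
    upper = math.log((1 - beta) / alpha)   # evidence for theta_1
    lower = math.log(beta / (1 - alpha))   # evidence for theta_0
    if loglik_ratio >= upper:
        return "classify above cut"
    if loglik_ratio <= lower:
        return "classify below cut"
    return "continue testing"
```

After each item, the examinee's item log-likelihoods at the two bracketing points are added to the running ratio; the test terminates as soon as either critical value is crossed.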
Wang, Chun; Fan, Zhewen; Chang, Hua-Hua; Douglas, Jeffrey A. – Journal of Educational and Behavioral Statistics, 2013
The item response times (RTs) collected from computerized testing represent an underutilized type of information about items and examinees. In addition to knowing the examinees' responses to each item, we can investigate the amount of time examinees spend on each item. Current models for RTs mainly focus on parametric models, which have the…
Descriptors: Reaction Time, Computer Assisted Testing, Test Items, Accuracy
Wainer, Howard – Journal of Educational and Behavioral Statistics, 2010
In this essay, the author tries to look forward into the 21st century to divine three things: (i) What skills will researchers in the future need to solve the most pressing problems? (ii) What are some of the most likely candidates to be those problems? and (iii) What are some current areas of research that seem mined out and should not distract…
Descriptors: Research Skills, Researchers, Internet, Access to Information
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 2009
A bivariate lognormal model for the distribution of the response times on a test by a pair of test takers is presented. As the model has parameters for the item effects on the response times, its correlation parameter automatically corrects for the spuriousness in the observed correlation between the response times of different test takers because…
Descriptors: Cheating, Models, Reaction Time, Correlation
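The van der Linden abstract builds on a lognormal response-time model with item-effect parameters. As a sketch under standard assumptions (the symbols follow the common formulation of this model family, with item time intensity beta, person speed tau, and time discrimination alpha; they are not quoted from the paper):

```python
import math
import random

def simulate_rt(beta, tau, alpha, rng):
    """Draw one response time from a lognormal RT model:

        ln T ~ Normal(beta - tau, 1 / alpha**2)

    beta:  item time-intensity parameter
    tau:   test taker's speed parameter
    alpha: item time-discrimination (precision) parameter
    """
    log_t = rng.gauss(beta - tau, 1.0 / alpha)
    return math.exp(log_t)
```

Because the item effects (beta, alpha) are modeled explicitly, correlations between two test takers' RTs can be computed net of the shared item effects, which is what lets the model correct for the spurious component of the observed correlation.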
Passos, Valeria Lima; Berger, Martijn P. F.; Tan, Frans E. S. – Journal of Educational and Behavioral Statistics, 2008
During the early stage of computerized adaptive testing (CAT), item selection criteria based on Fisher's information often produce less stable latent trait estimates than the Kullback-Leibler global information criterion. Robustness against early stage instability has been reported for the D-optimality criterion in a polytomous CAT with the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Evaluation Criteria, Item Analysis
Wiberg, Marie – Journal of Educational and Behavioral Statistics, 2003
A criterion-referenced computerized test is expressed as a statistical hypothesis problem. This allows it to be studied using the theory of optimal design. The power function of the statistical test is used as a criterion function when designing the test. A formal proof is provided showing that all items should have the same item…
Descriptors: Test Items, Computer Assisted Testing, Statistics, Validity
Segall, Daniel O. – Journal of Educational and Behavioral Statistics, 2004
A new sharing item response theory (SIRT) model is presented that explicitly models the effects of sharing item content between informants and test takers. This model is used to construct adaptive item selection and scoring rules that provide increased precision and reduced score gains in instances where sharing occurs. The adaptive item selection…
Descriptors: Scoring, Item Analysis, Item Response Theory, Adaptive Testing
van der Linden, Wim J.; Ariel, Adelaide; Veldkamp, Bernard P. – Journal of Educational and Behavioral Statistics, 2006
Test-item writing efforts typically result in item pools with an undesirable correlational structure between the content attributes of the items and their statistical information. If such pools are used in computerized adaptive testing (CAT), the algorithm may be forced to select items with less than optimal information, that violate the content…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Item Banks