Jones, Paul; Tong, Ye; Liu, Jinghua; Borglum, Joshua; Primoli, Vince – Journal of Educational Measurement, 2022
This article studied two methods to detect mode effects in two credentialing exams. In Study 1, we used a "modal scale comparison approach," where the same pool of items was calibrated separately, without transformation, within two TC cohorts (TC1 and TC2) and one OP cohort (OP1) matched on their pool-based scale score distributions. The…
Descriptors: Scores, Credentials, Licensing Examinations (Professions), Computer Assisted Testing
Luo, Xiao; Kim, Doyoung – Journal of Educational Measurement, 2018
The top-down approach to designing a multistage test is relatively understudied in the literature and underused in research and practice. This study introduced a route-based top-down design approach that directly sets design parameters at the test level and utilizes the advanced automated test assembly algorithm seeking global optimality. The…
Descriptors: Computer Assisted Testing, Test Construction, Decision Making, Simulation
Li, Jie; van der Linden, Wim J. – Journal of Educational Measurement, 2018
The final step of the typical process of developing educational and psychological tests is to place the selected test items in a formatted form. The step involves the grouping and ordering of the items to meet a variety of formatting constraints. As this activity tends to be time-intensive, the use of mixed-integer programming (MIP) has been…
Descriptors: Programming, Automation, Test Items, Test Format
Liu, Shuchang; Cai, Yan; Tu, Dongbo – Journal of Educational Measurement, 2018
This study applied the mode of on-the-fly assembled multistage adaptive testing to cognitive diagnosis (CD-OMST). Several module assembly methods for CD-OMST were proposed and compared in terms of measurement precision, test security, and constraint management. The module assembly methods in the study included the maximum priority index…
Descriptors: Adaptive Testing, Monte Carlo Methods, Computer Security, Clinical Diagnosis
Shermis, Mark D.; Lottridge, Sue; Mayfield, Elijah – Journal of Educational Measurement, 2015
This study investigated the impact of anonymizing text on predicted scores made by two kinds of automated scoring engines: one that incorporates elements of natural language processing (NLP) and one that does not. Eight data sets (N = 22,029) were used to form both training and test sets in which the scoring engines had access to both text and…
Descriptors: Scoring, Essays, Computer Assisted Testing, Natural Language Processing
Veldkamp, Bernard P. – Journal of Educational Measurement, 2016
Many standardized tests are now administered via computer rather than paper-and-pencil format. The computer-based delivery mode brings with it certain advantages. One advantage is the ability to adapt the difficulty level of the test to the ability level of the test taker in what has been termed computerized adaptive testing (CAT). A second…
Descriptors: Computer Assisted Testing, Reaction Time, Standardized Tests, Difficulty Level
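Several of the entries above concern information-based item selection in computerized adaptive testing. As a purely illustrative sketch (not any listed study's actual procedure), the classic maximum Fisher information rule under a 2PL item response model picks, at each step, the unadministered item whose information a²P(θ)(1 − P(θ)) is largest at the current ability estimate θ:

```python
import math

def p_2pl(theta, a, b):
    # 2PL probability of a correct response for an item with
    # discrimination a and difficulty b
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info_2pl(theta, a, b):
    # Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_item(theta, pool, administered):
    # Greedy rule: pick the unadministered item with maximum
    # information at the current ability estimate.
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates, key=lambda i: info_2pl(theta, *pool[i]))

# Hypothetical pool of (a, b) parameter pairs
pool = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5)]
print(select_item(0.5, pool, set()))  # prints 2
```

Used alone, this greedy rule overexposes highly discriminating items, which is precisely the problem the exposure-control and stratification methods in these studies address.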
Yao, Lihua – Journal of Educational Measurement, 2014
The intent of this research was to find an item selection procedure in the multidimensional computer adaptive testing (CAT) framework that yielded higher precision for both the domain and composite abilities, had a higher usage of the item pool, and controlled the exposure rate. Five multidimensional CAT item selection procedures (minimum angle;…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
Sinharay, Sandip; Wan, Ping; Choi, Seung W.; Kim, Dong-In – Journal of Educational Measurement, 2015
With an increase in the number of online tests, the number of interruptions during testing due to unexpected technical issues seems to be on the rise. For example, interruptions occurred during several recent state tests. When interruptions occur, it is important to determine the extent of their impact on the examinees' scores. Researchers such as…
Descriptors: Computer Assisted Testing, Testing Problems, Scores, Statistical Analysis
Zhang, Jinming; Li, Jie – Journal of Educational Measurement, 2016
An IRT-based sequential procedure is developed to monitor items for enhancing test security. The procedure uses a series of statistical hypothesis tests to examine whether the statistical characteristics of each item under inspection have changed significantly during CAT administration. This procedure is compared with a previously developed…
Descriptors: Computer Assisted Testing, Test Items, Difficulty Level, Item Response Theory
Wang, Chun; Chang, Hua-Hua; Huebner, Alan – Journal of Educational Measurement, 2011
This paper proposes two new item selection methods for cognitive diagnostic computerized adaptive testing: the restrictive progressive method and the restrictive threshold method. They are built upon the posterior weighted Kullback-Leibler (KL) information index but include additional stochastic components either in the item selection index or in…
Descriptors: Test Items, Adaptive Testing, Computer Assisted Testing, Cognitive Tests
Deng, Hui; Ansley, Timothy; Chang, Hua-Hua – Journal of Educational Measurement, 2010
In this study we evaluated and compared three item selection procedures: the maximum Fisher information procedure (F), the a-stratified multistage computer adaptive testing (CAT) (STR), and a refined stratification procedure that allows more items to be selected from the high a strata and fewer items from the low a strata (USTR), along with…
Descriptors: Computer Assisted Testing, Adaptive Testing, Selection, Methods

Revuelta, Javier; Ponsoda, Vicente – Journal of Educational Measurement, 1998
Proposes two new methods for item-exposure control, the Progressive method and the Restricted Maximum Information method. Compares both methods with six other item-selection methods. Discusses advantages of the two new methods and the usefulness of combining them. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Selection
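The Progressive method named in this abstract blends randomness with item information to spread exposure. A hedged sketch of one common formulation (an assumption for illustration, not taken verbatim from the paper): each candidate item receives weight (1 − s)·R + s·I, where I is its Fisher information, R is a uniform draw on [0, max I], and s is the fraction of the test already completed.

```python
import math
import random

def info_2pl(theta, a, b):
    # Fisher information of a 2PL item: a^2 * P * (1 - P)
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def progressive_select(theta, pool, administered, step, test_length, rng):
    # Early in the test (s near 0) selection is mostly random, spreading
    # exposure; late in the test (s near 1) it is mostly information-driven.
    s = step / test_length
    cands = [i for i in range(len(pool)) if i not in administered]
    infos = {i: info_2pl(theta, *pool[i]) for i in cands}
    top = max(infos.values())
    def weight(i):
        return (1 - s) * rng.uniform(0, top) + s * infos[i]
    return max(cands, key=weight)

# Hypothetical pool of (a, b) parameter pairs
pool = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5)]
rng = random.Random(42)
# At the final step (s = 1) the rule reduces to pure maximum information.
print(progressive_select(0.5, pool, set(), step=10, test_length=10, rng=rng))  # prints 2
```

The single parameter s is what makes the method "progressive": it trades measurement efficiency early in the test for a flatter item-exposure distribution across examinees.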

Williamson, David M.; Bejar, Isaac I.; Hone, Anne S. – Journal of Educational Measurement, 1999
Contrasts "mental models" used by automated scoring for the simulation division of the computerized Architect Registration Examination with those used by experienced human graders for 3,613 candidate solutions. Discusses differences in the models used and the potential of automated scoring to enhance the validity evidence of scores. (SLD)
Descriptors: Architects, Comparative Analysis, Computer Assisted Testing, Judges

Clauser, Brian E.; Margolis, Melissa J.; Clyman, Stephen G.; Ross, Linette P. – Journal of Educational Measurement, 1997
Research on automated scoring is extended by comparing alternative automated systems for scoring a computer simulation of physicians' patient management skills. A regression-based system is more highly correlated with experts' evaluations than a system that uses complex rules to map performances into score levels, but both approaches are feasible.…
Descriptors: Algorithms, Automation, Comparative Analysis, Computer Assisted Testing
Lei, Pui-Wa; Chen, Shu-Ying; Yu, Lan – Journal of Educational Measurement, 2006
Mantel-Haenszel and SIBTEST, which have known difficulty in detecting non-unidirectional differential item functioning (DIF), have been adapted with some success for computerized adaptive testing (CAT). This study adapts logistic regression (LR) and the item-response-theory-likelihood-ratio test (IRT-LRT), capable of detecting both unidirectional…
Descriptors: Evaluation Methods, Test Bias, Computer Assisted Testing, Multiple Regression Analysis
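Logistic regression DIF detection, as adapted in this study, regresses item responses on ability and group membership (plus their interaction when testing for non-unidirectional DIF); a nonzero group coefficient signals uniform DIF. A self-contained sketch on synthetic data (both the dataset and the plain gradient-descent fitting routine are illustrative assumptions, not the study's method):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, iters=3000):
    # Plain batch gradient descent on logistic loss (illustration only;
    # real analyses would use a standard ML routine with Wald/LR tests).
    w = [0.0] * len(X[0])
    n = len(X)
    for _ in range(iters):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))
            for j, xj in enumerate(xi):
                grad[j] += (p - yi) * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

# Synthetic uniform-DIF data: the focal group (g = 1) finds the item
# one logit harder at every ability level.
X, y = [], []
for theta in (-2, -1, 0, 1, 2):
    for g in (0, 1):
        k = round(sigmoid(theta - g) * 20)   # correct responses out of 20
        for i in range(20):
            X.append([1.0, float(theta), float(g)])
            y.append(1.0 if i < k else 0.0)

w = fit_logistic(X, y)
# A clearly negative group coefficient w[2] flags uniform DIF
# against the focal group.
print(w)
```

In practice the decision is based on the significance (or effect size) of the group and interaction terms when added to the ability-only model, which is the comparison the IRT-likelihood-ratio approach makes in model form.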