Sahin, Alper; Ozbasi, Durmus – Eurasian Journal of Educational Research, 2017
Purpose: This study aims to reveal the effects of content balancing and item selection method on ability estimation in computerized adaptive tests by comparing Fisher's maximum information (FMI) and likelihood weighted information (LWI) methods. Research Methods: Four groups of examinees (250, 500, 750, 1000) and a bank of 500 items with 10 different…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Test Content
Sahin, Alper; Weiss, David J. – Educational Sciences: Theory and Practice, 2015
This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Sample Size, Item Banks
Wang, Chun; Fan, Zhewen; Chang, Hua-Hua; Douglas, Jeffrey A. – Journal of Educational and Behavioral Statistics, 2013
The item response times (RTs) collected from computerized testing represent an underutilized type of information about items and examinees. In addition to knowing the examinees' responses to each item, we can investigate the amount of time examinees spend on each item. Current models for RTs mainly focus on parametric models, which have the…
Descriptors: Reaction Time, Computer Assisted Testing, Test Items, Accuracy
Schmitt, T. A.; Sass, D. A.; Sullivan, J. R.; Walker, C. M. – International Journal of Testing, 2010
Imposed time limits on computer adaptive tests (CATs) can result in examinees having difficulty completing all items, thus compromising the validity and reliability of ability estimates. In this study, the effects of speededness were explored in a simulated CAT environment by varying examinee response patterns to end-of-test items. Expectedly,…
Descriptors: Monte Carlo Methods, Simulation, Computer Assisted Testing, Adaptive Testing
Vidotto, G.; Massidda, D.; Noventa, S. – Psicologica: International Journal of Methodology and Experimental Psychology, 2010
The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…
Descriptors: Interaction, Computation, Computer Assisted Testing, Computer Software
Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander – Applied Psychological Measurement, 2008
This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…
Descriptors: Test Items, Monte Carlo Methods, Law Schools, Adaptive Testing
Nietfeld, John L.; Enders, Craig K.; Schraw, Gregory – Educational and Psychological Measurement, 2006
Researchers studying monitoring accuracy currently use two different indexes to estimate accuracy: relative accuracy and absolute accuracy. The authors compared the distributional properties of two measures of monitoring accuracy using Monte Carlo procedures that fit within these categories. They manipulated the accuracy of judgments (i.e., chance…
Descriptors: Monte Carlo Methods, Test Items, Computation, Metacognition
Johnson, Joseph G.; Busemeyer, Jerome R. – Psychological Review, 2005
Preference orderings among a set of options may depend on the elicitation method (e.g., choice or pricing); these preference reversals challenge traditional decision theories. Previous attempts to explain these reversals have relied on allowing utility of the options to change across elicitation methods by changing the decision weights, the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Decision Making, Stimulation
Wang, Xiaohui; Bradlow, Eric T.; Wainer, Howard – ETS Research Report Series, 2005
SCORIGHT is a very general computer program for scoring tests. It models tests that are made up of dichotomously or polytomously rated items or any kind of combination of the two through the use of a generalized item response theory (IRT) formulation. The items can be presented independently or grouped into clumps of allied items (testlets) or in…
Descriptors: Computer Assisted Testing, Statistical Analysis, Test Items, Bayesian Statistics