Ince Araci, F. Gul; Tan, Seref – International Journal of Assessment Tools in Education, 2022
Computerized Adaptive Testing (CAT) is a beneficial test technique that reduces the number of items that must be administered by selecting items matched to each individual's ability level. After CAT applications were constructed based on unidimensional Item Response Theory (IRT), Multidimensional CAT (MCAT) applications have…
Descriptors: Adaptive Testing, Computer Assisted Testing, Simulation, Item Response Theory
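The core CAT loop the entry above relies on is: estimate ability, then administer the unused item that is most informative at that estimate. A minimal sketch under a 2PL IRT model — the item pool, parameter values, and function names below are illustrative assumptions, not taken from the paper:

```python
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_information(theta, a, b):
    """Fisher information of a 2PL item: a^2 * p * (1 - p)."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def select_next_item(theta_hat, pool, administered):
    """Return the index of the unused item with maximum information
    at the current ability estimate theta_hat."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates, key=lambda i: fisher_information(theta_hat, *pool[i]))

# Hypothetical pool of (discrimination a, difficulty b) pairs.
pool = [(1.0, -1.0), (1.2, 0.0), (0.8, 1.0), (1.5, 0.1)]
```

Adaptivity falls out of the loop: after each response the ability estimate is updated and `select_next_item` is called again, so examinees at different levels see different items and the test can stop sooner.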
Hanif Akhtar – International Society for Technology, Education, and Science, 2023
For efficiency, a Computerized Adaptive Test (CAT) algorithm selects items with the maximum information, typically with a 50% probability of being answered correctly. However, examinees may not be satisfied if they only answer 50% of the items correctly. Researchers discovered that changing the item selection algorithms to choose easier items (i.e.,…
Descriptors: Success, Probability, Computer Assisted Testing, Adaptive Testing
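The easier-item variant this abstract alludes to can be sketched by swapping the selection criterion: instead of maximizing information (which under the 1PL targets a success probability near .5), pick the item whose predicted probability of a correct response is closest to a higher target. The pool and the .7 target are illustrative assumptions:

```python
import math

def p_correct(theta, b):
    """Rasch (1PL) probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def select_by_target_probability(theta_hat, difficulties, administered, target=0.7):
    """Pick the unused item whose predicted success probability is
    closest to `target` (e.g., .7 rather than the usual ~.5)."""
    candidates = [i for i in range(len(difficulties)) if i not in administered]
    return min(candidates,
               key=lambda i: abs(p_correct(theta_hat, difficulties[i]) - target))

# Hypothetical difficulty parameters for a small pool.
difficulties = [-2.0, -1.0, 0.0, 1.0]
```

The trade-off the abstract hints at is visible in the rule itself: items sitting above the .5-probability point carry less Fisher information, so the test gains perceived success at some cost in measurement efficiency.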
Esther Ulitzsch; Steffi Pohl; Lale Khorramdel; Ulf Kroehne; Matthias von Davier – Journal of Educational and Behavioral Statistics, 2024
Questionnaires are by far the most common tool for measuring noncognitive constructs in psychology and educational sciences. Response bias may pose an additional source of variation between respondents that threatens validity of conclusions drawn from questionnaire data. We present a mixture modeling approach that leverages response time data from…
Descriptors: Item Response Theory, Response Style (Tests), Questionnaires, Secondary School Students
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Annenberg Institute for School Reform at Brown University, 2022
Analyses that reveal how treatment effects vary allow researchers, practitioners, and policymakers to better understand the efficacy of educational interventions. In practice, however, standard statistical methods for addressing Heterogeneous Treatment Effects (HTE) fail to address the HTE that may exist within outcome measures. In this study, we…
Descriptors: Item Response Theory, Models, Formative Evaluation, Statistical Inference
Yasuda, Jun-ichiro; Mae, Naohiro; Hull, Michael M.; Taniguchi, Masa-aki – Physical Review Physics Education Research, 2021
As a method to shorten the test time of the Force Concept Inventory (FCI), we suggest the use of computerized adaptive testing (CAT). CAT is the process of administering a test on a computer, with items (i.e., questions) selected based upon the responses of the examinee to prior items. In so doing, the test length can be significantly shortened.…
Descriptors: Foreign Countries, College Students, Student Evaluation, Computer Assisted Testing
Thompson, Nathan A. – Practical Assessment, Research & Evaluation, 2011
Computerized classification testing (CCT) is an approach to designing tests with intelligent algorithms, similar to adaptive testing, but specifically designed for the purpose of classifying examinees into categories such as "pass" and "fail." Like adaptive testing for point estimation of ability, the key component is the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Classification, Probability
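The sequential termination rule at the heart of CCT can be sketched with a sequential probability ratio test (SPRT): accumulate a log likelihood ratio comparing two ability points straddling the cut score, and stop as soon as it crosses a decision bound. The Rasch model, cut score, indifference-region width, and error rates below are illustrative placeholders:

```python
import math

def p_correct(theta, b):
    """Rasch (1PL) probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def sprt_classify(responses, difficulties, cut=0.0, delta=0.5,
                  alpha=0.05, beta=0.05):
    """Classify pass/fail by comparing the likelihood of the response
    string at theta = cut + delta versus theta = cut - delta."""
    lower = math.log(beta / (1 - alpha))          # accept "fail" below this
    upper = math.log((1 - beta) / alpha)          # accept "pass" above this
    log_lr = 0.0
    for u, b in zip(responses, difficulties):
        p_pass = p_correct(cut + delta, b)
        p_fail = p_correct(cut - delta, b)
        log_lr += math.log(p_pass / p_fail) if u == 1 \
            else math.log((1 - p_pass) / (1 - p_fail))
        if log_lr >= upper:
            return "pass"
        if log_lr <= lower:
            return "fail"
    return "undecided"  # bounds not crossed: administer more items
```

Because the bounds depend only on the nominal error rates, consistent responding ends the test quickly, while examinees near the cut score keep receiving items — the efficiency property the entry describes.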
Schmitt, T. A.; Sass, D. A.; Sullivan, J. R.; Walker, C. M. – International Journal of Testing, 2010
Imposed time limits on computer adaptive tests (CATs) can result in examinees having difficulty completing all items, thus compromising the validity and reliability of ability estimates. In this study, the effects of speededness were explored in a simulated CAT environment by varying examinee response patterns to end-of-test items. Expectedly,…
Descriptors: Monte Carlo Methods, Simulation, Computer Assisted Testing, Adaptive Testing
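One common way to induce speededness in a simulation of this kind — sketched here as an assumption, not the authors' exact design — is to overwrite end-of-test responses with random guesses, mimicking examinees who run out of time:

```python
import random

def apply_speededness(responses, n_speeded, rng, p_guess=0.25):
    """Replace the last n_speeded responses with random guesses
    (p_guess = .25 mimics blind guessing on 4-option items)."""
    out = list(responses)
    for i in range(len(out) - n_speeded, len(out)):
        out[i] = 1 if rng.random() < p_guess else 0
    return out

# A perfect 10-item response string with the last 3 items speeded.
speeded = apply_speededness([1] * 10, 3, random.Random(0))
```

Varying `n_speeded` and `p_guess` across replications is one way to produce the range of end-of-test response patterns whose effect on ability estimates such a study examines.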
Thompson, Nathan A. – Educational and Psychological Measurement, 2009
Several alternatives for item selection algorithms based on item response theory in computerized classification testing (CCT) have been suggested, with no conclusive evidence on the substantial superiority of a single method. It is argued that the lack of sizable effect is because some of the methods actually assess items very similarly through…
Descriptors: Item Response Theory, Psychoeducational Methods, Cutting Scores, Simulation
Klein Entink, Rinke H.; Kuhn, Jorg-Tobias; Hornke, Lutz F.; Fox, Jean-Paul – Psychological Methods, 2009
In current psychological research, the analysis of data from computer-based assessments or experiments is often confined to accuracy scores. Response times, although being an important source of additional information, are either neglected or analyzed separately. In this article, a new model is developed that allows the simultaneous analysis of…
Descriptors: Psychological Studies, Monte Carlo Methods, Markov Processes, Educational Assessment
Lau, C. Allen; Wang, Tianyou – 1998
The purposes of this study were to: (1) extend the sequential probability ratio testing (SPRT) procedure to polytomous item response theory (IRT) models in computerized classification testing (CCT); (2) compare polytomous items with dichotomous items using the SPRT procedure for their accuracy and efficiency; (3) study a direct approach in…
Descriptors: Computer Assisted Testing, Cutting Scores, Item Response Theory, Mastery Tests
Lau, Che-Ming Allen; And Others – 1996
This study focused on the robustness of unidimensional item response theory (UIRT) models in computerized classification testing against violation of the unidimensionality assumption. The study addressed whether UIRT models remain acceptable under various testing conditions and dimensionality strengths. Monte Carlo simulation techniques were used…
Descriptors: Classification, Computer Assisted Testing, Educational Testing, Item Response Theory
Belov, Dmitry I.; Armstrong, Ronald D. – Applied Psychological Measurement, 2005
A new test assembly algorithm based on a Monte Carlo random search is presented in this article. A major advantage of the Monte Carlo test assembly over other approaches (integer programming or enumerative heuristics) is that it performs a uniform sampling from the item pool, which provides every feasible item combination (test) with an equal…
Descriptors: Item Banks, Computer Assisted Testing, Monte Carlo Methods, Evaluation Methods
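The uniform-sampling idea can be sketched as rejection sampling: draw item combinations uniformly from the pool and keep the first one that satisfies the content constraints, so every feasible test is equally likely to be drawn. The pool and the balance constraint below are invented for illustration:

```python
import random

def monte_carlo_assemble(pool, test_length, satisfies, max_tries=10000, seed=0):
    """Monte Carlo test assembly: repeatedly draw uniform random item
    combinations until one meets all constraints (or give up)."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        test = rng.sample(range(len(pool)), test_length)
        if satisfies([pool[i] for i in test]):
            return sorted(test)
    return None  # constraints may be infeasible for this pool

# Hypothetical pool: each item tagged with a content area.
pool = [{"area": "algebra"}, {"area": "geometry"}, {"area": "algebra"},
        {"area": "geometry"}, {"area": "algebra"}, {"area": "geometry"}]

def balanced(items):
    """Content constraint: exactly two items from each area."""
    areas = [it["area"] for it in items]
    return areas.count("algebra") == 2 and areas.count("geometry") == 2

test = monte_carlo_assemble(pool, 4, balanced)
```

Unlike integer programming, which returns one optimal form, rejection sampling from a uniform draw spreads exposure evenly over all feasible forms — the property the abstract highlights.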
Kim, Haeok; Plake, Barbara S. – 1993
A two-stage testing strategy is one method of adapting the difficulty of a test to an individual's ability level in an effort to achieve more precise measurement. A routing test provides an initial estimate of ability level, and a second-stage measurement test then evaluates the examinee further. The measurement accuracy and efficiency of item…
Descriptors: Ability, Adaptive Testing, Comparative Testing, Computer Assisted Testing
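A two-stage design of the kind described can be sketched as a routing rule mapping the first-stage number-correct score to one of several fixed second-stage measurement tests; the one-third cut fractions below are illustrative, not the study's:

```python
def route(routing_score, n_routing_items):
    """Map the routing-test number-correct score to a second-stage test."""
    frac = routing_score / n_routing_items
    if frac < 1 / 3:
        return "easy"
    if frac < 2 / 3:
        return "medium"
    return "hard"
```

The design questions such a study varies — routing-test length, number of second-stage levels, and cut placement — all live inside this small function, which is why two-stage testing is often treated as a coarse, easily administered approximation to fully adaptive item selection.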
Wang, Xiaohui; Bradlow, Eric T.; Wainer, Howard – ETS Research Report Series, 2005
SCORIGHT is a very general computer program for scoring tests. It models tests that are made up of dichotomously or polytomously rated items or any kind of combination of the two through the use of a generalized item response theory (IRT) formulation. The items can be presented independently or grouped into clumps of allied items (testlets) or in…
Descriptors: Computer Assisted Testing, Statistical Analysis, Test Items, Bayesian Statistics
Allen, Nancy L.; Donoghue, John R. – 1995
This Monte Carlo study examined the effect of complex sampling of items on the measurement of differential item functioning (DIF) using the Mantel-Haenszel procedure. Data were generated using a three-parameter logistic item response theory model according to the balanced incomplete block (BIB) design used in the National Assessment of Educational…
Descriptors: Computer Assisted Testing, Difficulty Level, Elementary Secondary Education, Identification
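The Mantel-Haenszel statistic at the heart of this study pools one 2x2 table (group by correct/incorrect) per matching-score stratum into a common odds ratio, which the ETS convention rescales as delta = -2.35 ln(alpha). A minimal sketch of that computation, with invented table counts in the tests:

```python
import math

def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio. Each stratum is a 2x2 table
    (ref_correct, ref_incorrect, focal_correct, focal_incorrect)."""
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

def mh_delta(strata):
    """ETS delta scale: -2.35 * ln(odds ratio). Values near 0 mean no DIF;
    negative values flag items favoring the reference group."""
    return -2.35 * math.log(mh_odds_ratio(strata))
```

In a BIB design like NAEP's, not every examinee sees every item, so the strata entering this sum come from the incomplete booklet sample — precisely the complication whose effect on DIF detection the study simulates.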