Publication Date
  In 2025 (0)
  Since 2024 (1)
  Since 2021, last 5 years (7)
  Since 2016, last 10 years (26)
  Since 2006, last 20 years (62)
Descriptor
  Comparative Analysis (81)
  Computer Assisted Testing (81)
  Item Response Theory (81)
  Adaptive Testing (39)
  Test Items (32)
  Simulation (25)
  Foreign Countries (18)
  Item Analysis (18)
  Test Format (18)
  Scores (17)
  Difficulty Level (13)
Author
  Ueno, Maomi (3)
  Weiss, David J. (3)
  Bergstrom, Betty A. (2)
  Chen, Li-Ju (2)
  Choi, Seung W. (2)
  Cohen, Allan S. (2)
  Coniam, David (2)
  De Ayala, R. J. (2)
  Dodd, Barbara G. (2)
  Feuerstahler, Leah M. (2)
  Finkelman, Matthew D. (2)
Audience
  Practitioners (1)
  Researchers (1)
  Students (1)
Location
  United Kingdom (3)
  China (2)
  France (2)
  Hong Kong (2)
  Netherlands (2)
  Taiwan (2)
  Turkey (2)
  Arkansas (1)
  Australia (1)
  Colorado (1)
  Cyprus (1)
Assessments and Surveys
  Program for International… (2)
  ACT Assessment (1)
  Advanced Placement… (1)
  Center for Epidemiologic… (1)
  College Board Achievement… (1)
  Defining Issues Test (1)
  Indiana Statewide Testing for… (1)
  Test of English as a Foreign… (1)
Falk, Carl F.; Feuerstahler, Leah M. – Educational and Psychological Measurement, 2022
Large-scale assessments often use a computer adaptive test (CAT) for selection of items and for scoring respondents. Such tests often assume a parametric form for the relationship between item responses and the underlying construct. Although semi- and nonparametric response functions could be used, there is scant research on their performance in a…
Descriptors: Item Response Theory, Adaptive Testing, Computer Assisted Testing, Nonparametric Statistics
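The parametric assumption this abstract refers to can be made concrete with a minimal sketch of the two-parameter logistic (2PL) response function and its Fisher information, the standard ingredients of CAT scoring and item selection; the function names and parameter values are illustrative, not taken from the study.

```python
import math

def irf_2pl(theta, a, b):
    """2PL item response function: P(correct | ability theta)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: a^2 * p * (1 - p)."""
    p = irf_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

# The item is most informative where theta equals its difficulty b.
print(irf_2pl(0.0, 1.2, 0.0))           # 0.5 exactly at theta == b
print(item_information(0.0, 1.2, 0.0))  # approximately 0.36
```

Semi- and nonparametric approaches replace the fixed logistic shape of `irf_2pl` with a curve estimated from data, which is what the study evaluates.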
Markus T. Jansen; Ralf Schulze – Educational and Psychological Measurement, 2024
Thurstonian forced-choice modeling is considered to be a powerful new tool to estimate item and person parameters while simultaneously testing the model fit. This assessment approach is associated with the aim of reducing faking and other response tendencies that plague traditional self-report trait assessments. As a result of major recent…
Descriptors: Factor Analysis, Models, Item Analysis, Evaluation Methods
Fuchimoto, Kazuma; Ishii, Takatoshi; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2022
Educational assessments often require uniform test forms, in which each test form has equivalent measurement accuracy but a different set of items. A key challenge in uniform test assembly is increasing the number of uniform tests that can be assembled. Although many automatic uniform test assembly methods exist, the maximum clique algorithm…
Descriptors: Simulation, Efficiency, Test Items, Educational Assessment
Benton, Tom – Research Matters, 2021
Computer adaptive testing is intended to make assessment more reliable by tailoring the difficulty of the questions a student has to answer to their level of ability. Most commonly, this benefit is used to justify shortening tests whilst retaining the reliability of a longer, non-adaptive test. Improvements due to adaptive…
Descriptors: Risk, Item Response Theory, Computer Assisted Testing, Difficulty Level
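The tailoring that this abstract describes is typically an adaptive loop: after each response, ability is re-estimated and the unasked item that is most informative at the current estimate is administered next. A minimal sketch under assumed 2PL items, with a grid-search estimator and a toy item bank of my own invention:

```python
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info(theta, a, b):
    """Fisher information of a 2PL item at theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def estimate_theta(responses):
    """Maximum-likelihood ability estimate over a coarse grid.

    responses: list of ((a, b), x) pairs with x = 1 (correct) or 0.
    """
    grid = [g / 10.0 for g in range(-40, 41)]
    def loglik(t):
        return sum(math.log(p_correct(t, a, b)) if x
                   else math.log(1.0 - p_correct(t, a, b))
                   for (a, b), x in responses)
    return max(grid, key=loglik)

def next_item(bank, asked, theta):
    """Index of the most informative remaining item at theta."""
    remaining = [i for i in range(len(bank)) if i not in asked]
    return max(remaining, key=lambda i: info(theta, *bank[i]))

bank = [(1.0, -1.5), (1.2, 0.0), (0.8, 1.0), (1.5, 2.0)]  # (a, b) pairs
first = next_item(bank, set(), 0.0)  # the item with b nearest the start value wins
```

Because each question is pitched near the examinee's current estimate, every response carries close to the maximum possible information, which is the mechanism behind the shortened-test argument.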
Gorney, Kylie; Wollack, James A. – Practical Assessment, Research & Evaluation, 2022
Unlike the traditional multiple-choice (MC) format, the discrete-option multiple-choice (DOMC) format does not necessarily reveal all answer options to an examinee. The purpose of this study was to determine whether the reduced exposure of item content affects test security. We conducted an experiment in which participants were allowed to view…
Descriptors: Test Items, Test Format, Multiple Choice Tests, Item Analysis
Feuerstahler, Leah M.; Waller, Niels; MacDonald, Angus, III – Educational and Psychological Measurement, 2020
Although item response models have grown in popularity in many areas of educational and psychological assessment, there are relatively few applications of these models in experimental psychopathology. In this article, we explore the use of item response models in the context of a computerized cognitive task designed to assess visual working memory…
Descriptors: Item Response Theory, Psychopathology, Intelligence Tests, Psychological Evaluation
von Davier, Matthias; Khorramdel, Lale; He, Qiwei; Shin, Hyo Jeong; Chen, Haiwen – Journal of Educational and Behavioral Statistics, 2019
International large-scale assessments (ILSAs) have transitioned from paper-based assessments to computer-based assessments (CBAs), facilitating the use of new item types and more effective data collection tools. This allows the implementation of more complex test designs and the collection of process and response time (RT) data. These new data types can be used to…
Descriptors: International Assessment, Computer Assisted Testing, Psychometrics, Item Response Theory
Cikrikci, Nukhet; Yalcin, Seher; Kalender, Ilker; Gul, Emrah; Ayan, Cansu; Uyumaz, Gizem; Sahin-Kursad, Merve; Kamis, Omer – International Journal of Assessment Tools in Education, 2020
This study tested the applicability of the theoretical Examination for Candidates of Driving License (ECODL) in Turkey as a computerized adaptive test (CAT). First, various simulation conditions for the live CAT were tested using an item bank calibrated with item response theory. The application of the simulated CAT was based on data from…
Descriptors: Motor Vehicles, Traffic Safety, Computer Assisted Testing, Item Response Theory
Scoular, Claire; Eleftheriadou, Sofia; Ramalingam, Dara; Cloney, Dan – Australian Journal of Education, 2020
Collaboration is a complex skill, composed of multiple subskills, that is of growing interest to policy makers, educators and researchers. Several definitions and frameworks have been described in the literature to support assessment of collaboration; however, the inherent structure of the construct still needs better definition. In 2015, the…
Descriptors: Cooperative Learning, Problem Solving, Computer Assisted Testing, Comparative Analysis
Kim, Ahyoung Alicia; Tywoniw, Rurik L.; Chapman, Mark – Language Assessment Quarterly, 2022
Technology-enhanced items (TEIs) are innovative, computer-delivered test items that allow test takers to interact with the test environment more fully than traditional multiple-choice items (MCIs) do. The interactive nature of TEIs offers improved construct coverage compared with MCIs, but little research exists regarding students' performance on…
Descriptors: Language Tests, Test Items, Computer Assisted Testing, English (Second Language)
Yoshioka, Sérgio R. I.; Ishitani, Lucila – Informatics in Education, 2018
Computerized Adaptive Testing (CAT) is now widely used. However, inserting new items into the question bank of a CAT requires great effort, which makes the wide application of CAT in classroom teaching impractical. One solution would be to use the tacit knowledge of teachers or experts for a pre-classification and calibrate during the…
Descriptors: Student Motivation, Adaptive Testing, Computer Assisted Testing, Item Response Theory
Storme, Martin; Myszkowski, Nils; Baron, Simon; Bernard, David – Journal of Intelligence, 2019
Assessing job applicants' general mental ability online poses psychometric challenges due to the necessity of having brief but accurate tests. Recent research (Myszkowski & Storme, 2018) suggests that recovering distractor information through Nested Logit Models (NLM; Suh & Bolt, 2010) increases the reliability of ability estimates in…
Descriptors: Intelligence Tests, Item Response Theory, Comparative Analysis, Test Reliability
Zeng, Ji; Yin, Ping; Shedden, Kerby A. – Educational and Psychological Measurement, 2015
This article provides a brief overview and comparison of three matching approaches in forming comparable groups for a study comparing test administration modes (i.e., computer-based tests [CBT] and paper-and-pencil tests [PPT]): (a) a propensity score matching approach proposed in this article, (b) the propensity score matching approach used by…
Descriptors: Comparative Analysis, Computer Assisted Testing, Probability, Classification
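One step common to the propensity-score approaches this article compares can be sketched as greedy one-to-one nearest-neighbor matching of CBT and PPT examinees on their propensity scores. The scores themselves would come from a model of mode assignment on covariates (e.g., a logistic regression); here they are simply assumed, and the caliper value and data are illustrative.

```python
def greedy_match(treated, control, caliper=0.05):
    """Pair each treated score with the closest unused control score.

    Pairs whose score distance exceeds the caliper are discarded,
    leaving only comparably-scored examinees in the matched sample.
    """
    available = dict(enumerate(control))
    pairs = []
    for t_idx, t_score in enumerate(treated):
        if not available:
            break
        c_idx = min(available, key=lambda i: abs(available[i] - t_score))
        if abs(available[c_idx] - t_score) <= caliper:
            pairs.append((t_idx, c_idx))
            del available[c_idx]
    return pairs

cbt = [0.31, 0.52, 0.74]           # assumed propensity scores, CBT group
ppt = [0.30, 0.55, 0.90, 0.50]     # assumed propensity scores, PPT group
print(greedy_match(cbt, ppt))      # [(0, 0), (1, 3)]; 0.74 finds no match within the caliper
```

Mode effects are then estimated on the matched pairs rather than on the full, possibly non-comparable groups.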
Wang, Keyin – ProQuest LLC, 2017
The comparison of item-level computerized adaptive testing (CAT) and multistage adaptive testing (MST) has been researched extensively (e.g., Kim & Plake, 1993; Luecht et al., 1996; Patsula, 1999; Jodoin, 2003; Hambleton & Xing, 2006; Keng, 2008; Zheng, 2012). Various CAT and MST designs have been investigated and compared under the same…
Descriptors: Comparative Analysis, Computer Assisted Testing, Adaptive Testing, Test Items
Mao, Xiuzhen; Ozdemir, Burhanettin; Wang, Yating; Xiu, Tao – Online Submission, 2016
Four item selection indices with and without exposure control are evaluated and compared in multidimensional computerized adaptive testing (CAT). The four item selection indices are D-optimality, posterior expected Kullback-Leibler information (KLP), the minimized error variance of the linear combination score with equal weight (V1), and the…
Descriptors: Comparative Analysis, Adaptive Testing, Computer Assisted Testing, Test Items
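In the unidimensional case, the KLP index named above can be sketched as a posterior-weighted Kullback-Leibler divergence between an item's response distributions at the current ability estimate and at alternative abilities; the 2PL model, the coarse grid posterior, and the candidate item parameters below are assumptions for illustration, not the study's multidimensional setup.

```python
import math

def p2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def kl_item(theta0, theta, a, b):
    """KL divergence between one item's Bernoulli response
    distributions at theta0 and at theta."""
    p0, p = p2pl(theta0, a, b), p2pl(theta, a, b)
    return (p0 * math.log(p0 / p)
            + (1.0 - p0) * math.log((1.0 - p0) / (1.0 - p)))

def klp_index(theta0, posterior, a, b):
    """Posterior-weighted KL information for a candidate item (a, b)."""
    return sum(w * kl_item(theta0, t, a, b) for t, w in posterior)

# Uniform stand-in posterior over a small grid around theta0 = 0.
grid = [-1.0, -0.5, 0.5, 1.0]
posterior = [(t, 0.25) for t in grid]
scores = {(a, b): klp_index(0.0, posterior, a, b)
          for (a, b) in [(1.0, 0.0), (1.5, 0.0), (1.0, 2.0)]}
```

The CAT would administer the unexposed item with the largest index; exposure-control variants additionally damp the scores of frequently used items.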