Showing 1 to 15 of 131 results
Peer reviewed | Direct link
Xu, Lingling; Wang, Shiyu; Cai, Yan; Tu, Dongbo – Journal of Educational Measurement, 2021
Designing a multidimensional multistage test (M-MST) based on a multidimensional item response theory (MIRT) model is critical to making full use of the advantages of both multistage testing (MST) and MIRT in implementing multidimensional assessments. This study proposed two types of automated test assembly (ATA) algorithms and one set of routing rules that can facilitate…
Descriptors: Item Response Theory, Adaptive Testing, Automation, Test Construction
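For context on the modeling layer referenced above, a compensatory MIRT model such as the multidimensional two-parameter logistic (M2PL) model is typically written as below; this standard textbook form is an assumption for illustration, not necessarily the exact model used in the study.

    P(X_{ij} = 1 \mid \boldsymbol{\theta}_j) = \frac{\exp(\mathbf{a}_i^{\top}\boldsymbol{\theta}_j + d_i)}{1 + \exp(\mathbf{a}_i^{\top}\boldsymbol{\theta}_j + d_i)}

Here \mathbf{a}_i is the vector of discrimination parameters of item i, d_i is its intercept, and \boldsymbol{\theta}_j is examinee j's latent trait vector; in an M-MST, provisional estimates of \boldsymbol{\theta}_j computed after each stage drive the routing to the next module.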
Peer reviewed | Direct link
Lin, Yin; Brown, Anna; Williams, Paul – Educational and Psychological Measurement, 2023
Several forced-choice (FC) computerized adaptive tests (CATs) have emerged in the field of organizational psychology, all of them employing ideal-point items. However, although most items developed historically follow dominance response models, research on FC CAT using dominance items is limited. Existing research is heavily dominated by…
Descriptors: Measurement Techniques, Computer Assisted Testing, Adaptive Testing, Industrial Psychology
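To make the dominance-versus-ideal-point distinction concrete, the two item types imply differently shaped response functions; the formulas below are generic illustrations (a 2PL dominance curve and a schematic single-peaked curve), not the specific models evaluated in the study.

    \text{Dominance (monotonic): } P(x = 1 \mid \theta) = \frac{1}{1 + \exp[-a(\theta - b)]}
    \text{Ideal point (single-peaked): } P(x = 1 \mid \theta) \propto \exp[-(\theta - b)^2]

Under a dominance model, endorsement probability keeps rising with the trait level; under an ideal-point model it peaks when \theta is near the item location b and falls off on both sides, which is why the two item families call for different FC CAT designs.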
Peer reviewed | Direct link
Xuelan Qiu; Jimmy de la Torre; You-Gan Wang; Jinran Wu – Educational Measurement: Issues and Practice, 2024
Multidimensional forced-choice (MFC) items have been found to be useful for reducing response biases in personality assessments. However, conventional scoring methods for MFC items result in ipsative data, hindering wider application of the MFC format. In the last decade, a number of item response theory (IRT) models have been developed,…
Descriptors: Item Response Theory, Personality Traits, Personality Measures, Personality Assessment
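The ipsativity problem mentioned above can be seen with a small numeric sketch: under conventional rank scoring, every block contributes the same number of points, so trait totals always sum to a constant. The triplet format, trait labels, and ranks below are hypothetical.

    # Hedged sketch (hypothetical data): conventional rank scoring of MFC triplets.
    # Each block presents three statements, one per trait (E, A, C); the respondent's
    # ranks award 3, 2, and 1 points to the most- through least-endorsed statement.
    blocks = [
        {"E": 3, "A": 1, "C": 2},
        {"E": 1, "A": 3, "C": 2},
        {"E": 2, "A": 3, "C": 1},
    ]
    totals = {t: sum(b[t] for b in blocks) for t in ("E", "A", "C")}
    print(totals)                # {'E': 6, 'A': 7, 'C': 5}
    print(sum(totals.values()))  # always 18 = 3 blocks x (3 + 2 + 1)

Because the grand total is fixed by design, such scores only support within-person comparisons; IRT models for MFC data are attractive precisely because they recover normative trait estimates instead.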
Peer reviewed | Direct link
Yang, Lihong; Reckase, Mark D. – Educational and Psychological Measurement, 2020
The present study extended the "p"-optimality method to the multistage computerized adaptive test (MST) context in developing optimal item pools to support different MST panel designs under different test configurations. Using the Rasch model, simulated optimal item pools were generated with and without practical constraints of exposure…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Item Response Theory
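A useful reference point for what "optimal" means here: under the Rasch model, item information is maximized when item difficulty matches examinee ability, so an optimal pool concentrates difficulties where examinees are routed. The equations below are the standard Rasch results, shown for context rather than as part of the p-optimality procedure itself.

    P_i(\theta) = \frac{\exp(\theta - b_i)}{1 + \exp(\theta - b_i)}, \qquad I_i(\theta) = P_i(\theta)\,[1 - P_i(\theta)]

The information I_i(\theta) reaches its maximum of 0.25 at \theta = b_i and decreases as the mismatch between ability and difficulty grows.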
Peer reviewed | Direct link
Hogenboom, Sally A. M.; Hermans, Felienne F. J.; Van der Maas, Han L. J. – Computer Science Education, 2022
Background and Context: Valid assessment of understanding of programming concepts in primary school children is essential for implementing and improving programming education. Objective: We developed and validated the Computerized Adaptive Programming Concepts Test (CAPCT) with a novel application of Item Response Theory. The CAPCT is a web-based and…
Descriptors: Computer Assisted Testing, Adaptive Testing, Programming, Knowledge Level
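As background on what "adaptive" entails in a test like the CAPCT, the sketch below shows a generic CAT loop: estimate ability, administer the most informative remaining item, update, repeat. The 2PL model, EAP estimation on a grid, fixed test length, and all parameter values are assumptions for illustration, not the CAPCT's actual implementation.

    import math, random

    random.seed(1)
    # Hypothetical 2PL item bank: a = discrimination, b = difficulty.
    bank = [{"a": random.uniform(0.8, 2.0), "b": random.uniform(-2.0, 2.0)} for _ in range(200)]
    grid = [g / 10 for g in range(-40, 41)]             # theta grid from -4 to 4
    posterior = [math.exp(-0.5 * g * g) for g in grid]  # standard normal prior (unnormalized)

    def prob(item, theta):                              # 2PL probability of a correct response
        return 1.0 / (1.0 + math.exp(-item["a"] * (theta - item["b"])))

    def info(item, theta):                              # Fisher information of one item
        p = prob(item, theta)
        return item["a"] ** 2 * p * (1 - p)

    true_theta, used = 0.7, set()
    for _ in range(20):                                 # fixed-length stopping rule for brevity
        theta_hat = sum(g * w for g, w in zip(grid, posterior)) / sum(posterior)  # EAP estimate
        nxt = max((i for i in range(len(bank)) if i not in used),
                  key=lambda i: info(bank[i], theta_hat))                         # max-information selection
        used.add(nxt)
        x = 1 if random.random() < prob(bank[nxt], true_theta) else 0             # simulated response
        posterior = [w * (prob(bank[nxt], g) if x else 1 - prob(bank[nxt], g))
                     for g, w in zip(grid, posterior)]                            # Bayesian update
    print("final EAP estimate:",
          round(sum(g * w for g, w in zip(grid, posterior)) / sum(posterior), 2))

A real operational CAT adds content balancing, exposure control, and a precision-based stopping rule, but the select-update cycle is the core mechanism.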
Peer reviewed | Direct link
Giada Spaccapanico Proietti; Mariagiulia Matteucci; Stefania Mignani; Bernard P. Veldkamp – Journal of Educational and Behavioral Statistics, 2024
Classical automated test assembly (ATA) methods assume fixed and known coefficients for the constraints and the objective function. This assumption does not hold for estimates of item response theory parameters, which are crucial elements in classical test assembly models. To account for uncertainty in ATA, we propose a chance-constrained…
Descriptors: Automation, Computer Assisted Testing, Ambiguity (Context), Item Response Theory
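As background, a classical ATA model is a 0-1 program whose coefficients (item information values, content counts) are treated as exact; the generic max-information formulation below is a sketch, not the authors' exact specification.

    \max_{x} \; \sum_{i=1}^{N} I_i(\theta_0)\, x_i
    \quad \text{s.t.} \quad \sum_{i=1}^{N} x_i = n, \qquad \sum_{i \in C_k} x_i \le c_k \ \ (\forall k), \qquad x_i \in \{0, 1\}

Here I_i(\theta_0) is item i's information at a target ability point, n is the test length, and each C_k is a content category with cap c_k. Because the I_i(\theta_0) are estimates rather than known constants, a chance-constrained formulation instead maximizes a value that the assembled test's information must exceed with a prescribed probability, which is the kind of uncertainty-aware model the abstract refers to.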
Peer reviewed | Direct link
Clements, Douglas H.; Banse, Holland; Sarama, Julie; Tatsuoka, Curtis; Joswick, Candace; Hudyma, Aaron; Van Dine, Douglas W.; Tatsuoka, Kikumi K. – Mathematical Thinking and Learning: An International Journal, 2022
Researchers often develop instruments using correctness scores (and a variety of theories and techniques, such as Item Response Theory) for validation and scoring. Less frequently, observations of children's strategies are incorporated into the design, development, and application of assessments. We conducted individual interviews of 833…
Descriptors: Item Response Theory, Computer Assisted Testing, Test Items, Mathematics Tests
Peer reviewed | PDF on ERIC: Download full text
Sahin, Melek Gulsah – International Journal of Assessment Tools in Education, 2020
Computer Adaptive Multistage Testing (ca-MST), which takes advantage of computer technology and an adaptive test format, is widely used and is now a popular topic in assessment and evaluation. This study aims to analyze the effect of different panel designs, module lengths, and different sequences of a-parameter values across stages and change in…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Response Theory
Peer reviewed | Direct link
Suzumura, Nana – Language Assessment Quarterly, 2022
The present study is part of a larger mixed methods project that investigated the speaking section of the Advanced Placement (AP) Japanese Language and Culture Exam. It examined assumptions underlying the evaluation inference through a content analysis of test-taker responses. Results of the content analysis were integrated with those of a many-facet…
Descriptors: Content Analysis, Test Wiseness, Advanced Placement, Computer Assisted Testing
Peer reviewed | PDF on ERIC: Download full text
Koch, Marco; Spinath, Frank M.; Greiff, Samuel; Becker, Nicolas – Journal of Intelligence, 2022
Figural matrices tasks are one of the most prominent item formats used in intelligence tests, and their relevance for the assessment of cognitive abilities is unquestionable. However, despite the open science movement's efforts to make scientific research accessible at all levels, there is a lack of royalty-free figural matrices tests. The Open…
Descriptors: Intelligence, Intelligence Tests, Computer Assisted Testing, Test Items
Peer reviewed | Direct link
Rafatbakhsh, Elaheh; Ahmadi, Alireza; Moloodi, Amirsaeid; Mehrpour, Saeed – Educational Measurement: Issues and Practice, 2021
Test development is a crucial yet difficult and time-consuming part of any educational system, and the task often falls entirely on teachers. Automatic item generation systems have recently drawn attention as they can reduce this burden and make test development more convenient. Such systems have been developed to generate items for vocabulary,…
Descriptors: Test Construction, Test Items, Computer Assisted Testing, Multiple Choice Tests
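To illustrate the basic idea behind such systems, the sketch below generates one multiple-choice vocabulary item from a sentence template, a target word, and a distractor pool; the template, words, and distractor rule are hypothetical and are not drawn from the system described in the article.

    import random

    # Hedged sketch of template-based automatic item generation (hypothetical data).
    random.seed(7)
    template = "The hikers were {} after walking all day, so they stopped to rest."
    target = "exhausted"
    distractor_pool = ["delighted", "generous", "transparent", "punctual"]

    def generate_item(template, target, distractor_pool, n_options=4):
        options = random.sample(distractor_pool, n_options - 1) + [target]
        random.shuffle(options)                      # randomize the key position
        return {"stem": template.format("_____"),
                "options": options,
                "key": options.index(target)}

    item = generate_item(template, target, distractor_pool)
    print(item["stem"])
    for label, option in zip("ABCD", item["options"]):
        print(f"  {label}. {option}")

Production systems add linguistic resources (corpora, frequency lists, semantic similarity measures) to choose plausible distractors automatically, which is where most of the engineering effort lies.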
Peer reviewed | Direct link
Ponce, Héctor R.; Mayer, Richard E.; Loyola, María Soledad – Journal of Educational Computing Research, 2021
One of the most common technology-enhanced items used in large-scale K-12 testing programs is the drag-and-drop response interaction. The main research questions in this study are: (a) Does adding a drag-and-drop interface to an online test affect the accuracy of student performance? (b) Does adding a drag-and-drop interface to an online test…
Descriptors: Computer Assisted Testing, Test Construction, Standardized Tests, Elementary School Students
Peer reviewed | Direct link
Huang, Heng-Tsung Danny; Hung, Shao-Ting Alan; Chao, Hsiu-Yi; Chen, Jyun-Hong; Lin, Tsui-Peng; Shih, Ching-Lin – Language Assessment Quarterly, 2022
Prompted by Taiwanese university students' increasing demand for English proficiency assessment, the absence of a test designed specifically for this demographic subgroup, and the lack of a localized and freely-accessible proficiency measure, this project set out to develop and validate a computerized adaptive English proficiency testing (E-CAT)…
Descriptors: Computer Assisted Testing, English (Second Language), Second Language Learning, Second Language Instruction
Peer reviewed | Direct link
Joo, Seang-Hwane; Lee, Philseok; Stark, Stephen – Journal of Educational Measurement, 2018
This research derived information functions and proposed new scalar information indices to examine the quality of multidimensional forced choice (MFC) items based on the RANK model. We also explored how GGUM-RANK information, latent trait recovery, and reliability varied across three MFC formats: pairs (two response alternatives), triplets (three…
Descriptors: Item Response Theory, Models, Item Analysis, Reliability
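Because the Fisher information for a multidimensional model is a matrix rather than a single number, scalar indices are needed to rank items or compare formats; two widely used summaries are shown below as generic examples (the indices proposed in the article may differ).

    \text{D-optimality: } \det\!\big(\mathbf{I}(\boldsymbol{\theta})\big), \qquad
    \text{A-optimality: } \operatorname{tr}\!\big(\mathbf{I}(\boldsymbol{\theta})^{-1}\big)

A larger determinant, or a smaller trace of the inverse, indicates more precise joint measurement of the latent traits at \boldsymbol{\theta}.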
Peer reviewed | PDF on ERIC: Download full text
Sahin, Murat Dogan; Gelbal, Selahattin – International Journal of Assessment Tools in Education, 2020
The purpose of this study was to conduct a real-time multidimensional computerized adaptive test (MCAT) using data from a previous paper-and-pencil test (PPT) covering the grammar and vocabulary dimensions of an end-of-term proficiency exam administered to students in a preparatory class at a university. An item pool was established through four…
Descriptors: Adaptive Testing, Computer Assisted Testing, Language Tests, Language Proficiency