Publication Date
In 2025: 1
Since 2024: 4
Since 2021 (last 5 years): 11
Since 2016 (last 10 years): 25
Since 2006 (last 20 years): 95
Descriptor
Computer Assisted Testing: 253
Test Items: 253
Adaptive Testing: 141
Test Construction: 99
Item Response Theory: 74
Item Banks: 62
Simulation: 50
Item Analysis: 34
Scoring: 34
Selection: 34
Estimation (Mathematics): 33
Education Level
Higher Education: 16
Elementary Education: 15
Elementary Secondary Education: 15
Postsecondary Education: 10
Secondary Education: 8
Grade 5: 5
Grade 8: 5
Middle Schools: 5
High Schools: 4
Grade 6: 3
Grade 9: 3
Audience
Practitioners: 1
Researchers: 1
Location
Oregon: 8
Taiwan: 6
Netherlands: 3
Spain: 3
Australia: 2
Canada: 2
Denmark: 2
Nebraska: 2
United States: 2
Austria: 1
Belgium: 1
Laws, Policies, & Programs
Individuals with Disabilities…: 8
Semere Kiros Bitew; Amir Hadifar; Lucas Sterckx; Johannes Deleu; Chris Develder; Thomas Demeester – IEEE Transactions on Learning Technologies, 2024
Multiple-choice questions (MCQs) are widely used in digital learning systems, as they allow for automating the assessment process. However, owing to the increased digital literacy of students and the advent of social media platforms, MCQ tests are widely shared online, and teachers are continuously challenged to create new questions, which is an…
Descriptors: Multiple Choice Tests, Computer Assisted Testing, Test Construction, Test Items
Aditya Shah; Ajay Devmane; Mehul Ranka; Prathamesh Churi – Education and Information Technologies, 2024
Online learning has grown due to technological advances and the flexibility it offers. Online examinations measure students' knowledge and skills, but traditional question papers suffer from inconsistent difficulty levels, arbitrary question allocation, and poor grading. The suggested model calibrates question paper difficulty based on student performance to…
Descriptors: Computer Assisted Testing, Difficulty Level, Grading, Test Construction
Qiao, Chen; Hu, Xiao – IEEE Transactions on Learning Technologies, 2023
Free text answers to short questions can reflect students' mastery of concepts and their relationships relevant to learning objectives. However, automating the assessment of free text answers has been challenging due to the complexity of natural language. Existing studies often predict the scores of free text answers in a "black box"…
Descriptors: Computer Assisted Testing, Automation, Test Items, Semantics
Lim, Hwanggyu; Choe, Edison M. – Journal of Educational Measurement, 2023
The residual differential item functioning (RDIF) detection framework was developed recently under a linear testing context. To explore the potential application of this framework to computerized adaptive testing (CAT), the present study investigated the utility of the RDIF_R statistic both as an index for detecting uniform DIF of…
Descriptors: Test Items, Computer Assisted Testing, Item Response Theory, Adaptive Testing
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 2022
Two independent statistical tests of item compromise are presented, one based on the test takers' responses and the other on their response times (RTs) on the same items. The tests can be used to monitor an item in real time during online continuous testing but are also applicable as part of post hoc forensic analysis. The two test statistics are…
Descriptors: Test Items, Item Analysis, Item Response Theory, Computer Assisted Testing
Jila Niknejad; Margaret Bayer – International Journal of Mathematical Education in Science and Technology, 2025
In Spring 2020, redesigning online assessments to preserve integrity became a priority for many educators. Many of us found methods to proctor examinations using Zoom and proctoring software. Such examinations pose their own issues. To reduce technical difficulties and cost, many Zoom-proctored examination sessions were shortened;…
Descriptors: Mathematics Instruction, Mathematics Tests, Computer Assisted Testing, Computer Software
Steedle, Jeffrey T.; Cho, Young Woo; Wang, Shichao; Arthur, Ann M.; Li, Dongmei – Educational Measurement: Issues and Practice, 2022
As testing programs transition from paper to online testing, they must study mode comparability to support the exchangeability of scores from different testing modes. To that end, a series of three mode comparability studies was conducted during the 2019-2020 academic year with examinees randomly assigned to take the ACT college admissions exam on…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Scores, Test Format
OECD Publishing, 2019
Log files from computer-based assessment can help better understand respondents' behaviours and cognitive strategies. Analysis of timing information from Programme for the International Assessment of Adult Competencies (PIAAC) reveals large differences in the time participants take to answer assessment items, as well as large country differences…
Descriptors: Adults, Computer Assisted Testing, Test Items, Reaction Time
Dave Kush; Anne Dahl; Filippa Lindahl – Second Language Research, 2024
Embedded questions (EQs) are islands for filler-gap dependency formation in English, but not in Norwegian. Kush and Dahl (2022) found that first language (L1) Norwegian participants often accepted filler-gap dependencies into EQs in second language (L2) English, and proposed that this reflected persistent transfer from Norwegian of the functional…
Descriptors: Transfer of Training, Norwegian, Native Language, Grammar
Jewsbury, Paul A.; van Rijn, Peter W. – Journal of Educational and Behavioral Statistics, 2020
In large-scale educational assessment data consistent with a simple-structure multidimensional item response theory (MIRT) model, where every item measures only one latent variable, separate unidimensional item response theory (UIRT) models for each latent variable are often calibrated for practical reasons. While this approach can be valid for…
Descriptors: Item Response Theory, Computation, Test Items, Adaptive Testing
Cui, Zhongmin; Liu, Chunyan; He, Yong; Chen, Hanwei – Journal of Educational Measurement, 2018
Allowing item review in computerized adaptive testing (CAT) is getting more attention in the educational measurement field as more and more testing programs adopt CAT. The research literature has shown that allowing item review in an educational test could result in more accurate estimates of examinees' abilities. The practice of item review in…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Test Wiseness
Nazaretsky, Tanya; Hershkovitz, Sara; Alexandron, Giora – International Educational Data Mining Society, 2019
Sequencing items in adaptive learning systems typically relies on a large pool of interactive question items that are analyzed into a hierarchy of skills, also known as Knowledge Components (KCs). Educational data mining techniques can be used to analyze students' response data in order to optimize the mapping of items to KCs, with similarity-based…
Descriptors: Intelligent Tutoring Systems, Item Response Theory, Measurement, Testing
Gorbett, Luke J.; Chapman, Kayla E.; Liberatore, Matthew W. – Advances in Engineering Education, 2022
Spreadsheets are a core computational tool for practicing engineers and engineering students. While Microsoft Excel, Google Sheets, and other spreadsheet tools have some differences, numerous formulas, functions, and other tasks are common across versions and platforms. Building upon learning science frameworks showing that interactive activities…
Descriptors: Spreadsheets, Computer Software, Engineering Education, Textbooks
Cappaert, Kevin J.; Wen, Yao; Chang, Yu-Feng – Measurement: Interdisciplinary Research and Perspectives, 2018
Events such as curriculum changes or practice effects can lead to item parameter drift (IPD) in computer adaptive testing (CAT). The current investigation introduced a point- and weight-adjusted D^2 method for IPD detection for use in a CAT environment when items are suspected of drifting across test administrations. Type I error and…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Identification
Rivas, Axel; Scasso, Martín Guillermo – Journal of Education Policy, 2021
Since 2000, the PISA test implemented by the OECD has become the prime benchmark for international comparisons in education. The 2015 PISA edition introduced methodological changes that altered the nature of its results. PISA no longer counted as valid the non-reached items in the final part of the test, assuming that those unanswered questions were more a…
Descriptors: Test Validity, Computer Assisted Testing, Foreign Countries, Achievement Tests