Publication Date
In 2025: 2
Since 2024: 3
Since 2021 (last 5 years): 13
Since 2016 (last 10 years): 27
Since 2006 (last 20 years): 44
Descriptor
Computer Assisted Testing: 69
Test Items: 69
Testing: 69
Test Construction: 20
Adaptive Testing: 16
Difficulty Level: 16
Test Format: 16
Comparative Analysis: 15
Item Analysis: 14
Language Tests: 13
Item Response Theory: 11
Author
Kim, Do-Hong: 2
Kuneshka, Loreta: 2
Teneqexhi, Romeo: 2
Weiss, David J.: 2
Abayeva, Nella F.: 1
Ackerman, Debra J.: 1
Akbay, Lokman: 1
Akbay, Tuncer: 1
Alexandron, Giora: 1
Ali, Usama: 1
Becker, Kirk A.: 1
Audience
Practitioners: 4
Researchers: 3
Teachers: 2
Students: 1
Location
Albania: 2
Turkey: 2
Delaware: 1
Florida: 1
Germany: 1
Illinois: 1
Maryland: 1
New Hampshire: 1
North Carolina: 1
Ohio: 1
Oregon: 1
Laws, Policies, & Programs
Americans with Disabilities…: 1
Assessments and Surveys
National Assessment of…: 3
Test of English as a Foreign…: 2
ACT Assessment: 1
Advanced Placement…: 1
Graduate Record Examinations: 1
National Teacher Examinations: 1
Program for International…: 1
Ye Ma; Deborah J. Harris – Educational Measurement: Issues and Practice, 2025
Item position effect (IPE) refers to situations where an item performs differently when it is administered in different positions on a test. Most previous research has investigated IPE under linear testing; IPE under adaptive testing remains understudied. In addition, the existence of IPE might violate Item…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Test Items
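The position effect this abstract describes can be illustrated with a small simulation (a sketch of the general idea, not the authors' model): under a Rasch item response model, treat an item's effective difficulty as drifting upward with its serial position, so the same examinee answers it correctly less often when it appears late in the test. All parameter values (`drift_per_item`, the ability and difficulty of 0.0) are hypothetical.

```python
import math
import random

random.seed(42)

def p_correct(theta, b):
    """Rasch model: probability of a correct response for ability theta, difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def simulate_accuracy(theta, b, position, drift_per_item=0.02, n=100_000):
    """Simulate responses to an item whose effective difficulty rises linearly
    with its position on the test (a toy item position effect)."""
    b_eff = b + drift_per_item * position  # hypothetical position effect as a difficulty shift
    return sum(random.random() < p_correct(theta, b_eff) for _ in range(n)) / n

# Same item, same examinee ability, administered early vs. late:
early = simulate_accuracy(theta=0.0, b=0.0, position=1)
late = simulate_accuracy(theta=0.0, b=0.0, position=40)
print(early, late)  # observed accuracy drops when the item appears later
```

Because adaptive testing selects items using their calibrated difficulties, a drift like this violates the invariance that item response theory assumes, which is the concern the abstract raises.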
Lae Lae Shwe; Sureena Matayong; Suntorn Witosurapot – Education and Information Technologies, 2024
Multiple Choice Questions (MCQs) are an important evaluation technique for both examinations and learning activities. However, the manual creation of questions is time-consuming and challenging for teachers. Hence, there is a notable demand for an Automatic Question Generation (AQG) system. Several systems have been created for this aim, but the…
Descriptors: Difficulty Level, Computer Assisted Testing, Adaptive Testing, Multiple Choice Tests
Jila Niknejad; Margaret Bayer – International Journal of Mathematical Education in Science and Technology, 2025
In Spring 2020, redesigning online assessments to preserve integrity became a priority for many educators. Many of us found methods to proctor examinations using Zoom and proctoring software. Such examinations pose their own issues. To reduce technical difficulties and cost, many Zoom-proctored examination sessions were shortened;…
Descriptors: Mathematics Instruction, Mathematics Tests, Computer Assisted Testing, Computer Software
Elkhatat, Ahmed M. – International Journal for Educational Integrity, 2022
Examinations form part of the assessment processes that constitute the basis for benchmarking individual educational progress, and must consequently fulfill credibility, reliability, and transparency standards in order to promote learning outcomes and ensure academic integrity. A randomly selected question examination (RSQE) is considered to be an…
Descriptors: Integrity, Monte Carlo Methods, Credibility, Reliability
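As a rough sketch of the randomly selected question examination (RSQE) idea evaluated here with Monte Carlo methods (the pool size, exam length, and overlap metric below are illustrative assumptions, not the paper's actual setup), one can estimate how many questions two independently generated exams are expected to share:

```python
import random

random.seed(0)

POOL_SIZE = 200   # hypothetical question-bank size
EXAM_LEN = 20     # questions drawn per examinee
TRIALS = 10_000   # Monte Carlo trials

def expected_overlap(pool_size, exam_len, trials):
    """Estimate the mean number of questions two independently drawn exams share."""
    pool = range(pool_size)
    total = 0
    for _ in range(trials):
        a = set(random.sample(pool, exam_len))  # exam for student A
        b = set(random.sample(pool, exam_len))  # exam for student B
        total += len(a & b)                     # questions in common
    return total / trials

overlap = expected_overlap(POOL_SIZE, EXAM_LEN, TRIALS)
print(overlap)  # ≈ EXAM_LEN**2 / POOL_SIZE, i.e. about 2 shared questions
```

A low expected overlap is one way such a design limits answer sharing between examinees, while the analytical value (exam_len² / pool_size for uniform sampling) gives a quick check on the simulation.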
Ozsoy, Seyma Nur; Kilmen, Sevilay – International Journal of Assessment Tools in Education, 2023
In this study, Kernel test equating methods were compared under NEAT and NEC designs. In NEAT design, Kernel post-stratification and chain equating methods taking into account optimal and large bandwidths were compared. In the NEC design, gender and/or computer/tablet use was considered as a covariate, and Kernel test equating methods were…
Descriptors: Equated Scores, Testing, Test Items, Statistical Analysis
Olsho, Alexis; Smith, Trevor I.; Eaton, Philip; Zimmerman, Charlotte; Boudreaux, Andrew; White Brahmia, Suzanne – Physical Review Physics Education Research, 2023
We developed the Physics Inventory of Quantitative Literacy (PIQL) to assess students' quantitative reasoning in introductory physics contexts. The PIQL includes several "multiple-choice/multiple-response" (MCMR) items (i.e., multiple-choice questions for which more than one response may be selected) as well as traditional single-response…
Descriptors: Multiple Choice Tests, Science Tests, Physics, Measures (Individuals)
Ted M. Clark; Daniel A. Turner; Darian C. Rostam – Journal of Chemical Education, 2022
Administering exams in large-enrollment courses is challenging, and the systems in place for accomplishing this task were upended in the spring of 2020, when instruction and testing suddenly moved online due to the COVID-19 pandemic. In the following year, when courses remained online, approaches to improve exam security included…
Descriptors: Chemistry, Science Instruction, Supervision, Computer Assisted Testing
Susanti, Yuni; Tokunaga, Takenobu; Nishikawa, Hitoshi – Research and Practice in Technology Enhanced Learning, 2020
The present study focuses on the integration of an automatic question generation (AQG) system and a computerised adaptive test (CAT). We conducted two experiments. In the first experiment, we administered sets of questions to English learners to gather their responses. We further used their responses in the second experiment, which is a…
Descriptors: Computer Assisted Testing, Test Items, Simulation, English Language Learners
Patel, Nirmal; Sharma, Aditya; Shah, Tirth; Lomas, Derek – Journal of Educational Data Mining, 2021
Process Analysis is an emerging approach to discover meaningful knowledge from temporal educational data. The study presented in this paper shows how we used Process Analysis methods on the National Assessment of Educational Progress (NAEP) test data for modeling and predicting student test-taking behavior. Our process-oriented data exploration…
Descriptors: Learning Analytics, National Competency Tests, Evaluation Methods, Prediction
Perkins, Beth A.; Satkus, Paulius; Finney, Sara J. – Journal of Psychoeducational Assessment, 2020
Few studies have examined the psychometric properties of the test-related items from the Achievement Emotions Questionnaire (AEQ). Using a sample of 955 university students, we examined the factor structure of 12 emotion items measuring test-related anger, boredom, enjoyment, and pride. Results indicated the four emotions were distinct, allowing…
Descriptors: Affective Measures, Questionnaires, Psychometrics, Test Items
Nazaretsky, Tanya; Hershkovitz, Sara; Alexandron, Giora – International Educational Data Mining Society, 2019
Sequencing items in adaptive learning systems typically relies on a large pool of interactive question items that are analyzed into a hierarchy of skills, also known as Knowledge Components (KCs). Educational data mining techniques can be used to analyze students' response data in order to optimize the mapping of items to KCs, with similarity-based…
Descriptors: Intelligent Tutoring Systems, Item Response Theory, Measurement, Testing
Lynch, Sarah – Practical Assessment, Research & Evaluation, 2022
In today's digital age, tests are increasingly being delivered on computers. Many of these computer-based tests (CBTs) have been adapted from paper-based tests (PBTs). However, this change in mode of test administration has the potential to introduce construct-irrelevant variance, affecting the validity of score interpretations. Because of this,…
Descriptors: Computer Assisted Testing, Tests, Scores, Scoring
Nixi Wang – ProQuest LLC, 2022
Measurement errors attributable to cultural issues are complex and challenging for educational assessments. We need assessments that are sensitive to the cultural heterogeneity of populations, and psychometric methods appropriate to address fairness and equity concerns. Building on the research of culturally responsive assessment, this dissertation…
Descriptors: Culturally Relevant Education, Testing, Equal Education, Validity
Lin, Ye – ProQuest LLC, 2018
With the widespread use of technology in the assessment field, many testing programs use both computer-based tests (CBTs) and paper-and-pencil tests (PPTs). Both the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 2014) and the International Guidelines on Computer-Based and Internet Delivered Testing (International Test…
Descriptors: Computer Assisted Testing, Testing, Student Evaluation, Elementary School Students
Wang, Shichao; Li, Dongmei; Steedle, Jeffrey – ACT, Inc., 2021
Speeded tests set time limits so that few examinees can reach all items, and power tests allow most test-takers sufficient time to attempt all items. Educational achievement tests are sometimes described as "timed power tests" because the amount of time provided is intended to allow nearly all students to complete the test, yet this…
Descriptors: Timed Tests, Test Items, Achievement Tests, Testing