Ghio, Fernanda Belén; Bruzzone, Manuel; Rojas-Torres, Luis; Cupani, Marcos – European Journal of Science and Mathematics Education, 2022
In the last decades, the development of computerized adaptive testing (CAT) has allowed more precise measurements with a smaller number of items. In this study, we develop an item bank (IB) to generate the adaptive algorithm and simulate the functioning of CAT to assess the domains of mathematical knowledge in Argentinian university students…
Descriptors: Test Items, Item Banks, Adaptive Testing, Mathematics Tests
Sarah Alahmadi; Christine E. DeMars – Applied Measurement in Education, 2024
Large-scale educational assessments are sometimes considered low-stakes, increasing the possibility of confounding true performance level with low motivation. These concerns are amplified in remote testing conditions. To remove the effects of low effort levels in responses observed in remote low-stakes testing, several motivation filtering methods…
Descriptors: Multiple Choice Tests, Item Response Theory, College Students, Scores
Peter F. Halpin – Society for Research on Educational Effectiveness, 2024
Background: Meta-analyses of educational interventions have consistently documented the importance of methodological factors related to the choice of outcome measures. In particular, when interventions are evaluated using measures developed by researchers involved with the intervention or its evaluation, the effect sizes tend to be larger than…
Descriptors: College Students, College Faculty, STEM Education, Item Response Theory
Simsek, Irfan; Balaban, M. Erdal; Ergin, Hatice – Turkish Online Journal of Educational Technology - TOJET, 2019
With the Expert Examination System developed within the scope of this study, questions can be prepared in accordance with the measurement and evaluation criteria in education, which helps measure students' actual knowledge levels effectively. It is possible to create exam forms by using these questions. The developed Expert Examination System…
Descriptors: Computer Assisted Testing, Artificial Intelligence, Adaptive Testing, Item Response Theory
Diyorjon Abdullaev; Djuraeva Laylo Shukhratovna; Jamoldinova Odinaxon Rasulovna; Jumanazarov Umid Umirzakovich; Olga V. Staroverova – International Journal of Language Testing, 2024
Local item dependence (LID) refers to the situation where responses to items in a test or questionnaire are influenced by responses to other items in the test. This could be due to shared prompts, item content similarity, and deficiencies in item construction. LID due to a shared prompt is highly probable in cloze tests where items are nested…
Descriptors: Undergraduate Students, Foreign Countries, English (Second Language), Second Language Learning
Yasuda, Jun-ichiro; Mae, Naohiro; Hull, Michael M.; Taniguchi, Masa-aki – Physical Review Physics Education Research, 2021
As a method to shorten the test time of the Force Concept Inventory (FCI), we suggest the use of computerized adaptive testing (CAT). CAT is the process of administering a test on a computer, with items (i.e., questions) selected based upon the responses of the examinee to prior items. In so doing, the test length can be significantly shortened.…
Descriptors: Foreign Countries, College Students, Student Evaluation, Computer Assisted Testing
Camenares, Devin – International Journal for the Scholarship of Teaching and Learning, 2022
Balancing assessment of learning outcomes with the expectations of students is a perennial challenge in education. Difficult exams, in which many students perform poorly, exacerbate this problem and can inspire a wide variety of interventions, such as a grading curve. However, addressing poor performance can sometimes distort or inflate grades and…
Descriptors: College Students, Student Evaluation, Tests, Test Items
Agus Santoso; Heri Retnawati; Timbul Pardede; Ibnu Rafi; Munaya Nikma Rosyada; Gulzhaina K. Kassymova; Xu Wenxin – Practical Assessment, Research & Evaluation, 2024
The test blueprint is important in test development, where it guides the test item writer in creating test items according to the desired objectives and specifications or characteristics (so-called a priori item characteristics), such as the level of item difficulty and the distribution of items by difficulty level.…
Descriptors: Foreign Countries, Undergraduate Students, Business English, Test Construction
Sahin, Murat Dogan; Gelbal, Selahattin – International Journal of Assessment Tools in Education, 2020
The purpose of this study was to conduct a real-time multidimensional computerized adaptive test (MCAT) using data from a previous paper-pencil test (PPT) regarding the grammar and vocabulary dimensions of an end-of-term proficiency exam conducted on students in a preparatory class at a university. An item pool was established through four…
Descriptors: Adaptive Testing, Computer Assisted Testing, Language Tests, Language Proficiency
Che Lah, Noor Hidayah; Tasir, Zaidatun; Jumaat, Nurul Farhana – Educational Studies, 2023
The aim of the study was to evaluate the extended version of the Problem-Solving Inventory (PSI) via an online learning setting known as the Online Problem-Solving Inventory (OPSI) through the lens of Rasch Model analysis. To date, there is no extended version of the PSI for online settings even though many researchers have used it; thus, this…
Descriptors: Problem Solving, Measures (Individuals), Electronic Learning, Item Response Theory
Mimi Ismail; Ahmed Al-Badri; Said Al-Senaidi – Journal of Education and e-Learning Research, 2025
This study aimed to reveal the differences in individuals' abilities, their standard errors, and the psychometric properties of the test according to the two methods of applying the test (electronic and paper). The descriptive approach was used to achieve the study's objectives. The study sample consisted of 74 male and female students at the…
Descriptors: Achievement Tests, Computer Assisted Testing, Psychometrics, Item Response Theory
Angelica Garzon Umerenkova; Jesus de la Fuente Arias – Electronic Journal of Research in Educational Psychology, 2024
Introduction: Self-regulation is the ability to adequately plan and manage one's own behavior in a flexible manner. It is a predictor of well-being, health, academic performance, among others. The psychometric characterization of the Self-Regulation Questionnaire-Abbreviated (CAR-abr.) composed of 17 items is presented. A versatile instrument,…
Descriptors: Self Control, Self Management, Questionnaires, Psychometrics
Kosan, Aysen Melek Aytug; Koç, Nizamettin; Elhan, Atilla Halil; Öztuna, Derya – International Journal of Assessment Tools in Education, 2019
Progress Test (PT) is a form of assessment that simultaneously measures ability levels of all students in a certain educational program and their progress over time by providing them with same questions and repeating the process at regular intervals with parallel tests. Our objective was to generate an item bank for the PT and to examine the…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Medical Education
Liu, Kai; Zhang, Longfei; Tu, Dongbo; Cai, Yan – SAGE Open, 2022
We aimed to develop an item bank of computerized adaptive testing for eating disorders (CAT-ED) in Chinese university students to increase measurement precision and improve test efficiency. A total of 1,025 Chinese undergraduate respondents answered a series of questions about eating disorders in a paper-pencil test. A total of 133 items from four…
Descriptors: Item Analysis, Eating Disorders, Computer Assisted Testing, Goodness of Fit
Gorney, Kylie; Wollack, James A. – Practical Assessment, Research & Evaluation, 2022
Unlike the traditional multiple-choice (MC) format, the discrete-option multiple-choice (DOMC) format does not necessarily reveal all answer options to an examinee. The purpose of this study was to determine whether the reduced exposure of item content affects test security. We conducted an experiment in which participants were allowed to view…
Descriptors: Test Items, Test Format, Multiple Choice Tests, Item Analysis