Publication Date
  In 2025: 1
  Since 2024: 2
  Since 2021 (last 5 years): 5
  Since 2016 (last 10 years): 8
  Since 2006 (last 20 years): 11
Descriptor
  Foreign Countries: 14
  Test Items: 14
  Test Length: 14
  Accuracy: 6
  Computer Assisted Testing: 5
  Adaptive Testing: 4
  Computation: 4
  Item Response Theory: 4
  Measurement: 4
  Error of Measurement: 3
  Multiple Choice Tests: 3
Author
  Bae, Minryoung: 1
  Budescu, David V.: 1
  Bulut, Okan: 1
  Catts, Ralph: 1
  Dogan, Nuri: 1
  Eggen, T.J.H.M.: 1
  Gawliczek, Piotr: 1
  Hasibe Yahsi Sari: 1
  Hull, Michael M.: 1
  Hulya Kelecioglu: 1
  Kan, Adnan: 1
Publication Type
  Journal Articles: 11
  Reports - Research: 11
  Reports - Evaluative: 2
  Dissertations/Theses -…: 1
  Numerical/Quantitative Data: 1
Education Level
  Higher Education: 4
  Postsecondary Education: 4
  Elementary Secondary Education: 2
  Secondary Education: 2
  Early Childhood Education: 1
  Elementary Education: 1
  Grade 3: 1
  Primary Education: 1
Location
  Turkey: 2
  Asia: 1
  Australia: 1
  Germany: 1
  Iran: 1
  Israel: 1
  Japan: 1
  Netherlands: 1
  South Korea: 1
  Taiwan: 1
  Ukraine: 1
Assessments and Surveys
  Program for International…: 2
  Trends in International…: 2
  Force Concept Inventory: 1
Hasibe Yahsi Sari; Hulya Kelecioglu – International Journal of Assessment Tools in Education, 2025
The aim of the study is to examine the effect of the polytomous item ratio on ability estimation under different conditions in multistage tests (MST) that use mixed-format tests. The study is simulation-based research. In the PISA 2018 application, the ability parameters of the individuals and the item pool were created using the item parameters estimated from…
Descriptors: Test Items, Test Format, Accuracy, Test Length
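The multistage design studied in this entry routes examinees to easier or harder modules as they progress. A minimal two-stage routing sketch in Python follows; the cut scores are illustrative placeholders, not values from the PISA-2018-based design used in the study.

    # Minimal two-stage MST routing sketch; cut points are illustrative only.
    def route(stage1_score, max_score, cuts=(0.4, 0.7)):
        """Route an examinee to a second-stage module from the stage-1 raw score."""
        proportion = stage1_score / max_score
        if proportion < cuts[0]:
            return "easy module"
        if proportion < cuts[1]:
            return "medium module"
        return "hard module"

    print(route(stage1_score=6, max_score=10))   # -> medium module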
Nikola Ebenbeck; Markus Gebhardt – Journal of Special Education Technology, 2024
Technologies that enable individualization for students have significant potential in special education. Computerized Adaptive Testing (CAT) refers to digital assessments that automatically adjust their difficulty level based on students' abilities, allowing for personalized, efficient, and accurate measurement. This article examines whether CAT…
Descriptors: Computer Assisted Testing, Students with Disabilities, Special Education, Grade 3
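The adaptive mechanism this entry describes (adjusting difficulty to the examinee's ability) can be illustrated with a minimal CAT loop under the Rasch model. This is a generic sketch, not code from the cited study.

    import numpy as np

    def rasch_prob(theta, b):
        """Probability of a correct response under the Rasch model."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    def next_item(theta_hat, difficulties, administered):
        """Select the most informative remaining item; under the Rasch model this
        is the item whose difficulty is closest to the current ability estimate."""
        candidates = [i for i in range(len(difficulties)) if i not in administered]
        return min(candidates, key=lambda i: abs(difficulties[i] - theta_hat))

    def update_theta(responses, difficulties, grid=np.linspace(-4, 4, 81)):
        """Crude maximum-likelihood ability update over a fixed grid.
        responses: list of (item_index, correct) pairs."""
        loglik = np.zeros_like(grid)
        for item, correct in responses:
            p = rasch_prob(grid, difficulties[item])
            loglik += np.log(p if correct else 1.0 - p)
        return grid[np.argmax(loglik)]

    # Tiny demonstration with a five-item pool and two observed responses.
    pool = [-1.5, -0.5, 0.0, 0.5, 1.5]
    history = [(2, True), (3, False)]
    theta = update_theta(history, pool)
    print(theta, next_item(theta, pool, administered={2, 3}))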
Yasuda, Jun-ichiro; Hull, Michael M.; Mae, Naohiro – Physical Review Physics Education Research, 2022
This paper presents improvements made to a computerized adaptive testing (CAT)-based version of the FCI (FCI-CAT) with regard to test security and test efficiency. First, we will discuss measures to enhance test security by controlling for item overexposure, decreasing the risk that respondents may (i) memorize the content of a pretest for use on…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Risk Management
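The overexposure control mentioned in this entry can take many forms; one common, simple idea is randomesque selection. The sketch below is a generic illustration, not the FCI-CAT's actual exposure-control procedure, and the item names and information values are made up.

    import random

    def randomesque_select(item_information, administered, k=5):
        """Pick at random among the k most informative unseen items, so that no
        single item is administered to every examinee (a simple exposure rule)."""
        candidates = [(info, item) for item, info in item_information.items()
                      if item not in administered]
        top_k = sorted(candidates, reverse=True)[:k]
        return random.choice(top_k)[1]

    # Example with made-up Fisher information values at the current ability estimate.
    info = {"Q1": 0.42, "Q2": 0.39, "Q3": 0.35, "Q4": 0.30, "Q5": 0.28, "Q6": 0.10}
    print(randomesque_select(info, administered={"Q2"}, k=3))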
Kilic, Abdullah Faruk; Dogan, Nuri – International Journal of Assessment Tools in Education, 2021
Weighted least squares (WLS), weighted least squares mean-and-variance-adjusted (WLSMV), unweighted least squares mean-and-variance-adjusted (ULSMV), maximum likelihood (ML), robust maximum likelihood (MLR) and Bayesian estimation methods were compared in mixed item response type data via Monte Carlo simulation. The percentage of polytomous items,…
Descriptors: Factor Analysis, Computation, Least Squares Statistics, Maximum Likelihood Statistics
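Comparisons like this one follow a standard Monte Carlo pattern: simulate data, estimate with each method, then summarize bias and RMSE across replications. The skeleton below uses trivial stand-in estimators to keep it self-contained; it is not the WLS/WLSMV/ULSMV/ML/MLR/Bayesian machinery compared in the article.

    import numpy as np

    def monte_carlo(methods, true_value=0.5, n_reps=500, n_obs=200, seed=1):
        """Simulate -> estimate with each method -> summarize recovery error."""
        rng = np.random.default_rng(seed)
        errors = {name: [] for name in methods}
        for _ in range(n_reps):
            sample = rng.binomial(1, true_value, size=n_obs)   # simulated responses
            for name, estimator in methods.items():
                errors[name].append(estimator(sample) - true_value)
        return {name: {"bias": float(np.mean(e)),
                       "rmse": float(np.sqrt(np.mean(np.square(e))))}
                for name, e in errors.items()}

    # Two toy "methods" standing in for the real estimators being compared.
    print(monte_carlo({"mean": np.mean, "shrunk": lambda x: 0.9 * np.mean(x)}))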
Gawliczek, Piotr; Krykun, Viktoriia; Tarasenko, Nataliya; Tyshchenko, Maksym; Shapran, Oleksandr – Advanced Education, 2021
The article deals with an innovative, cutting-edge solution within the language testing realm, namely computer adaptive language testing (CALT), in accordance with the NATO Standardization Agreement 6001 (NATO STANAG 6001) requirements for further implementation in foreign language training of personnel of the Armed Forces of Ukraine (AF of…
Descriptors: Computer Assisted Testing, Adaptive Testing, Language Tests, Second Language Instruction
Vaheoja, Monika; Verhelst, N. D.; Eggen, T.J.H.M. – European Journal of Science and Mathematics Education, 2019
In this article, the authors applied profile analysis to Maths exam data to demonstrate how different exam forms, differing in difficulty and length, can be reported and easily interpreted. The results were presented for different groups of participants and for different institutions in different Maths domains by evaluating the balance. Some…
Descriptors: Feedback (Response), Foreign Countries, Statistical Analysis, Scores
Lee, HyeSun – Applied Measurement in Education, 2018
The current simulation study examined the effects of Item Parameter Drift (IPD) occurring in a short scale on parameter estimates in multilevel models where scores from a scale were employed as a time-varying predictor to account for outcome scores. Five factors, including three decisions about IPD, were considered for simulation conditions. It…
Descriptors: Test Items, Hierarchical Linear Modeling, Predictor Variables, Scores
Bae, Minryoung; Lee, Byungmin – English Teaching, 2018
This study examines the effects of text length and question type on Korean EFL readers' reading comprehension of the fill-in-the-blank items in the Korean CSAT. A total of 100 Korean EFL college students participated in the study. After being divided into three proficiency groups, the participants took a reading comprehension test which consisted…
Descriptors: Test Items, Language Tests, Second Language Learning, Second Language Instruction
Qian, Hong – ProQuest LLC, 2013
This dissertation includes three essays: one essay focuses on the effect of teacher preparation programs on teacher knowledge while the other two focus on test-takers' response times on test items. Essay One addresses the problem of how opportunities to learn in teacher preparation programs influence future elementary mathematics teachers'…
Descriptors: Teacher Education Programs, Pedagogical Content Knowledge, Preservice Teacher Education, Preservice Teachers
Bulut, Okan; Kan, Adnan – Eurasian Journal of Educational Research, 2012
Problem Statement: Computerized adaptive testing (CAT) is a sophisticated and efficient way of delivering examinations. In CAT, items for each examinee are selected from an item bank based on the examinee's responses to the items. In this way, the difficulty level of the test is adjusted based on the examinee's ability level. Instead of…
Descriptors: Adaptive Testing, Computer Assisted Testing, College Entrance Examinations, Graduate Students
Wu, Margaret – OECD Publishing (NJ1), 2010
This paper makes an in-depth comparison of the PISA (OECD) and TIMSS (IEA) mathematics assessments conducted in 2003. First, a comparison of survey methodologies is presented, followed by an examination of the mathematics frameworks in the two studies. The methodologies and the frameworks in the two studies form the basis for providing…
Descriptors: Mathematics Achievement, Foreign Countries, Gender Differences, Comparative Analysis
Wang, Wen-Chung; Su, Ya-Hui – Applied Psychological Measurement, 2004
Eight independent variables (differential item functioning [DIF] detection method, purification procedure, item response model, mean latent trait difference between groups, test length, DIF pattern, magnitude of DIF, and percentage of DIF items) were manipulated, and two dependent variables (Type I error and power) were assessed through…
Descriptors: Test Length, Test Bias, Simulation, Item Response Theory
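The two dependent variables in this entry are just flag rates tallied over replications: Type I error on items simulated without DIF, power on items simulated with DIF. A self-contained sketch follows, with random placeholder flags standing in for a real detection method's decisions.

    import numpy as np

    def type1_and_power(flags, dif_items, n_items):
        """flags: boolean array of shape (n_reps, n_items), one DIF decision per
        item per replication; dif_items: indices of items simulated with DIF."""
        dif_mask = np.zeros(n_items, dtype=bool)
        dif_mask[list(dif_items)] = True
        type1 = flags[:, ~dif_mask].mean()   # false-positive rate on DIF-free items
        power = flags[:, dif_mask].mean()    # detection rate on true DIF items
        return type1, power

    # Placeholder decisions: a fake detector that flags the 4 DIF items 80% of
    # the time and the clean items 5% of the time.
    rng = np.random.default_rng(0)
    fake_flags = rng.random((1000, 40)) < np.where(np.arange(40) < 4, 0.80, 0.05)
    print(type1_and_power(fake_flags, dif_items=range(4), n_items=40))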
Catts, Ralph – 1978
The reliability of multiple choice tests--containing different numbers of response options--was investigated for 260 students enrolled in technical college economics courses. Four test forms, constructed from previously used four-option items, were administered, consisting of (1) 60 two-option items--two distractors randomly discarded; (2) 40…
Descriptors: Answer Sheets, Difficulty Level, Foreign Countries, Higher Education
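For background on how reliability relates to test length, the Spearman-Brown prophecy formula is the standard reference point; it is shown here only as context, not as the analysis performed in the cited study.

    def spearman_brown(reliability, length_factor):
        """Predicted reliability when the test is length_factor times as long."""
        return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

    # e.g., doubling a test whose reliability is .70:
    print(round(spearman_brown(0.70, 2.0), 3))   # 0.824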

Budescu, David V.; Nevo, Baruch – Journal of Educational Measurement, 1985
The proportionality model assumes that total testing time is proportional to the number of test items and the number of options per multiple choice test item. This assumption was examined, using test items having from two to five options. The model was not supported. (Author/GDC)
Descriptors: College Entrance Examinations, Foreign Countries, Higher Education, Item Analysis
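The proportionality model under test says total testing time scales with (number of items) x (options per item). A tiny sketch of that assumption, with a purely illustrative time-per-option constant:

    def predicted_time(n_items, options_per_item, minutes_per_option=0.25):
        """Proportionality model: total time ~ c * items * options.
        The 0.25 minutes-per-option constant is illustrative, not an estimate."""
        return minutes_per_option * n_items * options_per_item

    # Under the model, a 40-item five-option test and a 50-item four-option test
    # are predicted to take the same total time:
    print(predicted_time(40, 5), predicted_time(50, 4))   # 50.0 50.0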