Publication Date
In 2025 | 1
Since 2024 | 1
Since 2021 (last 5 years) | 4
Since 2016 (last 10 years) | 16
Since 2006 (last 20 years) | 24
Descriptor
Computer Assisted Testing | 26
Test Items | 12
Adaptive Testing | 11
Foreign Countries | 9
Simulation | 7
Comparative Analysis | 6
Item Response Theory | 6
Test Format | 6
Correlation | 5
Psychometrics | 5
Scores | 5
Source
International Journal of… | 26
Author
Leighton, Jacqueline P. | 2
Aksu Dunya, Beyza | 1
Balboni, Giulia | 1
Bartram, Dave | 1
Bass, Michael | 1
Beyza Aksu Dunya | 1
Bhola, Dennison | 1
Bo, Yuanchao | 1
Boben, Dusica | 1
Breland, Hunter | 1
Bridgeman, Brent | 1
Publication Type
Journal Articles | 26
Reports - Research | 26
Tests/Questionnaires | 2
Speeches/Meeting Papers | 1
Education Level
Higher Education | 4
Secondary Education | 4
Elementary Education | 3
Middle Schools | 3
Grade 8 | 2
Junior High Schools | 2
Postsecondary Education | 2
Early Childhood Education | 1
Grade 3 | 1
Grade 4 | 1
Grade 5 | 1
Location
Germany | 6
China | 3
Denmark | 2
Poland | 2
South Korea | 2
Sweden | 2
Austria | 1
Belgium | 1
Brazil | 1
Bulgaria | 1
Canada | 1
Assessments and Surveys
Graduate Management Admission… | 1
National Assessment of… | 1
Program for International… | 1
Beyza Aksu Dunya; Stefanie Wind – International Journal of Testing, 2025
We explored the practicality of relatively small item pools in the context of low-stakes Computer-Adaptive Testing (CAT), such as CAT procedures that might be used for quick diagnostic or screening exams. We used a basic CAT algorithm without content balancing or exposure control restrictions to reflect low-stakes testing scenarios. We examined…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Achievement
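The "basic CAT algorithm without content balancing and exposure control" that the abstract describes can be sketched roughly as greedy maximum-information item selection. This is a minimal illustration, not the study's implementation: the Rasch model and the tiny difficulty pool are assumptions made here for the sketch.

```python
import math

def rasch_info(theta, b):
    """Fisher information of a Rasch item with difficulty b at ability theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def select_item(theta, pool, used):
    """Greedy maximum-information selection: no content balancing and
    no exposure control, mirroring a low-stakes CAT setup."""
    candidates = [i for i in range(len(pool)) if i not in used]
    return max(candidates, key=lambda i: rasch_info(theta, pool[i]))

# Hypothetical small pool of Rasch difficulty parameters.
pool = [-1.5, -0.5, 0.0, 0.5, 1.5]
print(select_item(0.0, pool, used=set()))  # the b = 0.0 item is most informative at theta = 0
```

With a pool this small, the same few items are selected for every examinee near the same ability level, which is exactly why exposure control becomes a concern outside low-stakes settings.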
Morris, Scott B.; Bass, Michael; Howard, Elizabeth; Neapolitan, Richard E. – International Journal of Testing, 2020
The standard error (SE) stopping rule, which terminates a computer adaptive test (CAT) when the "SE" is less than a threshold, is effective when there are informative questions for all trait levels. However, in domains such as patient-reported outcomes, the items in a bank might all target one end of the trait continuum (e.g., negative…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Banks, Item Response Theory
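A minimal sketch of the SE stopping rule the abstract discusses, assuming a Rasch model where SE(theta) = 1/sqrt(I(theta)). The threshold and maximum-length safeguard values are illustrative choices, not the article's.

```python
import math

def item_info(theta, b):
    """Rasch item information at ability theta for an item of difficulty b."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def should_stop(theta, administered, se_threshold=0.3, max_items=30):
    """SE stopping rule: terminate once SE(theta) = 1/sqrt(total information)
    drops below the threshold. The max-length safeguard matters when the bank
    targets only one end of the trait continuum and the SE never converges."""
    if len(administered) >= max_items:
        return True
    total_info = sum(item_info(theta, b) for b in administered)
    if total_info == 0.0:
        return False
    return 1.0 / math.sqrt(total_info) < se_threshold

print(should_stop(0.0, [0.0] * 12))  # SE = 1/sqrt(3) ≈ 0.58 > 0.3, so keep testing: False
```

The failure mode the abstract points at shows up when `item_info` stays near zero for examinees far from the bank's difficulty range: information accumulates too slowly for the SE ever to cross the threshold, and only the length safeguard ends the test.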
Shin, Jinnie; Gierl, Mark J. – International Journal of Testing, 2022
Over the last five years, tremendous strides have been made in advancing the AIG methodology required to produce items in diverse content areas. However, the one content area where enormous problems remain unsolved is language arts, generally, and reading comprehension, more specifically. While reading comprehension test items can be created using…
Descriptors: Reading Comprehension, Test Construction, Test Items, Natural Language Processing
Magraw-Mickelson, Zoe; Wang, Harry H.; Gollwitzer, Mario – International Journal of Testing, 2022
Much psychological research depends on participants' diligence in filling out materials such as surveys. However, not all participants are motivated to respond attentively, which leads to unintended issues with data quality, known as careless responding. Our question is: how do different modes of data collection--paper/pencil, computer/web-based,…
Descriptors: Response Style (Tests), Surveys, Data Collection, Test Format
Wise, Steven L.; Soland, James; Bo, Yuanchao – International Journal of Testing, 2020
Disengaged test taking tends to be most prevalent with low-stakes tests. This has led to questions about the validity of aggregated scores from large-scale international assessments such as PISA and TIMSS, as previous research has found a meaningful correlation between the mean engagement and mean performance of countries. The current study, using…
Descriptors: Foreign Countries, International Assessment, Achievement Tests, Secondary School Students
Moon, Jung Aa; Sinharay, Sandip; Keehner, Madeleine; Katz, Irvin R. – International Journal of Testing, 2020
The current study examined the relationship between test-taker cognition and psychometric item properties in multiple-selection multiple-choice and grid items. In a study with content-equivalent mathematics items in alternative item formats, adult participants' tendency to respond to an item was affected by the presence of a grid and variations of…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Test Wiseness, Psychometrics
Luo, Xiao; Wang, Xinrui – International Journal of Testing, 2019
This study introduced dynamic multistage testing (dy-MST) as an improvement to existing adaptive testing methods. dy-MST combines the advantages of computerized adaptive testing (CAT) and computerized adaptive multistage testing (ca-MST) to create a highly efficient and regulated adaptive testing method. In the test construction phase, multistage…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Construction, Psychometrics
Eckes, Thomas; Jin, Kuan-Yu – International Journal of Testing, 2021
Severity and centrality are two main kinds of rater effects posing threats to the validity and fairness of performance assessments. Adopting Jin and Wang's (2018) extended facets modeling approach, we separately estimated the magnitude of rater severity and centrality effects in the web-based TestDaF (Test of German as a Foreign Language) writing…
Descriptors: Language Tests, German, Second Languages, Writing Tests
Cui, Ying; Guo, Qi; Leighton, Jacqueline P.; Chu, Man-Wai – International Journal of Testing, 2020
This study explores the use of the Adaptive Neuro-Fuzzy Inference System (ANFIS), a neuro-fuzzy approach, to analyze the log data of technology-based assessments to extract relevant features of student problem-solving processes, and develop and refine a set of fuzzy logic rules that could be used to interpret student performance. The log data that…
Descriptors: Inferences, Artificial Intelligence, Data Analysis, Computer Assisted Testing
Aksu Dunya, Beyza – International Journal of Testing, 2018
This study was conducted to analyze potential item parameter drift (IPD) impact on person ability estimates and classification accuracy when drift affects an examinee subgroup. Using a series of simulations, three factors were manipulated: (a) percentage of IPD items in the CAT exam, (b) percentage of examinees affected by IPD, and (c) item pool…
Descriptors: Adaptive Testing, Classification, Accuracy, Computer Assisted Testing
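One way to set up the kind of simulation the abstract describes is to shift the difficulty of a chosen percentage of pool items by a fixed drift amount, as experienced by the affected examinee subgroup. The pool, drift size, and percentages below are hypothetical, not the study's manipulated conditions.

```python
import random

def apply_ipd(pool, drift_pct, drift_size=0.5, seed=42):
    """Return a drifted copy of a difficulty pool: a randomly chosen
    drift_pct share of items is shifted by drift_size, simulating item
    parameter drift as seen by an affected examinee subgroup."""
    rng = random.Random(seed)
    n_drift = round(len(pool) * drift_pct)
    drifted = set(rng.sample(range(len(pool)), n_drift))
    return [b + drift_size if i in drifted else b for i, b in enumerate(pool)]

base = [0.0] * 10                        # hypothetical pool of Rasch difficulties
shifted = apply_ipd(base, drift_pct=0.2)
print(sum(shifted))                      # 2 of 10 items drifted by 0.5, total shift 1.0
```

In a full simulation, ability estimates for the affected subgroup would then be computed against the drifted parameters while scoring still uses the original ones, which is what produces the estimation bias and classification errors the study measures.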
Wind, Stefanie A.; Wolfe, Edward W.; Engelhard, George, Jr.; Foltz, Peter; Rosenstein, Mark – International Journal of Testing, 2018
Automated essay scoring engines (AESEs) are becoming increasingly popular as an efficient method for performance assessments in writing, including many language assessments that are used worldwide. Before they can be used operationally, AESEs must be "trained" using machine-learning techniques that incorporate human ratings. However, the…
Descriptors: Computer Assisted Testing, Essay Tests, Writing Evaluation, Scoring
Zlatkin-Troitschanskaia, Olga; Kuhn, Christiane; Brückner, Sebastian; Leighton, Jacqueline P. – International Journal of Testing, 2019
Teaching performance can be assessed validly only if the assessment involves an appropriate, authentic representation of real-life teaching practices. Different skills interact in coordinating teachers' actions in different classroom situations. Based on the evidence-centered design model, we developed a technology-based assessment framework that…
Descriptors: Computer Assisted Testing, Teacher Effectiveness, Teaching Skills, Reflection
Evers, Arne; McCormick, Carina M.; Hawley, Leslie R.; Muñiz, José; Balboni, Giulia; Bartram, Dave; Boben, Dusica; Egeland, Jens; El-Hassan, Karma; Fernández-Hermida, José R.; Fine, Saul; Frans, Örjan; Gintiliené, Grazina; Hagemeister, Carmen; Halama, Peter; Iliescu, Dragos; Jaworowska, Aleksandra; Jiménez, Paul; Manthouli, Marina; Matesic, Krunoslav; Michaelsen, Lars; Mogaji, Andrew; Morley-Kirk, James; Rózsa, Sándor; Rowlands, Lorraine; Schittekatte, Mark; Sümer, H. Canan; Suwartono, Tono; Urbánek, Tomáš; Wechsler, Solange; Zelenevska, Tamara; Zanev, Svetoslav; Zhang, Jianxin – International Journal of Testing, 2017
On behalf of the International Test Commission and the European Federation of Psychologists' Associations a world-wide survey on the opinions of professional psychologists on testing practices was carried out. The main objective of this study was to collect data for a better understanding of the state of psychological testing worldwide. These data…
Descriptors: Testing, Attitudes, Surveys, Psychologists
Lee, Yi-Hsuan; Haberman, Shelby J. – International Journal of Testing, 2016
The use of computer-based assessments makes the collection of detailed data that capture examinees' progress in the tests and time spent on individual actions possible. This article presents a study using process and timing data to aid understanding of an international language assessment and the examinees. Issues regarding test-taking strategies,…
Descriptors: Computer Assisted Testing, Test Wiseness, Language Tests, International Assessment
Wei, Hua; Lin, Jie – International Journal of Testing, 2015
Out-of-level testing refers to the practice of assessing a student with a test that is intended for students at a higher or lower grade level. Although the appropriateness of out-of-level testing for accountability purposes has been questioned by educators and policymakers, incorporating out-of-level items in formative assessments for accurate…
Descriptors: Test Items, Computer Assisted Testing, Adaptive Testing, Instructional Program Divisions