Shin, Jinnie; Gierl, Mark J. – International Journal of Testing, 2022
Over the last five years, tremendous strides have been made in advancing the AIG methodology required to produce items in diverse content areas. However, the one content area where enormous problems remain unsolved is language arts, generally, and reading comprehension, more specifically. While reading comprehension test items can be created using…
Descriptors: Reading Comprehension, Test Construction, Test Items, Natural Language Processing
Magraw-Mickelson, Zoe; Wang, Harry H.; Gollwitzer, Mario – International Journal of Testing, 2022
Much psychological research depends on participants' diligence in filling out materials such as surveys. However, not all participants are motivated to respond attentively, which leads to unintended issues with data quality, known as careless responding. Our question is: how do different modes of data collection (paper/pencil, computer/web-based,…
Descriptors: Response Style (Tests), Surveys, Data Collection, Test Format
Moon, Jung Aa; Sinharay, Sandip; Keehner, Madeleine; Katz, Irvin R. – International Journal of Testing, 2020
The current study examined the relationship between test-taker cognition and psychometric item properties in multiple-selection multiple-choice and grid items. In a study with content-equivalent mathematics items in alternative item formats, adult participants' tendency to respond to an item was affected by the presence of a grid and variations of…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Test Wiseness, Psychometrics
FIPC Linking across Multidimensional Test Forms: Effects of Confounding Difficulty within Dimensions
Kim, Sohee; Cole, Ki Lynn; Mwavita, Mwarumba – International Journal of Testing, 2018
This study investigated the effects of linking potentially multidimensional test forms using the fixed item parameter calibration. Forms had equal or unequal total test difficulty with and without confounding difficulty. The mean square errors and bias of estimated item and ability parameters were compared across the various confounding tests. The…
Descriptors: Test Items, Item Response Theory, Test Format, Difficulty Level
Talento-Miller, Eileen; Guo, Fanmin; Han, Kyung T. – International Journal of Testing, 2013
When power tests include a time limit, it is important to assess the possibility of speededness for examinees. Past research on differential speededness has examined gender and ethnic subgroups in the United States on paper and pencil tests. When considering the needs of a global audience, research regarding different native language speakers is…
Descriptors: Adaptive Testing, Computer Assisted Testing, English, Scores
Baghaei, Purya; Aryadoust, Vahid – International Journal of Testing, 2015
Research shows that test method can exert a significant impact on test takers' performance and thereby contaminate test scores. We argue that common test method can exert the same effect as common stimuli and violate the conditional independence assumption of item response theory models because, in general, subsets of items which have a shared…
Descriptors: Test Format, Item Response Theory, Models, Test Items
Papanastasiou, Elena C.; Reckase, Mark D. – International Journal of Testing, 2007
Because of the increased popularity of computerized adaptive testing (CAT), many admissions tests, as well as certification and licensure examinations, have been transformed from their paper-and-pencil versions to computerized adaptive versions. A major difference between paper-and-pencil tests and CAT from an examinee's point of view is that in…
Descriptors: Simulation, Adaptive Testing, Computer Assisted Testing, Test Items
Foxcroft, Cheryl D.; Davies, Caroline – International Journal of Testing, 2006
The increased use of computer-based and Internet-delivered testing has raised a number of ethical and legal issues. The International Test Commission's (this issue) Guidelines for Computer-Based and Internet-Delivered Testing represent the most recent attempt to provide test users, publishers, and developers with guidance regarding the appropriate…
Descriptors: Ownership, Guidelines, Internet, Testing

Sireci, Stephen G.; Harter, James; Yang, Yongwei; Bhola, Dennison – International Journal of Testing, 2003
This study evaluated the structural equivalence and differential item functioning of an employee attitude survey from a large international corporation across three languages, eight cultures, and two administration media. Results for 40,595 employees showed that the structure of the survey data was consistent and that items functioned similarly across all groups. (SLD)
Descriptors: Attitude Measures, Computer Assisted Testing, Cross Cultural Studies, Employees