Showing all 7 results
Peer reviewed
Shin, Jinnie; Gierl, Mark J. – International Journal of Testing, 2022
Over the last five years, tremendous strides have been made in advancing the AIG methodology required to produce items in diverse content areas. However, the one content area where enormous problems remain unsolved is language arts, generally, and reading comprehension, more specifically. While reading comprehension test items can be created using…
Descriptors: Reading Comprehension, Test Construction, Test Items, Natural Language Processing
Peer reviewed
Magraw-Mickelson, Zoe; Wang, Harry H.; Gollwitzer, Mario – International Journal of Testing, 2022
Much psychological research depends on participants' diligence in filling out materials such as surveys. However, not all participants are motivated to respond attentively, which leads to unintended issues with data quality, known as careless responding. Our question is: how do different modes of data collection--paper/pencil, computer/web-based,…
Descriptors: Response Style (Tests), Surveys, Data Collection, Test Format
Peer reviewed
Moon, Jung Aa; Sinharay, Sandip; Keehner, Madeleine; Katz, Irvin R. – International Journal of Testing, 2020
The current study examined the relationship between test-taker cognition and psychometric item properties in multiple-selection multiple-choice and grid items. In a study with content-equivalent mathematics items in alternative item formats, adult participants' tendency to respond to an item was affected by the presence of a grid and variations of…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Test Wiseness, Psychometrics
Peer reviewed
Talento-Miller, Eileen; Guo, Fanmin; Han, Kyung T. – International Journal of Testing, 2013
When power tests include a time limit, it is important to assess the possibility of speededness for examinees. Past research on differential speededness has examined gender and ethnic subgroups in the United States on paper and pencil tests. When considering the needs of a global audience, research regarding different native language speakers is…
Descriptors: Adaptive Testing, Computer Assisted Testing, English, Scores
Peer reviewed
Papanastasiou, Elena C.; Reckase, Mark D. – International Journal of Testing, 2007
Because of the increased popularity of computerized adaptive testing (CAT), many admissions tests, as well as certification and licensure examinations, have been transformed from their paper-and-pencil versions to computerized adaptive versions. A major difference between paper-and-pencil tests and CAT from an examinee's point of view is that in…
Descriptors: Simulation, Adaptive Testing, Computer Assisted Testing, Test Items
Peer reviewed
Foxcroft, Cheryl D.; Davies, Caroline – International Journal of Testing, 2006
The increased use of computer-based and Internet-delivered testing has raised a number of ethical and legal issues. The International Test Commission's (this issue) Guidelines for Computer-Based and Internet-Delivered Testing represent the most recent attempt to provide test users, publishers, and developers with guidance regarding the appropriate…
Descriptors: Ownership, Guidelines, Internet, Testing
Peer reviewed
Sireci, Stephen G.; Harter, James; Yang, Yongwei; Bhola, Dennison – International Journal of Testing, 2003
Evaluated the structural equivalence and differential item functioning of an employee attitude survey from a large international corporation across three languages, eight cultures, and two mediums of administration. Results for 40,595 employees show the structure of survey data was consistent and items functioned similarly across all groups. (SLD)
Descriptors: Attitude Measures, Computer Assisted Testing, Cross Cultural Studies, Employees