Showing all 6 results
Peer reviewed
Wolkowitz, Amanda A.; Foley, Brett P.; Zurn, Jared – Journal of Applied Testing Technology, 2021
As assessments move from traditional paper-pencil administration to computer-based administration, many testing programs are incorporating alternative item types (AITs) into assessments with the goals of measuring higher-order thinking, offering insight into problem-solving, and representing authentic real-world tasks. This paper explores multiple…
Descriptors: Psychometrics, Alternative Assessment, Computer Assisted Testing, Test Items
Peer reviewed
Makransky, Guido; Glas, Cees A. W. – Journal of Applied Testing Technology, 2010
An accurately calibrated item bank is essential for a valid computerized adaptive test. However, in some settings, such as occupational testing, there is limited access to test takers for calibration. As a result of the limited access to possible test takers, collecting data to accurately calibrate an item bank in an occupational setting is…
Descriptors: Foreign Countries, Simulation, Adaptive Testing, Computer Assisted Testing
Peer reviewed
Laitusis, Cara Cahalan; Maneckshana, Behroz; Monfils, Lora; Ahlgrim-Delzell, Lynn – Journal of Applied Testing Technology, 2009
The purpose of this study was to examine Differential Item Functioning (DIF) by disability groups on an on-demand performance assessment for students with severe cognitive impairments. Researchers examined the presence of DIF for two comparisons. One comparison involved students with severe cognitive impairments who served as the reference group…
Descriptors: Test Bias, Test Items, Autism, Performance Based Assessment
Peer reviewed
Lissitz, Robert W.; Hou, Xiaodong; Slater, Sharon Cadman – Journal of Applied Testing Technology, 2012
This article investigates several questions regarding the impact of different item formats on measurement characteristics. Constructed response (CR) items and multiple choice (MC) items obviously differ in their formats and in the resources needed to score them. As such, they have been the subject of considerable discussion regarding the impact of…
Descriptors: Computer Assisted Testing, Scoring, Evaluation Problems, Psychometrics
Peer reviewed
Russell, Michael; Famularo, Lisa – Journal of Applied Testing Technology, 2008
Student assessment is an integral component of classroom instruction. Assessment is intended to help teachers identify what students are able to do and what content and skills students must develop further. State tests play an important role in guiding instruction. However, for some students, the tests may lead to inaccurate conclusions about…
Descriptors: Student Evaluation, Evaluation Research, Questionnaires, Mail Surveys
Peer reviewed
Moen, Ross; Liu, Kristi; Thurlow, Martha; Lekwa, Adam; Scullin, Sarah; Hausmann, Kristin – Journal of Applied Testing Technology, 2009
Some students are less accurately measured by typical reading tests than other students. By asking teachers to identify students whose performance on state reading tests would likely underestimate their reading skills, this study sought to learn about characteristics of less accurately measured students while also evaluating how well teachers can…
Descriptors: Reading Tests, Academic Achievement, Interviews, Program Effectiveness