Showing 1 to 15 of 32 results
Peer reviewed
Wise, Steven L.; Soland, James; Dupray, Laurence M. – Journal of Applied Testing Technology, 2021
Technology-Enhanced Items (TEIs) have been purported to be more motivating and engaging to test takers than traditional multiple-choice items. The claim of enhanced engagement, however, has thus far received limited research attention. This study examined the rates of rapid-guessing behavior received by three types of items (multiple-choice,…
Descriptors: Test Items, Guessing (Tests), Multiple Choice Tests, Achievement Tests
Peer reviewed
Wise, Steven L.; Soland, James; Bo, Yuanchao – International Journal of Testing, 2020
Disengaged test taking tends to be most prevalent with low-stakes tests. This has led to questions about the validity of aggregated scores from large-scale international assessments such as PISA and TIMSS, as previous research has found a meaningful correlation between the mean engagement and mean performance of countries. The current study, using…
Descriptors: Foreign Countries, International Assessment, Achievement Tests, Secondary School Students
Peer reviewed
Wise, Steven L. – Education Inquiry, 2019
A decision of whether to move from paper-and-pencil to computer-based tests is based largely on a careful weighing of the potential benefits of a change against its costs, disadvantages, and challenges. This paper briefly discusses the trade-offs involved in making such a transition, and then focuses on a relatively unexplored benefit of…
Descriptors: Computer Assisted Testing, Cheating, Test Wiseness, Scores
Peer reviewed
Wise, Steven L.; Kuhfeld, Megan R.; Cronin, John – Educational Assessment, 2022
The arrival of the COVID-19 pandemic had a profound effect on K-12 education. Most schools transitioned to remote instruction, and some used remote testing to assess student learning. Remote testing, however, is less controlled than in-school testing, leading to concerns regarding test-taking engagement. This study compared the disengagement of…
Descriptors: Computer Assisted Testing, COVID-19, Pandemics, Learner Engagement
Peer reviewed
Wise, Steven L.; Kuhfeld, Megan R.; Soland, James – Applied Measurement in Education, 2019
When we administer educational achievement tests, we want to be confident that the resulting scores validly indicate what the test takers know and can do. However, if the test is perceived as low stakes by the test taker, disengaged test taking sometimes occurs, which poses a serious threat to score validity. When computer-based tests are used,…
Descriptors: Guessing (Tests), Computer Assisted Testing, Achievement Tests, Scores
Peer reviewed
Wise, Steven L.; Gao, Lingyun – Applied Measurement in Education, 2017
There has been an increased interest in the impact of unmotivated test taking on test performance and score validity. This has led to the development of new ways of measuring test-taking effort based on item response time. In particular, Response Time Effort (RTE) has been shown to provide an assessment of effort down to the level of individual…
Descriptors: Test Bias, Computer Assisted Testing, Item Response Theory, Achievement Tests
Peer reviewed
Wise, Steven L.; Kingsbury, G. Gage; Webb, Norman L. – Educational Measurement: Issues and Practice, 2015
The alignment between a test and the content domain it measures represents key evidence for the validation of test score inferences. Although procedures have been developed for evaluating the content alignment of linear tests, these procedures are not readily applicable to computerized adaptive tests (CATs), which require large item pools and do…
Descriptors: Computer Assisted Testing, Adaptive Testing, Alignment (Education), Test Content
Peer reviewed
Wise, Steven L. – Educational Measurement: Issues and Practice, 2017
The rise of computer-based testing has brought with it the capability to measure more aspects of a test event than simply the answers selected or constructed by the test taker. One behavior that has drawn much research interest is the time test takers spend responding to individual multiple-choice items. In particular, very short response…
Descriptors: Guessing (Tests), Multiple Choice Tests, Test Items, Reaction Time
Peer reviewed
Wise, Steven L. – Measurement: Interdisciplinary Research and Perspectives, 2015
The growing presence of computer-based testing has brought with it the capability to routinely capture the time that test takers spend on individual test items. This, in turn, has led to an increased interest in potential applications of response time in measuring intellectual ability and achievement. Goldhammer (this issue) provides a very useful…
Descriptors: Reaction Time, Measurement, Computer Assisted Testing, Achievement Tests
Peer reviewed
Wise, Steven L.; Pastor, Dena A.; Kong, Xiaojing J. – Applied Measurement in Education, 2009
Previous research has shown that rapid-guessing behavior can degrade the validity of test scores from low-stakes proficiency tests. This study examined, using hierarchical generalized linear modeling, examinee and item characteristics for predicting rapid-guessing behavior. Several item characteristics were found significant; items with more text…
Descriptors: Guessing (Tests), Achievement Tests, Correlation, Test Items
Wise, Steven L. – 1999
Outside of large-scale testing programs, the computerized adaptive test (CAT) has thus far had only limited impact on measurement practice. In smaller-scale testing contexts, limited data are often available, which precludes the establishment of calibrated item pools for use by traditional (i.e., item response theory [IRT] based) CATs. This paper…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Response Theory, Scores
Peer reviewed
Wise, Steven L.; And Others – Applied Measurement in Education, 1994
The hypothesis that previously found effects of self-adapted testing (SAT) are attributable to examinees' having an increased perception of control over a stressful testing situation was studied with 377 college students who took computerized adaptive tests or SAT. The strongest preference for SAT was seen in individuals with the highest…
Descriptors: Adaptive Testing, College Students, Computer Assisted Testing, Higher Education
Wise, Steven L.; Kong, Xiaojing – Online Submission, 2005
When low-stakes assessments are administered to examinees, the degree to which examinees give their best effort is often unclear, complicating the validity and interpretation of the resulting test scores. This study introduces a new method for measuring examinee test-taking effort on computer-based test items based on item response time. This…
Descriptors: Computer Assisted Testing, Reaction Time, Response Style (Tests), Measurement Techniques
Peer reviewed
Wise, Steven L.; Finney, Sara J.; Enders, Craig K.; Freeman, Sharon A.; Severance, Donald D. – Applied Measurement in Education, 1999
Examined whether providing item review on a computerized adaptive test could be used by examinees to inflate their scores. Two studies involving 139 undergraduates suggest that examinees are not highly proficient at discriminating item difficulty. A simulation study showed the usefulness of a strategy identified by G. Kingsbury (1996) as a way to…
Descriptors: Adaptive Testing, Computer Assisted Testing, Difficulty Level, Higher Education
Peer reviewed
Wise, Steven L.; Kong, Xiaojing – Applied Measurement in Education, 2005
When low-stakes assessments are administered, the degree to which examinees give their best effort is often unclear, complicating the validity and interpretation of the resulting test scores. This study introduces a new method, based on item response time, for measuring examinee test-taking effort on computer-based test items. This measure, termed…
Descriptors: Psychometrics, Validity, Reaction Time, Test Items