Showing 1 to 15 of 48 results
Peer reviewed
Gregory M. Hurtz; Regi Mucino – Journal of Educational Measurement, 2024
The Lognormal Response Time (LNRT) model measures the speed of test-takers relative to the normative time demands of items on a test. The resulting speed parameters and model residuals are often analyzed for evidence of anomalous test-taking behavior associated with fast and poorly fitting response time patterns. Extending this model, we…
Descriptors: Student Reaction, Reaction Time, Response Style (Tests), Test Items
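For reference, the baseline lognormal response time model that Hurtz and Mucino extend is standardly written as follows, with tau_i the speed of test taker i and beta_j, alpha_j the time intensity and time discrimination of item j (the 2024 extension itself is not shown in the truncated abstract):

    \ln T_{ij} = \beta_j - \tau_i + \varepsilon_{ij},
    \qquad \varepsilon_{ij} \sim N\!\big(0,\ \alpha_j^{-2}\big)

The standardized residuals z_{ij} = \alpha_j\big(\ln t_{ij} - (\beta_j - \tau_i)\big) are the model residuals typically screened for the fast, poorly fitting response time patterns the abstract mentions.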
Peer reviewed
Sijia Huang; Seungwon Chung; Carl F. Falk – Journal of Educational Measurement, 2024
In this study, we introduced a cross-classified multidimensional nominal response model (CC-MNRM) to account for various response styles (RS) in the presence of cross-classified data. The proposed model allows slopes to vary across items and can explore impacts of observed covariates on latent constructs. We applied a recently developed variant of…
Descriptors: Response Style (Tests), Classification, Data, Models
Peer reviewed
van der Linden, Wim J.; Belov, Dmitry I. – Journal of Educational Measurement, 2023
A test of item compromise is presented which combines the test takers' responses and response times (RTs) into a statistic defined as the number of correct responses on the item for test takers with RTs flagged as suspicious. The test has null and alternative distributions belonging to the well-known family of compound binomial distributions, is…
Descriptors: Item Response Theory, Reaction Time, Test Items, Item Analysis
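The statistic described above is simple to compute once RTs have been flagged; a minimal Python sketch, with all array and function names hypothetical. Note that the article's null and alternative distributions are compound binomial, which generalizes the ordinary binomial by allowing person-specific probabilities of a correct response; the common-p binomial tail below is only a simplifying placeholder:

    import numpy as np
    from scipy.stats import binom

    def compromise_statistic(correct, flagged):
        # Number of correct responses on the item among test takers
        # whose RTs were flagged as suspicious.
        correct = np.asarray(correct, dtype=bool)
        flagged = np.asarray(flagged, dtype=bool)
        return int((correct & flagged).sum())

    def binomial_p_value(stat, n_flagged, p):
        # Placeholder null assuming a common probability p of a correct
        # response for all flagged test takers; the article's compound
        # binomial null replaces p with person-specific p_i.
        return binom.sf(stat - 1, n_flagged, p)  # P(X >= stat)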
Peer reviewed
Henninger, Mirka – Journal of Educational Measurement, 2021
Item Response Theory models with varying thresholds are essential tools to account for unknown types of response tendencies in rating data. However, in order to separate constructs to be measured and response tendencies, specific constraints have to be imposed on varying thresholds and their interrelations. In this article, a multidimensional…
Descriptors: Response Style (Tests), Item Response Theory, Models, Computation
Peer reviewed
Wise, Steven L.; Kuhfeld, Megan R. – Journal of Educational Measurement, 2021
There has been a growing research interest in the identification and management of disengaged test taking, which poses a validity threat that is particularly prevalent with low-stakes tests. This study investigated effort-moderated (E-M) scoring, in which item responses classified as rapid guesses are identified and excluded from scoring. Using…
Descriptors: Scoring, Data Use, Response Style (Tests), Guessing (Tests)
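The E-M scoring idea is easy to sketch: responses whose RTs fall below an item-specific rapid-guessing threshold are classified as rapid guesses and excluded before scoring. A minimal proportion-correct version in Python (Wise and Kuhfeld embed the same filter in IRT scoring; the names and threshold values here are hypothetical):

    import numpy as np

    def effort_moderated_score(responses, rts, thresholds):
        # responses: 0/1 correctness; rts: response times in seconds;
        # thresholds: item-level rapid-guessing thresholds.
        responses, rts, thresholds = map(np.asarray, (responses, rts, thresholds))
        effortful = rts >= thresholds      # rapid guesses fall below the threshold
        if not effortful.any():
            return float("nan")            # no effortful responses left to score
        return float(responses[effortful].mean())

For example, with a 5-second threshold on every item, effort_moderated_score([1, 0, 1, 1], [12.4, 2.1, 8.9, 15.0], [5, 5, 5, 5]) drops the 2.1-second response as a rapid guess and scores the remaining three.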
Peer reviewed
Hong, Maxwell; Rebouças, Daniella A.; Cheng, Ying – Journal of Educational Measurement, 2021
Response time has started to play an increasingly important role in educational and psychological testing, which has prompted the proposal of many response time models in recent years. However, response time modeling can be adversely impacted by aberrant response behavior. For example, test speededness can cause response times to certain items to deviate…
Descriptors: Reaction Time, Models, Computation, Robustness (Statistics)
Peer reviewed
Okan Bulut; Guher Gorgun; Hacer Karamese – Journal of Educational Measurement, 2025
The use of multistage adaptive testing (MST) has gradually increased in large-scale testing programs as MST achieves a balanced compromise between linear test design and item-level adaptive testing. MST works on the premise that each examinee gives their best effort when attempting the items, and their responses truly reflect what they know or can…
Descriptors: Response Style (Tests), Testing Problems, Testing Accommodations, Measurement
Peer reviewed
Belov, Dmitry I. – Journal of Educational Measurement, 2015
The statistical analysis of answer changes (ACs) has uncovered multiple testing irregularities on large-scale assessments and is now routinely performed at testing organizations. However, AC data are subject to uncertainty caused by technological or human factors. Therefore, existing statistics (e.g., the number of wrong-to-right ACs) used to detect examinees…
Descriptors: Statistical Analysis, Robustness (Statistics), Identification, Test Items
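The wrong-to-right count the abstract cites is easy to state precisely; a minimal sketch with hypothetical array names, comparing one test taker's initial and final answers against the key:

    import numpy as np

    def wrong_to_right_count(initial, final, key):
        # Counts answer changes moving from an incorrect initial response
        # to the keyed (correct) final response for one test taker.
        initial, final, key = map(np.asarray, (initial, final, key))
        changed = initial != final
        return int((changed & (final == key)).sum())

Belov's point is that the recorded ACs themselves carry uncertainty (e.g., faint erasure marks or logging noise), so detection statistics built on counts like this need to be robust to that uncertainty.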
Peer reviewed
Debeer, Dries; Janssen, Rianne; De Boeck, Paul – Journal of Educational Measurement, 2017
When dealing with missing responses, two types of omissions can be discerned: items can be skipped or not reached by the test taker. When the occurrence of these omissions is related to the proficiency process, the missingness is nonignorable. The purpose of this article is to present a tree-based IRT framework for modeling responses and omissions…
Descriptors: Item Response Theory, Test Items, Responses, Testing Problems
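In the generic IRTree setup the abstract builds on, each observed outcome decomposes into sequential binary nodes, each with its own IRT model. A schematic two-node version for skipped items (the article's full framework also handles not-reached omissions):

    \Pr(Y_{pi} = \text{omitted}) = 1 - \pi^{(1)}_{pi}, \qquad
    \Pr(Y_{pi} = \text{correct}) = \pi^{(1)}_{pi}\,\pi^{(2)}_{pi},
    \qquad
    \pi^{(k)}_{pi} = \operatorname{logit}^{-1}\!\big(\theta^{(k)}_p - b^{(k)}_i\big)

Node 1 governs whether person p attempts item i and node 2 governs correctness given an attempt; nonignorable missingness enters through the correlation between \theta^{(1)} and \theta^{(2)}.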
Peer reviewed
Huang, Hung-Yu; Wang, Wen-Chung – Journal of Educational Measurement, 2014
The DINA (deterministic input, noisy "and" gate) model has been widely used in cognitive diagnosis tests and in the process of test development. The outcomes known as slips and guesses are captured by parameters included in the DINA model function representing the responses to the items. This study aimed to extend the DINA model by using the random-effect approach to allow…
Descriptors: Models, Guessing (Tests), Probability, Ability
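The baseline DINA item response function being extended is standard; with s_j and g_j the slip and guessing parameters, \alpha_{ik} the mastery indicator for attribute k, and q_{jk} the Q-matrix entry (the random-effect extension itself is not shown in the truncated abstract):

    P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i)
      = (1 - s_j)^{\eta_{ij}}\, g_j^{\,1 - \eta_{ij}},
    \qquad
    \eta_{ij} = \prod_{k} \alpha_{ik}^{\,q_{jk}}

Here \eta_{ij} = 1 only when test taker i has mastered every attribute item j requires, so 1 - s_j is the probability of a correct response for masters and g_j for non-masters.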
Peer reviewed
Jin, Kuan-Yu; Wang, Wen-Chung – Journal of Educational Measurement, 2014
Sometimes, test-takers may not be able to attempt all items to the best of their ability (with full effort) due to personal factors (e.g., low motivation) or testing conditions (e.g., time limit), resulting in poor performance on certain items, especially those located toward the end of a test. Standard item response theory (IRT) models fail to…
Descriptors: Student Evaluation, Item Response Theory, Models, Simulation
Peer reviewed
Suh, Youngsuk; Cho, Sun-Joo; Wollack, James A. – Journal of Educational Measurement, 2012
In the presence of test speededness, the parameters of item response theory models can be poorly estimated due to conditional dependencies among items, particularly for end-of-test items (i.e., speeded items). This article conducted a systematic comparison of five item calibration procedures: a two-parameter logistic (2PL) model, a…
Descriptors: Response Style (Tests), Timed Tests, Test Items, Item Response Theory
Peer reviewed
Masters, James R. – Journal of Educational Measurement, 1974
Descriptors: Attitudes, Questionnaires, Rating Scales, Response Style (Tests)
Peer reviewed
Sirotnik, Ken; Wellington, Roger J. – Journal of Educational Measurement, 1974
Descriptors: Achievement Tests, Cognitive Tests, Content Analysis, Item Sampling
Peer reviewed
Tillman, Murray H. – Journal of Educational Measurement, 1974
Two testing packets, Formative Exercises T-TE-15A and T-TE-15B, are reviewed. The Exercises are based on Bloom's concept of learning for mastery and are designed to acquaint teachers with the principles of mastery learning and to provide examples of formative evaluation. One form of the exercises provides instant feedback to the examinee; the other,…
Descriptors: Feedback, Formative Evaluation, Mastery Tests, Multiple Choice Tests