| Publication Date | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 1 |
| Since 2017 (last 10 years) | 9 |
| Since 2007 (last 20 years) | 17 |
| Descriptor | Count |
| --- | --- |
| Reaction Time | 18 |
| Statistical Analysis | 18 |
| Test Items | 18 |
| Item Response Theory | 7 |
| Computer Assisted Testing | 5 |
| Foreign Countries | 5 |
| Correlation | 4 |
| Models | 4 |
| Scores | 4 |
| Cheating | 3 |
| Comparative Analysis | 3 |
| Author | Count |
| --- | --- |
| Chang, Hua-Hua | 2 |
| Sinharay, Sandip | 2 |
| Ali, Usama S. | 1 |
| Alpayar, Cagla | 1 |
| Attali, Yigal | 1 |
| Bezirhan, Ummugul | 1 |
| Chambers, Dianne | 1 |
| Courrieu, Pierre | 1 |
| Demirkaya, Onur | 1 |
| Douglas, Jeffrey A. | 1 |
| Fan, Zhewen | 1 |
| Publication Type | Count |
| --- | --- |
| Journal Articles | 16 |
| Reports - Research | 16 |
| Reports - Evaluative | 2 |
| Education Level | Count |
| --- | --- |
| Higher Education | 3 |
| Postsecondary Education | 3 |
| Middle Schools | 2 |
| Elementary Education | 1 |
| Grade 3 | 1 |
| Grade 8 | 1 |
| Junior High Schools | 1 |
| Secondary Education | 1 |
| Location | Count |
| --- | --- |
| Australia | 2 |
| Ecuador | 1 |
| Hungary | 1 |
| Kazakhstan | 1 |
| Mexico | 1 |
| Netherlands (Amsterdam) | 1 |
| Peru | 1 |
| Turkey (Ankara) | 1 |
| United Kingdom | 1 |
| United States | 1 |
| Assessments and Surveys | Count |
| --- | --- |
| Program for the International… | 1 |
| United States Medical… | 1 |
Demirkaya, Onur; Bezirhan, Ummugul; Zhang, Jinming – Journal of Educational and Behavioral Statistics, 2023
Examinees with item preknowledge tend to obtain inflated test scores that undermine test score validity. With the availability of process data collected in computer-based assessments, research on detecting item preknowledge has progressed toward using both item scores and response times. Item revisit patterns of examinees can also be utilized as…
Descriptors: Test Items, Prior Learning, Knowledge Level, Reaction Time
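A minimal sketch of the general idea behind combining item scores, response times, and revisit behavior into a per-examinee screening index; the simulated data, the 5-item "compromised" set, and the composite score are illustrative assumptions, not the detection method proposed by the authors.

```python
import numpy as np

rng = np.random.default_rng(0)

n_examinees, n_items = 200, 30
# Simulated process data (illustrative only): correctness, log response
# times, and number of revisits per examinee-item pair.
correct = rng.integers(0, 2, size=(n_examinees, n_items))
log_rt = rng.normal(loc=4.0, scale=0.5, size=(n_examinees, n_items))
revisits = rng.poisson(lam=0.3, size=(n_examinees, n_items))

# Suppose the first 5 items are suspected of being compromised (hypothetical).
compromised = np.zeros(n_items, dtype=bool)
compromised[:5] = True

# Crude screening index: examinees who are unusually accurate, unusually fast,
# and revisit little on the compromised subset score high.
z_acc = (correct[:, compromised].mean(axis=1) - correct.mean()) / correct.std()
z_speed = (log_rt[:, compromised].mean(axis=1) - log_rt.mean()) / log_rt.std()
z_revisit = (revisits[:, compromised].mean(axis=1) - revisits.mean()) / revisits.std()

flag_score = z_acc - z_speed - z_revisit   # higher = more suspicious
print("Most suspicious examinees:", np.argsort(flag_score)[-5:])
```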
Kuijpers, Renske E.; Visser, Ingmar; Molenaar, Dylan – Journal of Educational and Behavioral Statistics, 2021
Mixture models have been developed to enable detection of within-subject differences in responses and response times to psychometric test items. To enable mixture modeling of both responses and response times, a distributional assumption is needed for the within-state response time distribution. Since violations of the assumed response time…
Descriptors: Test Items, Responses, Reaction Time, Models
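A lognormal within-state distribution is one common assumption for response times; fitting a two-component Gaussian mixture to log response times is equivalent to fitting a lognormal mixture. The sketch below illustrates that generic idea only, under simulated data, and is not the authors' model or robustness analysis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Simulated response times from two latent states (e.g., rapid guessing
# vs. solution behavior); parameter values are illustrative.
fast = rng.lognormal(mean=1.0, sigma=0.3, size=300)
slow = rng.lognormal(mean=3.0, sigma=0.5, size=700)
rt = np.concatenate([fast, slow])

# A 2-component Gaussian mixture on log(RT) assumes a lognormal
# response time distribution within each state.
gm = GaussianMixture(n_components=2, random_state=0).fit(np.log(rt).reshape(-1, 1))
print("state means (log seconds):", gm.means_.ravel())
print("state proportions:", gm.weights_)
```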
Man, Kaiwen; Harring, Jeffrey R. – Educational and Psychological Measurement, 2021
Many approaches have been proposed to jointly analyze item responses and response times to understand behavioral differences between normally and aberrantly behaved test-takers. Biometric information, such as data from eye trackers, can be used to better identify these deviant testing behaviors in addition to more conventional data types. Given…
Descriptors: Cheating, Item Response Theory, Reaction Time, Eye Movements
Sinharay, Sandip – Grantee Submission, 2019
Benefiting from item preknowledge (e.g., McLeod, Lewis, & Thissen, 2003) is a major type of fraudulent behavior during educational assessments. This paper suggests a new statistic that can be used for detecting the examinees who may have benefited from item preknowledge using their response times. The statistic quantifies the difference in…
Descriptors: Test Items, Cheating, Reaction Time, Identification
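The snippet does not give the statistic's exact form. As a rough illustration of the underlying idea, one can compare an examinee's response times on allegedly compromised items with those on the remaining items, for example via a standardized difference of mean log response times; the function below is that illustration, not the statistic proposed in the paper.

```python
import numpy as np

def rt_contrast(log_rt, compromised):
    """Standardized difference in mean log response time between
    allegedly compromised and remaining items for one examinee.
    Illustrative only; not the paper's statistic."""
    log_rt = np.asarray(log_rt, dtype=float)
    compromised = np.asarray(compromised, dtype=bool)
    a, b = log_rt[compromised], log_rt[~compromised]
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
    return (b.mean() - a.mean()) / pooled_sd   # large => faster on compromised items

rng = np.random.default_rng(2)
log_rt = rng.normal(4.0, 0.5, size=40)
log_rt[:8] -= 1.0                 # unusually fast on 8 compromised items
flags = np.zeros(40, dtype=bool); flags[:8] = True
print(round(rt_contrast(log_rt, flags), 2))
```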
Ilgun Dibek, Munevver – International Journal of Educational Methodology, 2021
Response times are an important source of information about how individuals perform during a test. The main purpose of this study is to show that survival models can be used with educational data. Accordingly, data sets of items measuring literacy, numeracy and problem-solving skills of the countries participating…
Descriptors: Reaction Time, Test Items, Adults, Foreign Countries
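Survival analysis treats a response time as a time-to-event outcome that may be censored (e.g., by a time limit). A generic, hand-rolled Kaplan-Meier sketch of that idea is below; the simulated times and the 60-second censoring limit are assumptions, and this is not the study's analysis.

```python
import numpy as np

def kaplan_meier(times, observed):
    """Kaplan-Meier estimate of S(t) = P(response time > t).
    `observed` is False for censored cases (e.g., no answer before the limit)."""
    times = np.asarray(times, dtype=float)
    observed = np.asarray(observed, dtype=bool)
    order = np.argsort(times)
    times, observed = times[order], observed[order]
    at_risk = len(times)
    surv, t_out, s_out = 1.0, [], []
    for t, event in zip(times, observed):
        if event:                          # an observed response at time t
            surv *= (at_risk - 1) / at_risk
            t_out.append(t)
            s_out.append(surv)
        at_risk -= 1
    return np.array(t_out), np.array(s_out)

rng = np.random.default_rng(3)
rt = rng.lognormal(3.0, 0.6, size=100)
answered = rt < 60.0                        # slower responses are censored at 60 s
t, s = kaplan_meier(np.minimum(rt, 60.0), answered)
print("estimated P(RT > 30 s):", round(float(s[t <= 30].min()), 2))
```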
Sinharay, Sandip; Johnson, Matthew S. – Grantee Submission, 2019
According to Wollack and Schoenig (2018), benefitting from item preknowledge is one of the three broad types of test fraud that occur in educational assessments. We use tools from constrained statistical inference to suggest a new statistic that is based on item scores and response times and can be used to detect the examinees who may have…
Descriptors: Scores, Test Items, Reaction Time, Cheating
Courrieu, Pierre; Rey, Arnaud – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2015
Recently, Adelman, Marquis, Sabatos-DeVito, and Estes (2013) formulated severe criticisms about approaches based on averaging item response times (RTs) over participants and associated methods for estimating the amount of item variance that models should try to account for. Their main argument was that item effects include stable idiosyncratic…
Descriptors: Reaction Time, Test Items, Statistical Analysis, Validity
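The debate here concerns how much of the variance in by-item mean response times is reproducible and should therefore be targeted by models. One simple illustration of that question (not Courrieu and Rey's estimator) is to split participants at random and correlate the two resulting sets of item means; the simulated effect sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

n_participants, n_items = 100, 60
# Simulated RTs: a stable item effect plus residual noise (illustrative scale).
item_effect = rng.normal(0.0, 0.15, size=n_items)
rt = 3.5 + item_effect + rng.normal(0.0, 0.4, size=(n_participants, n_items))

# Split-half check on item means averaged over participants: a high correlation
# suggests the item means carry reproducible, model-able variance.
half = rng.permutation(n_participants) < n_participants // 2
means_a = rt[half].mean(axis=0)
means_b = rt[~half].mean(axis=0)
print("split-half correlation of item means:",
      round(np.corrcoef(means_a, means_b)[0, 1], 2))
```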
Ranger, Jochen; Kuhn, Jörg-Tobias – Journal of Educational and Behavioral Statistics, 2015
In this article, a latent trait model is proposed for the response times in psychological tests. The latent trait model is based on the linear transformation model and subsumes popular models from survival analysis, like the proportional hazards model and the proportional odds model. At the core of the model is the assumption that an unspecified monotone…
Descriptors: Psychological Testing, Reaction Time, Statistical Analysis, Models
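In a linear transformation model, some monotone transform of the response time is linear in person and item effects, and particular error distributions recover the proportional hazards or proportional odds special cases. The sketch below simulates from a proportional hazards special case with an exponential baseline; the parameter names, signs, and scale are illustrative assumptions, not the authors' parameterization.

```python
import numpy as np

rng = np.random.default_rng(5)

n_persons, n_items = 500, 20
theta = rng.normal(0.0, 0.5, size=n_persons)    # person speed (illustrative)
beta = rng.normal(0.0, 0.5, size=n_items)       # item slowness (illustrative)

# Proportional hazards special case: hazard(t) = lambda0 * exp(theta_p - beta_i),
# so faster persons and less time-intensive items respond sooner on average.
lambda0 = 1.0 / 30.0                             # baseline rate, ~30 s time scale
rate = lambda0 * np.exp(theta[:, None] - beta[None, :])
rt = rng.exponential(1.0 / rate)                 # person-by-item response times

print("median simulated RT (s):", round(float(np.median(rt)), 1))
```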
Roche, Thomas; Harrington, Michael – Journal of Further and Higher Education, 2018
English language programmes provide established pathways for international students seeking university admission in countries such as Australia and the United Kingdom. In order to refer international applicants to appropriate levels and durations of English language support prior to matriculation into their main course of study, pathway providers…
Descriptors: Student Placement, College Admission, College Students, Foreign Students
Alpayar, Cagla; Gulleroglu, H. Deniz – Educational Research and Reviews, 2017
The aim of this research is to determine whether students' test performance and approaches to test questions change based on the type of mathematics questions (visual or verbal) administered to them. This research is based on a mixed-design model. The quantitative data are gathered from 297 seventh grade students, attending seven different middle…
Descriptors: Foreign Countries, Middle School Students, Grade 7, Student Evaluation
Attali, Yigal; Laitusis, Cara; Stone, Elizabeth – Educational and Psychological Measurement, 2016
There are many reasons to believe that open-ended (OE) and multiple-choice (MC) items elicit different cognitive demands of students. However, empirical evidence that supports this view is lacking. In this study, we investigated the reactions of test takers to an interactive assessment with immediate feedback and answer-revision opportunities for…
Descriptors: Test Items, Questioning Techniques, Differences, Student Reaction
Kahraman, Nilüfer – Eurasian Journal of Educational Research, 2014
Problem: Practitioners working with multiple-choice tests have long utilized Item Response Theory (IRT) models to evaluate the performance of test items for quality assurance. The use of similar applications for performance tests, however, is often encumbered due to the challenges encountered in working with complicated data sets in which local…
Descriptors: Item Response Theory, Licensing Examinations (Professions), Performance Based Assessment, Computer Simulation
Jensen, Nate; Rice, Andrew; Soland, James – Educational Evaluation and Policy Analysis, 2018
While most educators assume that not all students try their best on achievement tests, no current research examines if behaviors associated with low test effort, like rapidly guessing on test items, affect teacher value-added estimates. In this article, we examined the prevalence of rapid guessing to determine if this behavior varied by grade,…
Descriptors: Item Response Theory, Value Added Models, Achievement Tests, Test Items
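Rapid guesses are commonly flagged with a per-item response-time threshold, and the share of non-rapid responses per student is often summarized as response time effort. The fixed 3-second threshold and the simulated data below are illustrative choices, not the thresholds or data used in the study.

```python
import numpy as np

rng = np.random.default_rng(6)

n_students, n_items = 1000, 40
rt = rng.lognormal(mean=3.2, sigma=0.6, size=(n_students, n_items))
# Inject some disengaged responding: 5% of responses become very fast guesses.
guess_mask = rng.random((n_students, n_items)) < 0.05
rt[guess_mask] = rng.uniform(0.5, 2.5, size=guess_mask.sum())

THRESHOLD = 3.0                                  # seconds; illustrative choice
rapid = rt < THRESHOLD
response_time_effort = 1.0 - rapid.mean(axis=1)  # share of non-rapid responses per student

print("overall rapid-guess rate:", round(float(rapid.mean()), 3))
print("students with effort < 0.9:", int((response_time_effort < 0.9).sum()))
```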
Wang, Chun; Fan, Zhewen; Chang, Hua-Hua; Douglas, Jeffrey A. – Journal of Educational and Behavioral Statistics, 2013
The item response times (RTs) collected from computerized testing represent an underutilized type of information about items and examinees. In addition to knowing the examinees' responses to each item, we can investigate the amount of time examinees spend on each item. Current models for RTs mainly focus on parametric models, which have the…
Descriptors: Reaction Time, Computer Assisted Testing, Test Items, Accuracy
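A common parametric choice for response times is the lognormal model with an item time-intensity parameter and a person speed parameter. The crude moment-based recovery below is only a sketch of that parametric baseline under simulated data; it is not necessarily the modeling approach taken in the article, and the parameter names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

n_persons, n_items = 400, 25
tau = rng.normal(0.0, 0.3, size=n_persons)       # person speed (illustrative)
beta = rng.normal(3.5, 0.4, size=n_items)        # item time intensity (illustrative)
log_rt = beta[None, :] - tau[:, None] + rng.normal(0.0, 0.3, size=(n_persons, n_items))

# Moment-based recovery under the lognormal model: item time intensities are
# (up to a constant) the column means of log RTs; person speeds are the
# negated, centered row means.
beta_hat = log_rt.mean(axis=0)
tau_hat = -(log_rt.mean(axis=1) - log_rt.mean())

print("corr(beta, beta_hat):", round(np.corrcoef(beta, beta_hat)[0, 1], 2))
print("corr(tau, tau_hat):  ", round(np.corrcoef(tau, tau_hat)[0, 1], 2))
```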
Maris, Gunter; van der Maas, Han – Psychometrika, 2012
Starting from an explicit scoring rule for time limit tasks incorporating both response time and accuracy, and a definite trade-off between speed and accuracy, a response model is derived. Since the scoring rule is interpreted as a sufficient statistic, the model belongs to the exponential family. The various marginal and conditional distributions…
Descriptors: Item Response Theory, Scoring, Reaction Time, Accuracy
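The scoring rule referred to here is, in the published article, the signed residual time rule: with accuracy x in {0, 1}, response time t, and item time limit d, the score is (2x - 1)(d - t), so fast correct answers score high and fast errors score very low. A small sketch of that rule (the example item and times are made up):

```python
import numpy as np

def signed_residual_time(correct, rt, time_limit):
    """Signed residual time score (2x - 1) * (d - t): fast correct responses
    earn close to +d, fast incorrect responses close to -d, and responses
    at the time limit earn 0 regardless of accuracy."""
    correct = np.asarray(correct, dtype=float)
    rt = np.asarray(rt, dtype=float)
    return (2.0 * correct - 1.0) * (time_limit - rt)

# Example: a 30-second item, answered correctly in 5 s, incorrectly in 5 s,
# and correctly in 29 s.
print(signed_residual_time([1, 0, 1], [5.0, 5.0, 29.0], time_limit=30.0))
# -> [ 25. -25.   1.]
```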
