Publication Date
In 2025: 2
Since 2024: 8
Since 2021 (last 5 years): 52
Since 2016 (last 10 years): 97
Since 2006 (last 20 years): 151
Descriptor
Computer Assisted Testing: 190
Reaction Time: 190
Foreign Countries: 60
Test Items: 48
Comparative Analysis: 40
Task Analysis: 40
Accuracy: 39
Item Response Theory: 35
Scores: 31
Correlation: 30
Cognitive Processes: 25
Author
Wise, Steven L.: 7
Chang, Hua-Hua: 5
Chang, Shu-Ren: 3
Kong, Xiaojing: 3
Plake, Barbara S.: 3
Soland, James: 3
Wang, Chun: 3
Alderton, David L.: 2
Fan, Zhewen: 2
Ferdous, Abdullah A.: 2
Fox, Jean-Paul: 2
Audience
Researchers: 4
Location
Germany: 8
China: 5
Canada: 4
Finland: 4
Japan: 4
Australia: 3
Denmark: 3
France: 3
Ireland: 3
Netherlands: 3
Norway: 3
Ebru Balta; Celal Deha Dogan – SAGE Open, 2024
As computer-based testing becomes more prevalent, the attention paid to response time (RT) in assessment practice and psychometric research correspondingly increases. This study explores the rate of Type I error in detecting preknowledge cheating behaviors, the power of the Kullback-Leibler (KL) divergence measure, and the L person fit statistic…
Descriptors: Cheating, Accuracy, Reaction Time, Computer Assisted Testing
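The KL divergence measure mentioned above quantifies how far an examinee's response-time distribution departs from a reference distribution. A minimal sketch of the divergence computation over discretized RT histograms (illustrative only; the function name and example counts are hypothetical, not the authors' exact statistic):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two discrete
    distributions given as (unnormalized) histogram bin counts."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical example: an examinee with item preknowledge answers
# much faster than the reference group, shifting mass to fast bins.
reference_rt_hist = [2, 10, 25, 30, 20, 8]   # typical RT bin counts
examinee_rt_hist  = [30, 25, 10, 5, 3, 2]    # shifted toward fast responses

d = kl_divergence(examinee_rt_hist, reference_rt_hist)
```

Larger values of `d` indicate a response-time pattern less consistent with the reference group; an actual flagging rule would calibrate a cutoff to control the Type I error rate studied in the article.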
He, Yinhong; Qi, Yuanyuan – Journal of Educational Measurement, 2023
In multidimensional computerized adaptive testing (MCAT), item selection strategies are generally constructed based on responses, and they do not consider the response times required by items. This study constructed two new criteria (referred to as DT-inc and DT) for MCAT item selection by utilizing information from response times. The new designs…
Descriptors: Reaction Time, Adaptive Testing, Computer Assisted Testing, Test Items
Shin, Jinnie; Guo, Qi; Morin, Maxim – Educational Measurement: Issues and Practice, 2023
With the physical-distancing restrictions imposed during the COVID-19 pandemic, remote proctoring has emerged as an alternative to traditional onsite proctoring to ensure the continuity of essential assessments, such as computer-based medical licensing exams. Recent literature has highlighted the significant impact of different proctoring…
Descriptors: Foreign Countries, High Stakes Tests, Computer Assisted Testing, Licensing Examinations (Professions)
Gorgun, Guher; Bulut, Okan – Large-scale Assessments in Education, 2023
In low-stakes assessment settings, students' performance is not only influenced by students' ability level but also their test-taking engagement. In computerized adaptive tests (CATs), disengaged responses (e.g., rapid guesses) that fail to reflect students' true ability levels may lead to the selection of less informative items and thereby…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
Zhu, Hongyue; Jiao, Hong; Gao, Wei; Meng, Xiangbin – Journal of Educational and Behavioral Statistics, 2023
Change-point analysis (CPA) is a method for detecting abrupt changes in parameter(s) underlying a sequence of random variables. It has been applied to detect examinees' aberrant test-taking behavior by identifying abrupt test performance change. Previous studies utilized maximum likelihood estimations of ability parameters, focusing on detecting…
Descriptors: Bayesian Statistics, Test Wiseness, Behavior Problems, Reaction Time
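The change-point analysis described above can be illustrated with a simple scan statistic: for each candidate split point in a 0/1 response sequence, compare mean accuracy before and after the split. This is a deliberately simplified sketch (the function and data are hypothetical; the article's Bayesian formulation is more involved):

```python
import numpy as np

def best_change_point(responses):
    """Scan all split points of a dichotomous response sequence and
    return the index maximizing the absolute difference in mean
    accuracy before vs. after the split (a basic CPA statistic)."""
    x = np.asarray(responses, dtype=float)
    best_k, best_stat = None, -1.0
    for k in range(1, len(x)):
        stat = abs(x[:k].mean() - x[k:].mean())
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

# Hypothetical examinee: accurate early, then performance collapses
# (e.g., rapid guessing near the end of a timed test).
seq = [1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0]
k, stat = best_change_point(seq)
```

A large maximum statistic at some split suggests an abrupt performance change; formal procedures compare it against a null distribution before flagging the examinee.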
Dongmei Li; Shalini Kapoor; Ann Arthur; Chi-Yu Huang; YoungWoo Cho; Chen Qiu; Hongling Wang – ACT Education Corp., 2025
Starting in April 2025, ACT will introduce enhanced forms of the ACT® test for national online testing, with a full rollout to all paper and online test takers in national, state and district, and international test administrations by Spring 2026. ACT introduced major updates by changing the test lengths and testing times, providing more time per…
Descriptors: College Entrance Examinations, Testing, Change, Scoring
Mi-Hyun Bang; Young-Min Lee – Education and Information Technologies, 2024
The Human Resources Development Service of Korea developed a digital exam for five representative engineering categories and conducted a pilot study comparing the findings with the paper-and-pencil exam results from the last three years. This study aimed to compare the test efficiency between digital and paper-and-pencil examinations. A digital…
Descriptors: Engineering Education, Computer Assisted Testing, Foreign Countries, Human Resources
Barnali Mazumdar; Nora De la Mora; Teresa Roberts; Alexander Swiderski; Maria Kapantzoglou; Gerasimos Fergadiotis – Journal of Speech, Language, and Hearing Research, 2024
Purpose: Anomia, or word-finding difficulty, is a prevalent and persistent feature of aphasia, a neurogenic language disorder affecting millions of people in the United States. Anomia assessments are essential for measuring performance and monitoring outcomes in clinical settings. This study aims to evaluate the reliability of response time (RT)…
Descriptors: Pictorial Stimuli, Naming, Aphasia, Reaction Time
Cheng, Ying; Shao, Can – Educational and Psychological Measurement, 2022
Computer-based and web-based testing have become increasingly popular in recent years. Their popularity has dramatically expanded the availability of response time data. Compared to the conventional item response data that are often dichotomous or polytomous, response time has the advantage of being continuous and can be collected in an…
Descriptors: Reaction Time, Test Wiseness, Computer Assisted Testing, Simulation
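One standard way to exploit the continuity of response times noted in this abstract is van der Linden's lognormal RT model, in which log response time is normally distributed with mean equal to the item's time intensity minus the person's speed. A minimal simulation sketch (parameter values are invented for illustration; this is not necessarily the model used in the article):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_log_rt(beta, tau, alpha, n):
    """Draw log response times under the lognormal RT model:
    log T ~ Normal(beta - tau, (1/alpha)^2), where beta is the item
    time intensity, tau the person speed, alpha the precision."""
    return rng.normal(loc=beta - tau, scale=1.0 / alpha, size=n)

# Hypothetical parameters: a time-intensive item (beta = 4.0)
# answered by a relatively fast examinee (tau = 1.0).
log_rts = simulate_log_rt(beta=4.0, tau=1.0, alpha=2.0, n=10_000)
mean_log_rt = log_rts.mean()  # should be near beta - tau = 3.0
```

Because the model is continuous, RT-based statistics avoid the information loss of dichotomizing responses, which is the advantage the abstract alludes to.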
Rios, Joseph A.; Deng, Jiayi – Large-scale Assessments in Education, 2021
Background: In testing contexts that are predominantly concerned with power, rapid guessing (RG) has the potential to undermine the validity of inferences made from educational assessments, as such responses are unreflective of the knowledge, skills, and abilities assessed. Given this concern, practitioners/researchers have utilized a multitude of…
Descriptors: Test Wiseness, Guessing (Tests), Reaction Time, Computer Assisted Testing
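Among the many RG identification methods this article surveys, a widely used family sets an item-level time threshold; normative-threshold approaches, for example, flag responses faster than a fixed percentage of the item's mean response time. A hedged sketch of an NT10-style rule (10% of mean RT with a cap; the function, cap value, and data are illustrative, and exact details vary across studies):

```python
import numpy as np

def rapid_guess_flags(rts, pct=0.10, cap=10.0):
    """Flag responses as rapid guesses when RT falls below
    pct * mean(item RT), with the threshold capped (seconds),
    in the spirit of normative-threshold methods."""
    rts = np.asarray(rts, dtype=float)
    threshold = min(pct * rts.mean(), cap)
    return rts < threshold

# Hypothetical response times (seconds) for one item across examinees.
item_rts = [45.0, 2.0, 38.0, 50.0, 1.5, 41.0]
flags = rapid_guess_flags(item_rts)
```

Flagged responses are then typically filtered out or down-weighted before scoring, since they carry little information about examinee ability.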
Nagy, Gabriel; Ulitzsch, Esther; Lindner, Marlit Annalena – Journal of Computer Assisted Learning, 2023
Background: Item response times in computerized assessments are frequently used to identify rapid guessing behaviour as a manifestation of response disengagement. However, non-rapid responses (i.e., with longer response times) are not necessarily engaged, which means that response-time-based procedures could overlook disengaged responses.…
Descriptors: Guessing (Tests), Academic Persistence, Learner Engagement, Computer Assisted Testing
Aaron McVay – ProQuest LLC, 2021
As assessments move toward computerized testing and continuous testing becomes available, the need for rapid assembly of forms is increasing. The objective of this study was to investigate variability in assembled forms through the lens of first- and second-order equity properties of equating, by examining three factors and their interactions. Two…
Descriptors: Automation, Computer Assisted Testing, Test Items, Reaction Time
Dudschig, Carolin; Kaup, Barbara; Mackenzie, Ian Grant – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2023
Concerning the evolution of our mind, it is of core interest to understand how high-level cognitive functions are embedded within low-level cognitive functions. While the grounding of meaning units such as content words and sentences has been widely investigated, little is known about logical cognitive operations and their association with…
Descriptors: Decision Making, Task Analysis, Nonverbal Communication, Emotional Response
Ulitzsch, Esther; von Davier, Matthias; Pohl, Steffi – Educational and Psychological Measurement, 2020
So far, modeling approaches for not-reached items have considered one single underlying process. However, missing values at the end of a test can occur for a variety of reasons. On the one hand, examinees may not reach the end of a test due to time limits and lack of working speed. On the other hand, examinees may not attempt all items and quit…
Descriptors: Item Response Theory, Test Items, Response Style (Tests), Computer Assisted Testing
Christina Hubertina Helena Maria Heemskerk; Claudia M. Roebers – Journal of Cognition and Development, 2024
Young children tend to rely on reactive cognitive control (e.g., strongly slowing down after an error), even when task accuracy would benefit from proactive cognitive control (taking a slower task approach up front). We investigated whether giving young primary school children opportunities to repeatedly experience tasks where success rates depend on…
Descriptors: Cognitive Ability, Reaction Time, Accuracy, Feedback (Response)