Publication Date
| Date Range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 1 |
| Since 2022 (last 5 years) | 5 |
| Since 2017 (last 10 years) | 8 |
| Since 2007 (last 20 years) | 11 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Comparative Analysis | 11 |
| Reaction Time | 11 |
| Test Items | 11 |
| Foreign Countries | 6 |
| Scores | 6 |
| Item Response Theory | 4 |
| Accuracy | 3 |
| Achievement Tests | 3 |
| Computer Assisted Testing | 3 |
| Correlation | 3 |
| Difficulty Level | 3 |
Author
| Author | Count |
| --- | --- |
| Akhtar, Hanif | 1 |
| Ali, Usama S. | 1 |
| Alpayar, Cagla | 1 |
| Ames, Allison J. | 1 |
| Ann Arthur | 1 |
| Bowden, Harriet Wood | 1 |
| Chang, Hua-Hua | 1 |
| Chen Qiu | 1 |
| Chi-Yu Huang | 1 |
| Deribo, Tobias | 1 |
| Dongmei Li | 1 |
Publication Type
| Publication Type | Count |
| --- | --- |
| Reports - Research | 11 |
| Journal Articles | 9 |
| Speeches/Meeting Papers | 1 |
Education Level
| Education Level | Count |
| --- | --- |
| Higher Education | 6 |
| Postsecondary Education | 6 |
| Grade 3 | 1 |
| Middle Schools | 1 |
| Secondary Education | 1 |
Location
| Location | Count |
| --- | --- |
| Germany | 1 |
| Indonesia | 1 |
| Taiwan | 1 |
| Turkey | 1 |
| Turkey (Ankara) | 1 |
Assessments and Surveys
| Assessment | Count |
| --- | --- |
| ACT Assessment | 1 |
| Program for International… | 1 |
| Test of English for… | 1 |
Dongmei Li; Shalini Kapoor; Ann Arthur; Chi-Yu Huang; YoungWoo Cho; Chen Qiu; Hongling Wang – ACT Education Corp., 2025
Starting in April 2025, ACT will introduce enhanced forms of the ACT® test for national online testing, with a full rollout to all paper and online test takers in national, state and district, and international test administrations by Spring 2026. ACT introduced major updates by changing the test lengths and testing times, providing more time per…
Descriptors: College Entrance Examinations, Testing, Change, Scoring
Kuang, Huan; Sahin, Fusun – Large-scale Assessments in Education, 2023
Background: Examinees may not make enough effort when responding to test items if the assessment has no consequence for them. These disengaged responses can be problematic in low-stakes, large-scale assessments because they can bias item parameter estimates. However, the amount of bias, and whether this bias is similar across administrations, is…
Descriptors: Test Items, Comparative Analysis, Mathematics Tests, Reaction Time
Deribo, Tobias; Goldhammer, Frank; Kroehne, Ulf – Educational and Psychological Measurement, 2023
As researchers in the social sciences, we are often interested in studying not directly observable constructs through assessments and questionnaires. But even in a well-designed and well-implemented study, rapid-guessing behavior may occur. Under rapid-guessing behavior, a task is skimmed shortly but not read and engaged with in-depth. Hence, a…
Descriptors: Reaction Time, Guessing (Tests), Behavior Patterns, Bias
Akhtar, Hanif – International Association for Development of the Information Society, 2022
When examinees perceive a test as low stakes, it is logical to assume that some of them will not put out their maximum effort. This condition makes the validity of the test results more complicated. Although many studies have investigated motivational fluctuation across tests during a testing session, only a small number of studies have…
Descriptors: Intelligence Tests, Student Motivation, Test Validity, Student Attitudes
Ames, Allison J. – Educational and Psychological Measurement, 2022
Individual response style behaviors, unrelated to the latent trait of interest, may influence responses to ordinal survey items. Response style can introduce bias in the total score with respect to the trait of interest, threatening valid interpretation of scores. Despite claims of response style stability across scales, there has been little…
Descriptors: Response Style (Tests), Individual Differences, Scores, Test Items
Türkoguz, Suat – Anatolian Journal of Education, 2020
This study aimed to investigate the item "Response Time Fidelity scores" ("RTFs"), "Kuder-Richardson Reliability" ("KR₂₀") and "Cronbach's Alpha Reliability" ("alpha") coefficients, calculate "KR₂₀" coefficients with "RTFs" for 30 threshold…
Descriptors: Comparative Analysis, Reaction Time, Multiple Choice Tests, Scores
Alpayar, Cagla; Gulleroglu, H. Deniz – Educational Research and Reviews, 2017
The aim of this research is to determine whether students' test performance and approaches to test questions change based on the type of mathematics questions (visual or verbal) administered to them. This research is based on a mixed-design model. The quantitative data are gathered from 297 seventh grade students, attending seven different middle…
Descriptors: Foreign Countries, Middle School Students, Grade 7, Student Evaluation
Sieh, Yu-cheng – Taiwan Journal of TESOL, 2016
In an attempt to compare how orthography and phonology interact in EFL learners with different reading abilities, online measures were administered in this study to two groups of university learners, indexed by their reading scores on the Test of English for International Communication (TOEIC). In terms of "accuracy," the less-skilled…
Descriptors: Comparative Analysis, Word Recognition, Phonology, English (Second Language)
Jensen, Nate; Rice, Andrew; Soland, James – Educational Evaluation and Policy Analysis, 2018
While most educators assume that not all students try their best on achievement tests, no current research examines if behaviors associated with low test effort, like rapidly guessing on test items, affect teacher value-added estimates. In this article, we examined the prevalence of rapid guessing to determine if this behavior varied by grade,…
Descriptors: Item Response Theory, Value Added Models, Achievement Tests, Test Items
Ali, Usama S.; Chang, Hua-Hua – ETS Research Report Series, 2014
Adaptive testing is advantageous in that it provides more efficient ability estimates with fewer items than linear testing does. Item-driven adaptive pretesting may also offer similar advantages, and verification of such a hypothesis about item calibration was the main objective of this study. A suitability index (SI) was introduced to adaptively…
Descriptors: Adaptive Testing, Simulation, Pretests Posttests, Test Items
Lado, Beatriz; Bowden, Harriet Wood; Stafford, Catherine A.; Sanz, Cristina – Language Teaching Research, 2014
The current study compared the effectiveness of computer-delivered task-essential practice coupled with feedback consisting of (1) negative evidence with metalinguistic information (NE+MI) or (2) negative evidence without metalinguistic information (NE-MI) in promoting absolute beginners' (n = 58) initial learning of aspects of Latin…
Descriptors: Second Language Learning, Accuracy, Morphology (Languages), Syntax
