Showing 1 to 15 of 21 results
Peer reviewed
Gregory M. Hurtz; Regi Mucino – Journal of Educational Measurement, 2024
The Lognormal Response Time (LNRT) model measures the speed of test-takers relative to the normative time demands of items on a test. The resulting speed parameters and model residuals are often analyzed for evidence of anomalous test-taking behavior associated with fast and poorly fitting response time patterns. Extending this model, we…
Descriptors: Student Reaction, Reaction Time, Response Style (Tests), Test Items
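The entry above describes the lognormal response time (LNRT) model only in outline. As a rough illustration of the base model (not the authors' extension), a simulation with simple moment estimators might look like the following; all parameter values and the identification constraint mean(tau) = 0 are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Lognormal RT model: ln T_ij = beta_j - tau_i + eps,  eps ~ N(0, 1/alpha_j^2),
# where tau_i is person speed and beta_j is item time intensity.
n_persons, n_items = 500, 20
tau = rng.normal(0.0, 0.3, n_persons)        # person speed (higher = faster)
beta = rng.normal(4.0, 0.5, n_items)         # item time intensity
alpha = np.full(n_items, 2.0)                # item time-discrimination
log_t = beta - tau[:, None] + rng.normal(0, 1.0 / alpha, (n_persons, n_items))

# Moment-based estimates under the constraint mean(tau) = 0:
beta_hat = log_t.mean(axis=0)                # E[ln T_.j] = beta_j - mean(tau)
tau_hat = (beta_hat - log_t).mean(axis=1)    # speed relative to item demands

# Standardized residuals; large negative values flag unusually fast responses,
# the kind of pattern screened for anomalous test-taking behavior.
resid = alpha * (log_t - (beta_hat - tau_hat[:, None]))
fast_flag_rate = np.mean(resid < -3)
```

In practice these parameters are estimated jointly (e.g., by marginal maximum likelihood or MCMC) rather than by moments; the sketch only shows what the speed parameters and residuals represent.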
Peer reviewed
Man, Kaiwen; Harring, Jeffrey R. – Educational and Psychological Measurement, 2023
Preknowledge cheating jeopardizes the validity of inferences based on test results. Many methods have been developed to detect preknowledge cheating by jointly analyzing item responses and response times. Gaze fixations, an essential eye-tracker measure, can be utilized to help detect aberrant testing behavior with improved accuracy beyond using…
Descriptors: Cheating, Reaction Time, Test Items, Responses
Peer reviewed
Ella Anghel; Lale Khorramdel; Matthias von Davier – Large-scale Assessments in Education, 2024
As the use of process data in large-scale educational assessments is becoming more common, it is clear that data on examinees' test-taking behaviors can illuminate their performance, and can have crucial ramifications concerning assessments' validity. A thorough review of the literature in the field may inform researchers and practitioners of…
Descriptors: Educational Assessment, Test Validity, Test Items, Reaction Time
Peer reviewed
Man, Kaiwen; Harring, Jeffrey R. – Educational and Psychological Measurement, 2021
Many approaches have been proposed to jointly analyze item responses and response times to understand behavioral differences between normally and aberrantly behaved test-takers. Biometric information, such as data from eye trackers, can be used to better identify these deviant testing behaviors in addition to more conventional data types. Given…
Descriptors: Cheating, Item Response Theory, Reaction Time, Eye Movements
Peer reviewed
Pools, Elodie – Applied Measurement in Education, 2022
Many low-stakes assessments, such as international large-scale surveys, are administered during time-limited testing sessions and some test-takers are not able to endorse the last items of the test, resulting in not-reached (NR) items. However, because the test has no consequence for the respondents, these NR items can also stem from quitting the…
Descriptors: Achievement Tests, Foreign Countries, International Assessment, Secondary School Students
Peer reviewed
Kuang, Huan; Sahin, Fusun – Large-scale Assessments in Education, 2023
Background: Examinees may not make enough effort when responding to test items if the assessment has no consequence for them. These disengaged responses can be problematic in low-stakes, large-scale assessments because they can bias item parameter estimates. However, the amount of bias, and whether this bias is similar across administrations, is…
Descriptors: Test Items, Comparative Analysis, Mathematics Tests, Reaction Time
Peer reviewed
Deribo, Tobias; Goldhammer, Frank; Kroehne, Ulf – Educational and Psychological Measurement, 2023
As researchers in the social sciences, we are often interested in studying not directly observable constructs through assessments and questionnaires. But even in a well-designed and well-implemented study, rapid-guessing behavior may occur. Under rapid-guessing behavior, a task is skimmed shortly but not read and engaged with in-depth. Hence, a…
Descriptors: Reaction Time, Guessing (Tests), Behavior Patterns, Bias
Peer reviewed
Martin, Jessica L.; Zamboanga, Byron L.; Haase, Richard F.; Buckner, Lindsay C. – Measurement and Evaluation in Counseling and Development, 2020
The purpose of this study was to assess measurement equivalence of the 15-item Protective Behavioral Strategies Scale (PBSS) across White and Black college students. Results partially supported measurement equivalence across racial groups. Clinicians and researchers should be cautious in using the PBSS to make comparisons between White and Black…
Descriptors: Likert Scales, White Students, African American Students, Drinking
Peer reviewed
Susu Zhang; Xueying Tang; Qiwei He; Jingchen Liu; Zhiliang Ying – Grantee Submission, 2024
Computerized assessments and interactive simulation tasks are increasingly popular and afford the collection of process data, i.e., an examinee's sequence of actions (e.g., clickstreams, keystrokes) that arises from interactions with each task. Action sequence data contain rich information on the problem-solving process but are in a nonstandard,…
Descriptors: Correlation, Problem Solving, Computer Assisted Testing, Prediction
Peer reviewed
PDF on ERIC
Zari Saeedi; Hessameddin Ghanbar; Mahdi Rezaei – International Journal of Language Testing, 2024
Despite being a popular topic in language testing, cognitive load has not received enough attention in vocabulary test items. The purpose of the current study was to scrutinize the cognitive load and vocabulary test items' differences, examinees' reaction times, and perceived difficulty. To this end, 150 students were selected using…
Descriptors: Language Tests, Test Items, Difficulty Level, Vocabulary Development
Peer reviewed
Pavel Chernyavskiy; Traci S. Kutaka; Carson Keeter; Julie Sarama; Douglas Clements – Grantee Submission, 2024
When researchers code behavior that is undetectable or falls outside of the validated ordinal scale, the resultant outcomes often suffer from informative missingness. Incorrect analysis of such data can lead to biased arguments around efficacy and effectiveness in the context of experimental and intervention research. Here, we detail a new…
Descriptors: Bayesian Statistics, Mathematics Instruction, Learning Trajectories, Item Response Theory
Jing Lu; Chun Wang; Jiwei Zhang; Xue Wang – Grantee Submission, 2023
Changepoints are abrupt variations in a sequence of data in statistical inference. In educational and psychological assessments, it is pivotal to properly differentiate examinees' aberrant behaviors from solution behavior to ensure test reliability and validity. In this paper, we propose a sequential Bayesian changepoint detection algorithm to…
Descriptors: Bayesian Statistics, Behavior Patterns, Computer Assisted Testing, Accuracy
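The entry above proposes a sequential Bayesian changepoint algorithm for separating aberrant from solution behavior. As a much simpler stand-in for intuition (a single-changepoint posterior over a normal-mean shift with known noise, not the authors' sequential method; the simulated log response times are made up), one could write:

```python
import numpy as np

def changepoint_posterior(x, sigma=1.0):
    """Posterior over the changepoint index k, assuming x[:k] ~ N(mu1, sigma^2)
    and x[k:] ~ N(mu2, sigma^2) with flat priors and the means profiled out."""
    n = len(x)
    logp = np.full(n, -np.inf)               # k = 0 (no pre-change segment) excluded
    for k in range(1, n):
        a, b = x[:k], x[k:]
        rss = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
        logp[k] = -rss / (2 * sigma**2)
    logp -= logp.max()                       # stabilize before exponentiating
    p = np.exp(logp)
    return p / p.sum()

# Illustrative sequence: 12 effortful responses, then a switch to rapid guessing.
rng = np.random.default_rng(1)
log_rt = np.concatenate([rng.normal(4.0, 0.3, 12),   # solution behavior
                         rng.normal(1.0, 0.3, 8)])   # rapid guessing
post = changepoint_posterior(log_rt, sigma=0.3)
k_hat = int(np.argmax(post))                          # estimated switch point
```

A sequential version would update this posterior response by response and raise a flag once the probability of a switch exceeds a threshold.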
Peer reviewed
Lee, HyeSun; Smith, Weldon Z. – Educational and Psychological Measurement, 2020
Based on the framework of testlet models, the current study suggests the Bayesian random block item response theory (BRB IRT) model to fit forced-choice formats where an item block is composed of three or more items. To account for local dependence among items within a block, the BRB IRT model incorporated a random block effect into the response…
Descriptors: Bayesian Statistics, Item Response Theory, Monte Carlo Methods, Test Format
Peer reviewed
Pastor, Dena A.; Ong, Thai Q.; Strickman, Scott N. – Educational Assessment, 2019
The trustworthiness of low-stakes assessment results largely depends on examinee effort, which can be measured by the amount of time examinees devote to items using solution behavior (SB) indices. Because SB indices are calculated for each item, they can be used to understand how examinee motivation changes across items within a test. Latent class…
Descriptors: Behavior Patterns, Test Items, Time, Response Style (Tests)
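The entry above relies on solution behavior (SB) indices computed per item. The usual construction is a response-time threshold flag, with a mean across items giving a response-time-effort score; a minimal sketch (all times and thresholds here are made-up illustrative values, and threshold choice is itself a modeling decision) might be:

```python
import numpy as np

def sb_indices(rt, thresholds):
    """Per-item solution-behavior flags: 1 where the response time meets or
    exceeds the item's threshold (effortful), 0 where it suggests rapid,
    non-effortful responding."""
    return (np.asarray(rt) >= np.asarray(thresholds)).astype(int)

# Two examinees' response times (seconds) on three items -- made-up numbers.
rt = np.array([[12.0, 3.0, 15.0],
               [ 1.0, 2.0,  1.5]])
thresholds = np.array([5.0, 2.5, 6.0])   # illustrative per-item thresholds

sb = sb_indices(rt, thresholds)          # item-level flags per examinee
rte = sb.mean(axis=1)                    # response-time effort per examinee
```

Because the flags are item-level, they can be tracked across item position to study how effort changes over the course of a test, which is what the latent class analysis in the entry above exploits.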
Peer reviewed
Mason, Rihana S.; Bass, Lori A. – Early Education and Development, 2020
Research Findings: Research suggests children from low-income environments have vocabularies that differ from those of their higher-income peers. They may have basic knowledge of many words of which children from higher income environments have acquired sub- or supra-ordinate knowledge. This study sought to determine if children from low-income…
Descriptors: Receptive Language, Disadvantaged Environment, Vocabulary Development, Standardized Tests