Publication Date
- In 2025: 0
- Since 2024: 4
- Since 2021 (last 5 years): 10
- Since 2016 (last 10 years): 18
- Since 2006 (last 20 years): 18
Descriptor
- Behavior Patterns: 21
- Test Items: 21
- Foreign Countries: 8
- Reaction Time: 8
- Item Analysis: 5
- Scores: 5
- Bayesian Statistics: 3
- Comparative Analysis: 3
- Difficulty Level: 3
- Eye Movements: 3
- Information Technology: 3
Author
- Harring, Jeffrey R.: 2
- Man, Kaiwen: 2
- Bass, Lori A.: 1
- Bateson, David J.: 1
- Batty, Aaron Olaf: 1
- Bennett, Randy: 1
- Buckner, Lindsay C.: 1
- Caliskan, Nihat: 1
- Cetin, Munevver: 1
- Deane, Paul: 1
- Deribo, Tobias: 1
Publication Type
- Journal Articles: 21
- Reports - Research: 19
- Information Analyses: 1
- Reports - Evaluative: 1
Education Level
- Higher Education: 7
- Postsecondary Education: 7
- Secondary Education: 4
- Elementary Education: 2
- Early Childhood Education: 1
- High Schools: 1
- Kindergarten: 1
- Primary Education: 1
Location
- Japan: 2
- Canada: 1
- Germany: 1
- Iran (Tehran): 1
- Ireland: 1
- Netherlands: 1
- Turkey: 1
- United Kingdom (England): 1
- United Kingdom (Northern…: 1
- United States: 1
Assessments and Surveys
- Program for International…: 2
- Clinical Evaluation of…: 1
- Peabody Picture Vocabulary…: 1
- Program for the International…: 1
- Test of English as a Foreign…: 1
Gregory M. Hurtz; Regi Mucino – Journal of Educational Measurement, 2024
The Lognormal Response Time (LNRT) model measures the speed of test-takers relative to the normative time demands of items on a test. The resulting speed parameters and model residuals are often analyzed for evidence of anomalous test-taking behavior associated with fast and poorly fitting response time patterns. Extending this model, we…
Descriptors: Student Reaction, Reaction Time, Response Style (Tests), Test Items
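The LNRT model described in the entry above is typically specified as ln T[i,j] = beta[j] - tau[i] + error, where beta[j] is the item's time intensity, tau[i] is the examinee's speed, and standardized residuals flag unusually fast responses. The sketch below illustrates this idea with a crude moment-based estimator; the function name and estimation approach are illustrative assumptions, not the authors' extension of the model.

```python
# Illustrative sketch of the standard lognormal response-time (LNRT) model:
#   ln T[i,j] = beta[j] - tau[i] + eps,  eps ~ N(0, 1/alpha[j]^2)
# The moment-based estimator here is a simplification for illustration only.
import numpy as np

def fit_lnrt(log_times):
    """Crude moment estimates for a (persons x items) matrix of ln(T)."""
    beta = log_times.mean(axis=0)                # item time intensity
    tau = -(log_times - beta).mean(axis=1)       # person speed (higher = faster)
    resid = log_times - (beta - tau[:, None])    # raw log-time residuals
    alpha = 1.0 / resid.std(axis=0, ddof=1)      # item time discrimination
    return beta, tau, alpha, resid * alpha       # standardized residuals

# Simulated data: 200 examinees, 10 items
rng = np.random.default_rng(0)
tau_true = rng.normal(0.0, 0.3, 200)
beta_true = rng.normal(4.0, 0.5, 10)
logT = beta_true - tau_true[:, None] + rng.normal(0.0, 0.4, (200, 10))

beta, tau, alpha, z = fit_lnrt(logT)
# Large negative standardized residuals mark suspiciously fast responses
fast_flags = z < -2.0
```

In anomaly-detection applications like the one in this article, the flagged residual patterns would then be examined jointly with response accuracy for evidence of aberrant test-taking behavior.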
Man, Kaiwen; Harring, Jeffrey R. – Educational and Psychological Measurement, 2023
Preknowledge cheating jeopardizes the validity of inferences based on test results. Many methods have been developed to detect preknowledge cheating by jointly analyzing item responses and response times. Gaze fixations, an essential eye-tracker measure, can be utilized to help detect aberrant testing behavior with improved accuracy beyond using…
Descriptors: Cheating, Reaction Time, Test Items, Responses
Ella Anghel; Lale Khorramdel; Matthias von Davier – Large-scale Assessments in Education, 2024
As the use of process data in large-scale educational assessments is becoming more common, it is clear that data on examinees' test-taking behaviors can illuminate their performance, and can have crucial ramifications concerning assessments' validity. A thorough review of the literature in the field may inform researchers and practitioners of…
Descriptors: Educational Assessment, Test Validity, Test Items, Reaction Time
Man, Kaiwen; Harring, Jeffrey R. – Educational and Psychological Measurement, 2021
Many approaches have been proposed to jointly analyze item responses and response times to understand behavioral differences between normally and aberrantly behaved test-takers. Biometric information, such as data from eye trackers, can be used to better identify these deviant testing behaviors in addition to more conventional data types. Given…
Descriptors: Cheating, Item Response Theory, Reaction Time, Eye Movements
Pools, Elodie – Applied Measurement in Education, 2022
Many low-stakes assessments, such as international large-scale surveys, are administered during time-limited testing sessions and some test-takers are not able to endorse the last items of the test, resulting in not-reached (NR) items. However, because the test has no consequence for the respondents, these NR items can also stem from quitting the…
Descriptors: Achievement Tests, Foreign Countries, International Assessment, Secondary School Students
Kuang, Huan; Sahin, Fusun – Large-scale Assessments in Education, 2023
Background: Examinees may not make enough effort when responding to test items if the assessment has no consequence for them. These disengaged responses can be problematic in low-stakes, large-scale assessments because they can bias item parameter estimates. However, the amount of bias, and whether this bias is similar across administrations, is…
Descriptors: Test Items, Comparative Analysis, Mathematics Tests, Reaction Time
Deribo, Tobias; Goldhammer, Frank; Kroehne, Ulf – Educational and Psychological Measurement, 2023
As researchers in the social sciences, we are often interested in studying not directly observable constructs through assessments and questionnaires. But even in a well-designed and well-implemented study, rapid-guessing behavior may occur. Under rapid-guessing behavior, a task is skimmed shortly but not read and engaged with in-depth. Hence, a…
Descriptors: Reaction Time, Guessing (Tests), Behavior Patterns, Bias
Martin, Jessica L.; Zamboanga, Byron L.; Haase, Richard F.; Buckner, Lindsay C. – Measurement and Evaluation in Counseling and Development, 2020
The purpose of this study was to assess measurement equivalence of the 15-item Protective Behavioral Strategies Scale (PBSS) across White and Black college students. Results partially supported measurement equivalence across racial groups. Clinicians and researchers should be cautious in using the PBSS to make comparisons between White and Black…
Descriptors: Likert Scales, White Students, African American Students, Drinking
Susu Zhang; Xueying Tang; Qiwei He; Jingchen Liu; Zhiliang Ying – Grantee Submission, 2024
Computerized assessments and interactive simulation tasks are increasingly popular and afford the collection of process data, i.e., an examinee's sequence of actions (e.g., clickstreams, keystrokes) that arises from interactions with each task. Action sequence data contain rich information on the problem-solving process but are in a nonstandard,…
Descriptors: Correlation, Problem Solving, Computer Assisted Testing, Prediction
Zari Saeedi; Hessameddin Ghanbar; Mahdi Rezaei – International Journal of Language Testing, 2024
Despite being a popular topic in language testing, cognitive load has not received enough attention in vocabulary test items. The purpose of the current study was to scrutinize the cognitive load and vocabulary test items' differences, examinees' reaction times, and perceived difficulty. To this end, 150 students were selected using…
Descriptors: Language Tests, Test Items, Difficulty Level, Vocabulary Development
Lee, HyeSun; Smith, Weldon Z. – Educational and Psychological Measurement, 2020
Based on the framework of testlet models, the current study suggests the Bayesian random block item response theory (BRB IRT) model to fit forced-choice formats where an item block is composed of three or more items. To account for local dependence among items within a block, the BRB IRT model incorporated a random block effect into the response…
Descriptors: Bayesian Statistics, Item Response Theory, Monte Carlo Methods, Test Format
Pastor, Dena A.; Ong, Thai Q.; Strickman, Scott N. – Educational Assessment, 2019
The trustworthiness of low-stakes assessment results largely depends on examinee effort, which can be measured by the amount of time examinees devote to items using solution behavior (SB) indices. Because SB indices are calculated for each item, they can be used to understand how examinee motivation changes across items within a test. Latent class…
Descriptors: Behavior Patterns, Test Items, Time, Response Style (Tests)
Mason, Rihana S.; Bass, Lori A. – Early Education and Development, 2020
Research Findings: Research suggests children from low-income environments have vocabularies that differ from those of their higher-income peers. They may have basic knowledge of many words of which children from higher-income environments have acquired sub- or supra-ordinate knowledge. This study sought to determine if children from low-income…
Descriptors: Receptive Language, Disadvantaged Environment, Vocabulary Development, Standardized Tests
Caliskan, Nihat; Kuzu, Okan; Kuzu, Yasemin – Journal of Education and Learning, 2017
The purpose of this study was to develop a rating scale that can be used to evaluate behavior patterns of the organization people pattern of preservice teachers (PSTs). By reviewing the related literature on people patterns, a preliminary 38-item scale with a five-point Likert-type format was prepared. The number of items was reduced to 29 after…
Descriptors: Foreign Countries, Behavior Rating Scales, Test Construction, Preservice Teachers
Lee, Mi Yeon; Lim, Woong – International Electronic Journal of Mathematics Education, 2020
This study investigates patterns exhibited by pre-service teachers (PSTs) while practicing feedback in response to students' solutions on a procedure-based mathematics assessment. First, we developed an analytical framework for understanding mathematics PSTs' written feedback. Second, we looked into how a learning module on a multimedia platform…
Descriptors: Preservice Teachers, Feedback (Response), Test Items, Behavior Patterns