Publication Date
In 2025 (0)
Since 2024 (9)
Descriptor
Item Response Theory (5)
Computer Assisted Testing (4)
Test Items (4)
High School Students (3)
Adaptive Testing (2)
Educational Testing (2)
Gender Differences (2)
Grade 11 (2)
Grade 8 (2)
Mathematics Tests (2)
Reaction Time (2)
Source
Applied Measurement in Education (9)
Author
Barbara Schneider (1)
Ben Backes (1)
Beyza Aksu-Dunya (1)
Brian F. French (1)
Christine E. DeMars (1)
I-Chien Chen (1)
James Cowan (1)
Joseph Krajcik (1)
Mark Reckase (1)
Marlit Annalena Lindner (1)
Megan R. Kuhfeld (1)
Publication Type
Journal Articles (9)
Reports - Research (9)
Tests/Questionnaires (1)
Education Level
Secondary Education (5)
High Schools (3)
Elementary Education (2)
Grade 11 (2)
Grade 8 (2)
Junior High Schools (2)
Middle Schools (2)
Early Childhood Education (1)
Elementary Secondary Education (1)
Grade 10 (1)
Grade 12 (1)
Location
Japan (1)
Massachusetts (1)
Virginia (1)
Assessments and Surveys
Massachusetts Comprehensive… (1)
National Assessment of… (1)
Stefanie A. Wind; Beyza Aksu-Dunya – Applied Measurement in Education, 2024
Careless responding is a pervasive concern in research using affective surveys. Although researchers have considered various methods for identifying careless responses, few studies have examined the utility of these methods in the context of computer adaptive testing (CAT) for affective scales. Using a simulation study informed by recent…
Descriptors: Response Style (Tests), Computer Assisted Testing, Adaptive Testing, Affective Measures
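The excerpt does not describe the authors' detection procedure. As a point of reference, the following is a minimal Python sketch of two indices commonly used to flag careless responding on Likert-type scales, the longstring index and intra-individual response variability (IRV); the function names and cutoffs are illustrative assumptions, not the study's method.

```python
import numpy as np

def longstring(responses):
    """Length of the longest run of identical consecutive responses."""
    longest = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def irv(responses):
    """Intra-individual response variability: SD of a respondent's item scores."""
    return float(np.std(responses, ddof=1))

# Illustrative use: flag respondents with long identical runs or very flat profiles.
respondent = [4, 4, 4, 4, 4, 4, 4, 3, 4, 4]                      # 5-point Likert responses
flagged = longstring(respondent) >= 6 or irv(respondent) < 0.5   # cutoffs are arbitrary here
print(longstring(respondent), round(irv(respondent), 2), flagged)
```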
Sarah Alahmadi; Christine E. DeMars – Applied Measurement in Education, 2024
Large-scale educational assessments are sometimes considered low-stakes, raising the possibility of confounding true performance with low motivation. These concerns are amplified in remote testing conditions. To remove the effects of low-effort responding observed in remote low-stakes testing, several motivation filtering methods…
Descriptors: Multiple Choice Tests, Item Response Theory, College Students, Scores
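Motivation filtering is named only in general terms in the excerpt. The sketch below shows one common response-time-based variant under assumed values: responses faster than an item-level threshold count as rapid guesses, response-time effort (RTE) is the proportion of non-rapid responses, and examinees below an RTE cutoff are removed before scoring. The per-item thresholds and the 0.90 cutoff are illustrative, not the authors' settings.

```python
# Minimal sketch of response-time-based motivation filtering (not the authors' exact method).
# rt[i][j]: response time of examinee i on item j; thresholds[j]: rapid-guess threshold for item j.

def response_time_effort(rt_row, thresholds):
    """Proportion of items answered no faster than the rapid-guess threshold."""
    solution_behavior = [t >= thr for t, thr in zip(rt_row, thresholds)]
    return sum(solution_behavior) / len(solution_behavior)

def filter_unmotivated(rt, thresholds, cutoff=0.90):
    """Return indices of examinees whose response-time effort meets the cutoff."""
    return [i for i, row in enumerate(rt) if response_time_effort(row, thresholds) >= cutoff]

rt = [
    [35.2, 48.0, 22.7, 60.1],   # engaged examinee
    [2.1, 3.0, 41.5, 2.8],      # mostly rapid guesses
]
thresholds = [5.0, 5.0, 5.0, 5.0]          # seconds; illustrative per-item thresholds
print(filter_unmotivated(rt, thresholds))  # -> [0]
```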
Ben Backes; James Cowan – Applied Measurement in Education, 2024
We investigate two research questions using a recent statewide transition from paper to computer-based testing: first, the extent to which test mode effects found in prior studies can be eliminated; and second, the degree to which online and paper assessments offer different information about underlying student ability. We first find very small…
Descriptors: Computer Assisted Testing, Test Format, Differences, Academic Achievement
Steven L. Wise; Megan R. Kuhfeld; Marlit Annalena Lindner – Applied Measurement in Education, 2024
When student achievement is assessed, we seek to elicit a student's maximum performance -- a goal requiring the assumption that the student is fully engaged. Otherwise, to the extent that disengagement occurs, test performance is likely to suffer. Effectively managing test-taking disengagement requires an understanding of the testing conditions…
Descriptors: Testing, Attention Span, Learner Engagement, Time Factors (Learning)
Youn Seon Lim – Applied Measurement in Education, 2024
Educational testing has been criticized for its disconnect from modern cognitive science and its limited role in improving instruction and student learning. Reform efforts emphasize the need for testing to provide specific diagnostic insights into students' skills and knowledge. Cognitive diagnosis (CD), an emerging paradigm in educational…
Descriptors: Q Methodology, Matrices, Models, Design
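For readers unfamiliar with the Q-matrix notation referenced in the descriptors, the sketch below computes ideal item responses under the DINA model, one common cognitive diagnosis model: an examinee is expected to answer an item correctly only if they master every attribute the Q-matrix assigns to it. The small Q-matrix and attribute profile are invented for illustration and are not taken from the study.

```python
import numpy as np

# Q-matrix: rows = items, columns = attributes; 1 means the item requires the attribute.
Q = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [0, 1, 1],
])

def dina_ideal_response(alpha, Q):
    """Ideal (error-free) responses under the DINA model for attribute profile alpha."""
    # An item is answered correctly only if all of its required attributes are mastered.
    return (Q @ alpha == Q.sum(axis=1)).astype(int)

alpha = np.array([1, 1, 0])           # masters attributes 1 and 2, not 3
print(dina_ideal_response(alpha, Q))  # -> [1 1 0]
```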
Traditional vs Intersectional DIF Analysis: Considerations and a Comparison Using State Testing Data
Tony Albano; Brian F. French; Thao Thu Vo – Applied Measurement in Education, 2024
Recent research has demonstrated an intersectional approach to the study of differential item functioning (DIF). This approach expands DIF to account for the interactions between what have traditionally been treated as separate grouping variables. In this paper, we compare traditional and intersectional DIF analyses using data from a state testing…
Descriptors: Test Items, Item Analysis, Data Use, Standardized Tests
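The paper's own analysis is not reproduced in the excerpt. As a rough illustration of the contrast it draws, the sketch below builds grouping variables both ways, a traditional grouping on a single variable and an intersectional grouping that crosses two variables, and feeds either into the same Mantel-Haenszel DIF index; the data, group labels, and reference category are invented.

```python
from collections import defaultdict

def mh_odds_ratio(records):
    """Mantel-Haenszel common odds ratio for one item.

    records: iterable of (group, stratum, correct), with group in {"ref", "focal"}
    and stratum an ability level used for matching.
    """
    by_stratum = defaultdict(list)
    for group, stratum, correct in records:
        by_stratum[stratum].append((group, correct))
    num = den = 0.0
    for rows in by_stratum.values():
        n = len(rows)
        ref_right = sum(1 for g, c in rows if g == "ref" and c)
        ref_wrong = sum(1 for g, c in rows if g == "ref" and not c)
        foc_right = sum(1 for g, c in rows if g == "focal" and c)
        foc_wrong = sum(1 for g, c in rows if g == "focal" and not c)
        num += ref_right * foc_wrong / n
        den += ref_wrong * foc_right / n
    return num / den

def traditional_group(gender, race):
    """Traditional DIF grouping: one variable at a time (race is ignored here)."""
    return "ref" if gender == "M" else "focal"

def intersectional_group(gender, race, reference=("M", "White")):
    """Intersectional DIF grouping: the crossed categories define the groups."""
    return "ref" if (gender, race) == reference else "focal"

# Toy records: (gender, race, ability stratum, item correct).
examinees = [
    ("M", "White", 1, 1), ("F", "White", 1, 0), ("M", "Black", 1, 1),
    ("F", "Black", 1, 0), ("M", "White", 0, 0), ("F", "Black", 0, 1),
]
records = [(intersectional_group(g, r), s, c) for g, r, s, c in examinees]
print(mh_odds_ratio(records))  # an odds ratio near 1 suggests little DIF in this toy data
```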
Yi-Hsuan Lee; Yue Jia – Applied Measurement in Education, 2024
Test-taking experience is a consequence of the interaction between students and assessment properties. We define a new notion, rapid-pacing behavior, to reflect two types of test-taking experience -- disengagement and speededness. To identify rapid-pacing behavior, we extend existing methods to develop response-time thresholds for individual items…
Descriptors: Adaptive Testing, Reaction Time, Item Response Theory, Test Format
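The authors' extension of threshold methods is not detailed in the excerpt. The sketch below shows one widely used baseline, a normative threshold set at a fixed fraction of each item's median response time (10% here, an assumption), with a floor so the threshold never becomes implausibly small.

```python
import statistics

def normative_thresholds(rt_by_item, fraction=0.10, floor=1.0):
    """Per-item rapid-response thresholds: a fraction of the median RT, with a lower floor (seconds)."""
    return [max(floor, fraction * statistics.median(times)) for times in rt_by_item]

def rapid_pacing_flags(rt_row, thresholds):
    """Flag each response whose time falls below its item's threshold."""
    return [t < thr for t, thr in zip(rt_row, thresholds)]

rt_by_item = [
    [30.0, 42.5, 28.0, 55.0],   # item 1: response times across examinees
    [12.0, 15.5, 9.0, 20.0],    # item 2
]
thresholds = normative_thresholds(rt_by_item)       # -> [3.625, 1.375]
print(rapid_pacing_flags([2.0, 14.0], thresholds))  # -> [True, False]
```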
Yiling Cheng; I-Chien Chen; Barbara Schneider; Mark Reckase; Joseph Krajcik – Applied Measurement in Education, 2024
The current study expands on previous research on gender differences and similarities in science test scores. Using three different approaches -- differential item functioning, differential distractor functioning, and decision tree analysis -- we examine a high school science assessment administered to 3,849 10th-12th graders, of whom 2,021 are…
Descriptors: Gender Differences, Science Achievement, Responses, Testing
Takahiro Terao – Applied Measurement in Education, 2024
This study aimed to compare item characteristics and response times across stimulus conditions in computer-delivered listening tests. The listening materials had three variants: regular videos, frame-by-frame videos, and audio only with no visuals. Participants were 228 Japanese high school students who were asked to complete one of nine…
Descriptors: Computer Assisted Testing, Audiovisual Aids, Reaction Time, High School Students