Steven L. Wise; Megan R. Kuhfeld; Marlit Annalena Lindner – Applied Measurement in Education, 2024
When student achievement is assessed, we seek to elicit a student's maximum performance -- a goal requiring the assumption that the student is fully engaged. Otherwise, to the extent that disengagement occurs, test performance is likely to suffer. Effectively managing test-taking disengagement requires an understanding of the testing conditions…
Descriptors: Testing, Attention Span, Learner Engagement, Time Factors (Learning)
Perkins, Beth A.; Pastor, Dena A.; Finney, Sara J. – Applied Measurement in Education, 2021
When tests are low stakes for examinees, meaning there are little to no personal consequences associated with test results, some examinees put little effort into their performance. To understand the causes and consequences of diminished effort, researchers correlate test-taking effort with other variables, such as test-taking emotions and test…
Descriptors: Response Style (Tests), Psychological Patterns, Testing, Differences
Rutkowski, David; Rutkowski, Leslie; Valdivia, Dubravka Svetina; Canbolat, Yusuf; Underhill, Stephanie – Applied Measurement in Education, 2023
Several states in the US have removed time limits on their state assessments. In Indiana, where this study takes place, the state assessment is both untimed during the testing window and allows unlimited breaks during the testing session. Using grade 3 and 8 math and English state assessment data, in this paper we focus on time used for testing…
Descriptors: Testing, Time, Intervals, Academic Achievement
Rios, Joseph A. – Applied Measurement in Education, 2022
Testing programs are confronted with the decision of whether to report individual scores for examinees that have engaged in rapid guessing (RG). As noted by the "Standards for Educational and Psychological Testing," this decision should be based on a documented criterion that determines score exclusion. To this end, a number of heuristic…
Descriptors: Testing, Guessing (Tests), Academic Ability, Scores
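The exclusion decision the abstract describes can be sketched as a simple criterion check on the proportion of rapid-guessed responses. This is a minimal illustration, not the article's method: the function name and the 10% cutoff are assumptions chosen for the example, and operational programs would document their own criterion.

```python
def flag_for_exclusion(rapid_flags, criterion=0.10):
    """Flag an examinee's score for exclusion when the proportion of
    rapid-guessed item responses exceeds the documented criterion.

    rapid_flags: list of booleans, one per item (True = rapid guess).
    criterion: maximum tolerated proportion of rapid guesses (assumed 10%).
    """
    proportion_rapid = sum(rapid_flags) / len(rapid_flags)
    return proportion_rapid > criterion

# 3 rapid guesses out of 20 items is 15%, exceeding the 10% criterion.
print(flag_for_exclusion([True] * 3 + [False] * 17))  # True
```

In practice the criterion itself is the contested quantity; the sketch only shows where such a heuristic sits in the scoring pipeline.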
Haladyna, Thomas M.; Rodriguez, Michael C.; Stevens, Craig – Applied Measurement in Education, 2019
The evidence is mounting regarding the guidance to employ more three-option multiple-choice items. From theoretical analyses, empirical results, and practical considerations, such items are of equal or higher quality than four- or five-option items, and more items can be administered to improve content coverage. This study looks at 58 tests,…
Descriptors: Multiple Choice Tests, Test Items, Testing Problems, Guessing (Tests)
Lozano, José H.; Revuelta, Javier – Applied Measurement in Education, 2021
The present study proposes a Bayesian approach for estimating and testing the operation-specific learning model, a variant of the linear logistic test model that allows for the measurement of the learning that occurs during a test as a result of the repeated use of the operations involved in the items. The advantages of using a Bayesian framework…
Descriptors: Bayesian Statistics, Computation, Learning, Testing
Yiling Cheng; I-Chien Chen; Barbara Schneider; Mark Reckase; Joseph Krajcik – Applied Measurement in Education, 2024
The current study expands on previous research on gender differences and similarities in science test scores. Using three different approaches -- differential item functioning, differential distractor functioning, and decision tree analysis -- we examine a high school science assessment administered to 3,849 10th-12th graders, of whom 2,021 are…
Descriptors: Gender Differences, Science Achievement, Responses, Testing
Barry, Carol L.; Finney, Sara J. – Applied Measurement in Education, 2016
We examined change in test-taking effort over the course of a three-hour, five test, low-stakes testing session. Latent growth modeling results indicated that change in test-taking effort was well-represented by a piecewise growth form, wherein effort increased from test 1 to test 4 and then decreased from test 4 to test 5. There was significant…
Descriptors: Change, Motivation, College Students, Models
Kopriva, Rebecca J. – Applied Measurement in Education, 2014
In this commentary, Rebecca Kopriva examines the articles in this special issue by drawing on her experience from three series of investigations examining how English language learners (ELLs) and other students perceive what test items ask and how they can successfully represent what they know. The first series examined the effect of different…
Descriptors: English Language Learners, Test Items, Educational Assessment, Access to Education
Hayes, Heather; Embretson, Susan E. – Applied Measurement in Education, 2013
Online and on-demand tests are increasingly used in assessment. Although the main focus has been cheating and test security (e.g., Selwyn, 2008) the cross-setting equivalence of scores as a function of contrasting test conditions is also an issue that warrants attention. In this study, the impact of environmental and cognitive distractions, as…
Descriptors: College Students, Computer Assisted Testing, Problem Solving, Physical Environment
Setzer, J. Carl; Wise, Steven L.; van den Heuvel, Jill R.; Ling, Guangming – Applied Measurement in Education, 2013
Assessment results collected under low-stakes testing situations are subject to effects of low examinee effort. The use of computer-based testing allows researchers to develop new ways of measuring examinee effort, particularly using response times. At the item level, responses can be classified as exhibiting either rapid-guessing behavior or…
Descriptors: Testing, Guessing (Tests), Reaction Time, Test Items
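The item-level classification described above (rapid-guessing vs. solution behavior, based on response times) is often summarized as the proportion of items answered with solution behavior. The sketch below assumes a simple per-item time threshold; the threshold values and function name are illustrative, not taken from the article.

```python
def response_time_effort(response_times, thresholds):
    """Proportion of items answered with solution behavior, where a
    response at or above its item's time threshold (in seconds) counts
    as solution behavior and anything faster counts as a rapid guess.
    """
    solution = [t >= th for t, th in zip(response_times, thresholds)]
    return sum(solution) / len(solution)

# Two of four responses meet their item's threshold.
print(response_time_effort([2.1, 14.0, 30.5, 1.0], [5, 5, 10, 5]))  # 0.5
```

Real applications derive each item's threshold from its response-time distribution rather than fixing it in advance.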
Çil, Emine – Applied Measurement in Education, 2015
Taking a test generally improves the retention of the material tested. This is a phenomenon commonly referred to as testing effect. The present research investigated whether two-tier diagnostic tests promoted student teachers' conceptual understanding of variables in conducting scientific experiments, which is a scientific process skill. In this…
Descriptors: Diagnostic Tests, Science Experiments, Comprehension, Science Process Skills
Kong, Xiaojing; Davis, Laurie Laughlin; McBride, Yuanyuan; Morrison, Kristin – Applied Measurement in Education, 2018
Item response time data were used in investigating the differences in student test-taking behavior between two device conditions: computer and tablet. Analyses were conducted to address the questions of whether or not the device condition had a differential impact on rapid guessing and solution behaviors (with response time effort used as an…
Descriptors: Educational Technology, Technology Uses in Education, Computers, Handheld Devices
Rutkowski, Leslie – Applied Measurement in Education, 2014
Large-scale assessment programs such as the National Assessment of Educational Progress (NAEP), Trends in International Mathematics and Science Study (TIMSS), and Programme for International Student Assessment (PISA) use a sophisticated assessment administration design called matrix sampling that minimizes the testing burden on individual…
Descriptors: Measurement, Testing, Item Sampling, Computation
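The matrix-sampling idea mentioned above can be illustrated with a small booklet design in which item blocks are rotated so that no examinee takes every block. This sketch assumes a simple pairwise rotation (every pair of blocks appears together in exactly one booklet); operational designs like those in NAEP or TIMSS are considerably more elaborate.

```python
import itertools

def booklet_design(blocks, blocks_per_booklet=2):
    """Assemble booklets so every combination of item blocks of the
    given size appears exactly once; each examinee is administered
    one booklet, i.e. only a subset of the full item pool.
    """
    return list(itertools.combinations(blocks, blocks_per_booklet))

# Four item blocks yield six two-block booklets; each block
# appears in three of them, spreading the testing burden.
booklets = booklet_design(["A", "B", "C", "D"])
print(booklets)
```

Population-level estimates are then assembled across booklets, which is what makes the minimized individual testing burden workable.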
Meyers, Jason L.; Miller, G. Edward; Way, Walter D. – Applied Measurement in Education, 2009
In operational testing programs using item response theory (IRT), item parameter invariance is threatened when an item appears in a different location on the live test than it did when it was field tested. This study utilizes data from a large state's assessments to model change in Rasch item difficulty (RID) as a function of item position change,…
Descriptors: Test Items, Test Content, Testing Programs, Simulation
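Modeling change in Rasch item difficulty as a function of item position change, as the abstract describes, amounts at its simplest to a regression of the difficulty shift on the position shift. The least-squares sketch below uses synthetic numbers purely for illustration; the data, slope, and function name are assumptions, not results from the study.

```python
def fit_drift(position_changes, rid_changes):
    """Ordinary least-squares slope and intercept for the change in
    Rasch item difficulty (RID) regressed on item position change
    (field test position minus live test position)."""
    n = len(position_changes)
    mx = sum(position_changes) / n
    my = sum(rid_changes) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(position_changes, rid_changes))
    sxx = sum((x - mx) ** 2 for x in position_changes)
    slope = sxy / sxx
    return slope, my - slope * mx

# Synthetic data: items moved later tend to look slightly harder.
slope, intercept = fit_drift([-10, -5, 0, 5, 10],
                             [-0.05, -0.02, 0.0, 0.03, 0.06])
print(slope, intercept)
```

A positive slope under this coding would indicate position-driven drift, the threat to parameter invariance that the article examines.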