Showing 1 to 15 of 23 results
Peer reviewed
Steven L. Wise; Megan R. Kuhfeld; Marlit Annalena Lindner – Applied Measurement in Education, 2024
When student achievement is assessed, we seek to elicit a student's maximum performance -- a goal requiring the assumption that the student is fully engaged. Otherwise, to the extent that disengagement occurs, test performance is likely to suffer. Effectively managing test-taking disengagement requires an understanding of the testing conditions…
Descriptors: Testing, Attention Span, Learner Engagement, Time Factors (Learning)
Peer reviewed
Perkins, Beth A.; Pastor, Dena A.; Finney, Sara J. – Applied Measurement in Education, 2021
When tests are low stakes for examinees, meaning there are little to no personal consequences associated with test results, some examinees put little effort into their performance. To understand the causes and consequences of diminished effort, researchers correlate test-taking effort with other variables, such as test-taking emotions and test…
Descriptors: Response Style (Tests), Psychological Patterns, Testing, Differences
Peer reviewed
Rutkowski, David; Rutkowski, Leslie; Valdivia, Dubravka Svetina; Canbolat, Yusuf; Underhill, Stephanie – Applied Measurement in Education, 2023
Several states in the US have removed time limits on their state assessments. In Indiana, where this study takes place, the state assessment is both untimed during the testing window and allows unlimited breaks during the testing session. Using grade 3 and 8 math and English state assessment data, in this paper we focus on time used for testing…
Descriptors: Testing, Time, Intervals, Academic Achievement
Peer reviewed
Rios, Joseph A. – Applied Measurement in Education, 2022
Testing programs are confronted with the decision of whether to report individual scores for examinees that have engaged in rapid guessing (RG). As noted by the "Standards for Educational and Psychological Testing," this decision should be based on a documented criterion that determines score exclusion. To this end, a number of heuristic…
Descriptors: Testing, Guessing (Tests), Academic Ability, Scores
Peer reviewed
Haladyna, Thomas M.; Rodriguez, Michael C.; Stevens, Craig – Applied Measurement in Education, 2019
Evidence is mounting in support of the guidance to employ three-option multiple-choice items. From theoretical analyses, empirical results, and practical considerations, such items are of equal or higher quality than four- or five-option items, and more items can be administered to improve content coverage. This study looks at 58 tests,…
Descriptors: Multiple Choice Tests, Test Items, Testing Problems, Guessing (Tests)
Peer reviewed
Lozano, José H.; Revuelta, Javier – Applied Measurement in Education, 2021
The present study proposes a Bayesian approach for estimating and testing the operation-specific learning model, a variant of the linear logistic test model that allows for the measurement of the learning that occurs during a test as a result of the repeated use of the operations involved in the items. The advantages of using a Bayesian framework…
Descriptors: Bayesian Statistics, Computation, Learning, Testing
Peer reviewed
Yiling Cheng; I-Chien Chen; Barbara Schneider; Mark Reckase; Joseph Krajcik – Applied Measurement in Education, 2024
The current study expands on previous research on gender differences and similarities in science test scores. Using three different approaches -- differential item functioning, differential distractor functioning, and decision tree analysis -- we examine a high school science assessment administered to 3,849 10th-12th graders, of whom 2,021 are…
Descriptors: Gender Differences, Science Achievement, Responses, Testing
Peer reviewed
Barry, Carol L.; Finney, Sara J. – Applied Measurement in Education, 2016
We examined change in test-taking effort over the course of a three-hour, five-test, low-stakes testing session. Latent growth modeling results indicated that change in test-taking effort was well represented by a piecewise growth form, wherein effort increased from test 1 to test 4 and then decreased from test 4 to test 5. There was significant…
Descriptors: Change, Motivation, College Students, Models
Peer reviewed
Kopriva, Rebecca J. – Applied Measurement in Education, 2014
In this commentary, Rebecca Kopriva examines the articles in this special issue by drawing on her experience from three series of investigations examining how English language learners (ELLs) and other students perceive what test items ask and how they can successfully represent what they know. The first series examined the effect of different…
Descriptors: English Language Learners, Test Items, Educational Assessment, Access to Education
Peer reviewed
Hayes, Heather; Embretson, Susan E. – Applied Measurement in Education, 2013
Online and on-demand tests are increasingly used in assessment. Although the main focus has been cheating and test security (e.g., Selwyn, 2008), the cross-setting equivalence of scores as a function of contrasting test conditions is also an issue that warrants attention. In this study, the impact of environmental and cognitive distractions, as…
Descriptors: College Students, Computer Assisted Testing, Problem Solving, Physical Environment
Peer reviewed
Setzer, J. Carl; Wise, Steven L.; van den Heuvel, Jill R.; Ling, Guangming – Applied Measurement in Education, 2013
Assessment results collected under low-stakes testing situations are subject to effects of low examinee effort. The use of computer-based testing allows researchers to develop new ways of measuring examinee effort, particularly using response times. At the item level, responses can be classified as exhibiting either rapid-guessing behavior or…
Descriptors: Testing, Guessing (Tests), Reaction Time, Test Items
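As a hypothetical illustration of the response-time approach described in this entry, the sketch below classifies item responses as rapid guesses when their response times fall below a cutoff. The threshold value, data, and function names are illustrative assumptions, not the authors' actual method.

```python
# Classify each item response as a rapid guess if its response time
# falls below a fixed threshold (in seconds). Real applications use
# item-specific thresholds; a single cutoff is assumed here for brevity.

def classify_rapid_guessing(response_times, threshold=3.0):
    """Return True for responses faster than the threshold."""
    return [t < threshold for t in response_times]

def response_time_effort(response_times, threshold=3.0):
    """Proportion of responses exhibiting solution behavior (non-rapid)."""
    flags = classify_rapid_guessing(response_times, threshold)
    return 1 - sum(flags) / len(flags)

times = [1.2, 14.5, 2.8, 30.1, 9.7]    # seconds per item (made-up data)
print(classify_rapid_guessing(times))  # [True, False, True, False, False]
print(response_time_effort(times))     # 0.6
```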
Peer reviewed
Çil, Emine – Applied Measurement in Education, 2015
Taking a test generally improves retention of the material tested, a phenomenon commonly referred to as the testing effect. The present research investigated whether two-tier diagnostic tests promoted student teachers' conceptual understanding of variables in conducting scientific experiments, a scientific process skill. In this…
Descriptors: Diagnostic Tests, Science Experiments, Comprehension, Science Process Skills
Peer reviewed
Kong, Xiaojing; Davis, Laurie Laughlin; McBride, Yuanyuan; Morrison, Kristin – Applied Measurement in Education, 2018
Item response time data were used in investigating the differences in student test-taking behavior between two device conditions: computer and tablet. Analyses were conducted to address the questions of whether or not the device condition had a differential impact on rapid guessing and solution behaviors (with response time effort used as an…
Descriptors: Educational Technology, Technology Uses in Education, Computers, Handheld Devices
Peer reviewed
Rutkowski, Leslie – Applied Measurement in Education, 2014
Large-scale assessment programs such as the National Assessment of Educational Progress (NAEP), Trends in International Mathematics and Science Study (TIMSS), and Programme for International Student Assessment (PISA) use a sophisticated assessment administration design called matrix sampling that minimizes the testing burden on individual…
Descriptors: Measurement, Testing, Item Sampling, Computation
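The matrix-sampling design this entry mentions can be sketched in miniature: the item pool is split into blocks, and each booklet contains only a subset of blocks, so no student answers every item. The block and booklet counts below are toy assumptions, not the design of NAEP, TIMSS, or PISA.

```python
# Miniature matrix-sampling sketch: 4 item blocks, 2 blocks per booklet.
# Enumerating all block pairs yields 6 booklets in which every pair of
# blocks co-occurs exactly once (a tiny balanced incomplete block design).
from itertools import combinations

def build_booklets(n_blocks=4, blocks_per_booklet=2):
    """Return every booklet as a tuple of block indices."""
    return list(combinations(range(n_blocks), blocks_per_booklet))

booklets = build_booklets()
print(len(booklets))  # 6
print(booklets[0])    # (0, 1)
```

Because each block appears in several booklets, item statistics can still be linked across the whole pool while each student's testing time stays short.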
Peer reviewed
Meyers, Jason L.; Miller, G. Edward; Way, Walter D. – Applied Measurement in Education, 2009
In operational testing programs using item response theory (IRT), item parameter invariance is threatened when an item appears in a different location on the live test than it did when it was field tested. This study utilizes data from a large state's assessments to model change in Rasch item difficulty (RID) as a function of item position change,…
Descriptors: Test Items, Test Content, Testing Programs, Simulation
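A minimal sketch of the kind of model this entry describes: treating the change in Rasch item difficulty (RID) as a linear function of how far an item moved between its field-test and operational positions. The data and the ordinary-least-squares fit are illustrative assumptions, not the study's actual estimates.

```python
# Fit RID change = a + b * (position change) by ordinary least squares,
# using only the standard library. Data below are made up for illustration.

def fit_line(x, y):
    """Return (intercept, slope) for y = a + b*x via least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

position_change = [-10, -5, 0, 5, 10]              # items moved earlier/later
rid_change = [-0.20, -0.10, 0.0, 0.10, 0.20]       # difficulty shift in logits
a, b = fit_line(position_change, rid_change)
print(round(a, 3), round(b, 3))  # 0.0 0.02
```

A positive slope here would mean items drift harder when they appear later than they did at field testing, which is the kind of position effect the study models.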