Showing all 7 results
Peer reviewed
Wise, Steven L. – Applied Measurement in Education, 2020
In achievement testing there is typically a practical requirement that the set of items administered should be representative of some target content domain. This is accomplished by establishing test blueprints specifying the content constraints to be followed when selecting the items for a test. Sometimes, however, students give disengaged…
Descriptors: Test Items, Test Content, Achievement Tests, Guessing (Tests)
Peer reviewed
Abulela, Mohammed A. A.; Rios, Joseph A. – Applied Measurement in Education, 2022
When there are no personal consequences associated with test performance for examinees, rapid guessing (RG) is a concern and can differ between subgroups. To date, the impact of differential RG on item-level measurement invariance has received minimal attention. To that end, a simulation study was conducted to examine the robustness of the…
Descriptors: Comparative Analysis, Robustness (Statistics), Nonparametric Statistics, Item Analysis
Peer reviewed
Cohen, Dale J.; Zhang, Jin; Wothke, Werner – Applied Measurement in Education, 2019
Construct-irrelevant cognitive complexity of some items in the statewide grade-level assessments may impose performance barriers for students with disabilities who are ineligible for alternate assessments based on alternate achievement standards. This has spurred research into whether items can be modified to reduce complexity without affecting…
Descriptors: Test Items, Accessibility (for Disabled), Students with Disabilities, Low Achievement
Peer reviewed
Traynor, Anne – Applied Measurement in Education, 2017
It has long been argued that U.S. states' differential performance on nationwide assessments may reflect differences in students' opportunity to learn the tested content that is primarily due to variation in curricular content standards, rather than in instructional quality or educational investment. To quantify the effect of differences in…
Descriptors: Test Items, Difficulty Level, State Standards, Academic Standards
Peer reviewed
Tippets, Elizabeth; Benson, Jeri – Applied Measurement in Education, 1989
The effect of 3 item arrangements (easy to hard, hard to easy, and random) on test anxiety was studied using an actual classroom examination administered to 126 graduate students (36 males and 90 females) under power conditions. Results indicate that anxiety level and test item arrangement are related. (TJH)
Descriptors: Achievement Tests, Difficulty Level, Graduate Students, Higher Education
Peer reviewed
Green, Donald Ross; And Others – Applied Measurement in Education, 1989
Potential benefits of using item response theory in test construction are evaluated using the experience and evidence accumulated during nine years of using a three-parameter model in the development of major achievement batteries. Topics addressed include error of measurement, test equating, item bias, and item difficulty. (TJH)
Descriptors: Achievement Tests, Computer Assisted Testing, Difficulty Level, Equated Scores
Peer reviewed
Barnes, Laura L. B.; Wise, Steven L. – Applied Measurement in Education, 1991
One-parameter and three-parameter item response theory (IRT) model estimates were compared with estimates obtained from two modified one-parameter models that incorporated a constant nonzero guessing parameter. Using small-sample simulation data (50, 100, and 200 simulated examinees), modified one-parameter models were most effective in estimating…
Descriptors: Ability, Achievement Tests, Comparative Analysis, Computer Simulation