Publication Date
In 2025 | 0 |
Since 2024 | 1 |
Since 2021 (last 5 years) | 1 |
Since 2016 (last 10 years) | 6 |
Since 2006 (last 20 years) | 8 |
Descriptor
Models | 10 |
Test Items | 10 |
Item Response Theory | 4 |
Scores | 3 |
Computer Assisted Testing | 2 |
Diagnostic Tests | 2 |
Educational Assessment | 2 |
Statistical Analysis | 2 |
Test Validity | 2 |
Academic Standards | 1 |
Achievement Tests | 1 |
Source
Educational Measurement:… | 10 |
Author
Gierl, Mark J. | 3 |
Lai, Hollis | 2 |
Ames, Allison | 1 |
Anderson, Daniel | 1 |
Bolt, Daniel M. | 1 |
Harring, Jeffrey R. | 1 |
Hiscox, Michael D. | 1 |
Irvin, P. Shawn | 1 |
Johnson, Tessa L. | 1 |
Leventhal, Brian | 1 |
Penfield, Randall David | 1 |
Publication Type
Journal Articles | 10 |
Reports - Descriptive | 5 |
Reports - Research | 3 |
Guides - Non-Classroom | 1 |
Reports - Evaluative | 1 |
Location
Oregon | 1 |
Liao, Xiangyi; Bolt, Daniel M. – Educational Measurement: Issues and Practice, 2024
Traditional approaches to the modeling of multiple-choice item response data (e.g., 3PL, 4PL models) emphasize slips and guesses as random events. In this paper, an item response model is presented that characterizes both disjunctively interacting guessing and conjunctively interacting slipping processes as proficiency-related phenomena. We show…
Descriptors: Item Response Theory, Test Items, Error Correction, Guessing (Tests)
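The 3PL and 4PL models named in this abstract have standard closed forms: the 3PL adds a lower asymptote for guessing, and the 4PL adds an upper asymptote that allows high-proficiency examinees to "slip." A minimal sketch (all parameter values in the examples are hypothetical, and this is the conventional parameterization, not the proficiency-related model the paper itself proposes):

```python
import math

def p_correct_3pl(theta, a, b, c):
    """3PL: P(correct) given proficiency theta, discrimination a,
    difficulty b, and lower asymptote c (random guessing)."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def p_correct_4pl(theta, a, b, c, d):
    """4PL adds an upper asymptote d < 1, so even very able
    examinees answer incorrectly ('slip') with probability >= 1 - d."""
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))

# At theta = b, the logistic term is 0.5, so the 3PL gives c + (1 - c)/2.
p3 = p_correct_3pl(0.0, 1.0, 0.0, 0.25)  # 0.625
p4 = p_correct_4pl(0.0, 1.0, 0.0, 0.25, 0.95)  # 0.60
```

Note that in both models the slip and guess behavior is governed by fixed asymptotes, i.e., treated as random events independent of proficiency, which is the modeling assumption the paper revisits.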
Leventhal, Brian; Ames, Allison – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Brian Leventhal and Dr. Allison Ames provide an overview of "Monte Carlo simulation studies" (MCSS) in "item response theory" (IRT). MCSS are utilized for a variety of reasons, one of the most compelling being that they can be used when analytic solutions are impractical or nonexistent because…
Descriptors: Item Response Theory, Monte Carlo Methods, Simulation, Test Items
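A Monte Carlo simulation study in IRT typically generates many synthetic response data sets from known item and person parameters, then checks how well an estimation method recovers them. A minimal data-generation sketch under a 2PL model (item parameters and sample sizes below are hypothetical illustrations, not values from the module):

```python
import math
import random

def simulate_2pl(thetas, items, seed=0):
    """Simulate dichotomous item responses under a 2PL IRT model.
    `items` is a list of (a, b) discrimination/difficulty pairs.
    Each response is a Bernoulli draw with the model-implied probability."""
    rng = random.Random(seed)
    data = []
    for theta in thetas:
        probs = [1.0 / (1.0 + math.exp(-a * (theta - b))) for a, b in items]
        data.append([1 if rng.random() < p else 0 for p in probs])
    return data

# One replication: 100 examinees drawn from N(0, 1), five items.
rng = random.Random(123)
thetas = [rng.gauss(0.0, 1.0) for _ in range(100)]
responses = simulate_2pl(thetas, [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5),
                                  (1.0, 1.0), (0.6, -0.5)])
```

In a full study this generation step would sit inside a loop over replications and conditions (sample size, test length, parameter distributions), with estimates compared back to the generating values.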
Anderson, Daniel; Rowley, Brock; Stegenga, Sondra; Irvin, P. Shawn; Rosenberg, Joshua M. – Educational Measurement: Issues and Practice, 2020
Validity evidence based on test content is critical to meaningful interpretation of test scores. Within high-stakes testing and accountability frameworks, content-related validity evidence is typically gathered via alignment studies, with panels of experts providing qualitative judgments on the degree to which test items align with the…
Descriptors: Content Validity, Artificial Intelligence, Test Items, Vocabulary
Harring, Jeffrey R.; Johnson, Tessa L. – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Jeffrey Harring and Ms. Tessa Johnson introduce the linear mixed effects (LME) model as a flexible general framework for simultaneously modeling continuous repeated measures data with a scientifically defensible function that adequately summarizes both individual change as well as the average response. The module…
Descriptors: Educational Assessment, Data Analysis, Longitudinal Studies, Case Studies
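The LME growth model described above combines fixed effects (the average intercept and slope) with person-specific random effects. A minimal simulation sketch of that structure, y_it = (beta0 + u0_i) + (beta1 + u1_i)·t + e_it (all parameter values are hypothetical):

```python
import random

def simulate_lme_growth(n_people, times, beta0=50.0, beta1=2.0,
                        sd_u0=5.0, sd_u1=0.5, sd_e=2.0, seed=1):
    """Simulate continuous repeated measures under a linear mixed
    effects growth model: each person gets a random intercept u0 and
    random slope u1 around the average trajectory, plus residual noise."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_people):
        u0 = rng.gauss(0.0, sd_u0)   # person-specific intercept deviation
        u1 = rng.gauss(0.0, sd_u1)   # person-specific slope deviation
        traj = [(beta0 + u0) + (beta1 + u1) * t + rng.gauss(0.0, sd_e)
                for t in times]
        data.append(traj)
    return data

growth = simulate_lme_growth(20, [0, 1, 2, 3])
```

The random intercepts and slopes are what let the model summarize both individual change and the average response at once, which is the flexibility the module emphasizes.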
Rubright, Jonathan D. – Educational Measurement: Issues and Practice, 2018
Performance assessments, scenario-based tasks, and other groups of items carry a risk of violating the local item independence assumption made by unidimensional item response theory (IRT) models. Previous studies have identified negative impacts of ignoring such violations, most notably inflated reliability estimates. Still, the influence of this…
Descriptors: Performance Based Assessment, Item Response Theory, Models, Test Reliability
Penfield, Randall David – Educational Measurement: Issues and Practice, 2014
A polytomous item is one for which the responses are scored according to three or more categories. Given the increasing use of polytomous items in assessment practices, item response theory (IRT) models specialized for polytomous items are becoming increasingly common. The purpose of this ITEMS module is to provide an accessible overview of…
Descriptors: Item Response Theory, Test Items, Models, Equations (Mathematics)
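One widely used polytomous IRT model of the kind this module surveys is Samejima's graded response model, in which ordered thresholds define cumulative probabilities and category probabilities fall out as differences. A minimal sketch (discrimination and threshold values are hypothetical):

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Graded response model: probabilities of the K+1 ordered score
    categories given proficiency theta, discrimination a, and K
    strictly increasing threshold parameters."""
    def p_star(b):
        # Cumulative probability of scoring in category k or higher.
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))
    cum = [1.0] + [p_star(b) for b in thresholds] + [0.0]
    # Category probability = adjacent difference of cumulative curves.
    return [cum[k] - cum[k + 1] for k in range(len(thresholds) + 1)]

# A 4-category item (three thresholds) evaluated at theta = 0.
probs = grm_category_probs(0.0, 1.0, [-1.0, 0.0, 1.0])
```

Provided the thresholds are ordered, the differences are guaranteed non-negative and sum to one, which is what makes this construction a proper probability model over the score categories.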
Gierl, Mark J.; Lai, Hollis – Educational Measurement: Issues and Practice, 2013
Changes to the design and development of our educational assessments are resulting in the unprecedented demand for a large and continuous supply of content-specific test items. One way to address this growing demand is with automatic item generation (AIG). AIG is the process of using item models to generate test items with the aid of computer…
Descriptors: Educational Assessment, Test Items, Automation, Computer Assisted Testing
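The core idea of AIG, as the abstract describes it, is that a single item model (a stem template plus constrained variable values) can generate many concrete test items by machine. A toy sketch of that process (the stem, values, and constraint below are hypothetical illustrations, not from Gierl and Lai's item models):

```python
import itertools

# An item model: a stem template with placeholder variables.
STEM = "A train travels {distance} km in {hours} hours. What is its average speed in km/h?"

def generate_items(distances, hours_list):
    """Instantiate the item model over all combinations of variable
    values, keeping only combinations that satisfy a simple constraint
    (here: the answer must be a whole number)."""
    items = []
    for d, h in itertools.product(distances, hours_list):
        if d % h == 0:
            items.append({"stem": STEM.format(distance=d, hours=h),
                          "key": d // h})
    return items

bank = generate_items([100, 120], [2, 4])  # four generated items
```

Operational AIG systems add much more machinery (cognitive models, distractor generation, quality control), but the template-plus-constraints loop is the generative core.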
Gierl, Mark J.; Lai, Hollis – Educational Measurement: Issues and Practice, 2016
Testing organizations need large numbers of high-quality items due to the proliferation of alternative test administration methods and modern test designs. But the current demand for items far exceeds the supply. Test items, as they are currently written, evoke a process that is both time-consuming and expensive because each item is written,…
Descriptors: Test Items, Test Construction, Psychometrics, Models
Gierl, Mark J. – Educational Measurement: Issues and Practice, 2005
In this paper I describe and illustrate the Roussos-Stout (1996) multidimensionality-based DIF analysis paradigm, with emphasis on its implication for the selection of a matching and studied subtest for DIF analyses. Standard DIF practice encourages an exploratory search for matching subtest items based on purely statistical criteria, such as a…
Descriptors: Models, Test Items, Test Bias, Statistical Analysis
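Standard DIF practice, against which the Roussos-Stout paradigm is contrasted, matches examinees on a subtest score and then compares group performance within each score level, often via the Mantel-Haenszel common odds ratio. A minimal sketch of that statistic (the count layout and values are hypothetical):

```python
def mantel_haenszel_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio across matching-score strata.
    Each stratum is (ref_correct, ref_incorrect, focal_correct,
    focal_incorrect) at one matching-subtest score level.
    Values near 1.0 suggest no DIF on the studied item."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Two score levels with identical group performance: no DIF.
ratio = mantel_haenszel_odds_ratio([(10, 10, 10, 10), (20, 5, 20, 5)])
```

The Roussos-Stout critique targets the step before this computation: which items go into the matching subtest. Choosing them on purely statistical grounds can mask or manufacture DIF when the test is multidimensional.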

Wilson, Sandra Meachan; Hiscox, Michael D. – Educational Measurement: Issues and Practice, 1984
This article presents a model that local school districts can use to reanalyze standardized test results and obtain a more valid assessment of local learning objectives. The reanalysis can be used to identify strengths and weaknesses of existing programs as well as of individual students. (EGS)
Descriptors: Educational Objectives, Item Analysis, Models, School Districts