Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 2
Since 2016 (last 10 years): 3
Since 2006 (last 20 years): 4
Descriptor
Nonparametric Statistics: 6
Test Items: 6
Item Response Theory: 4
Scores: 3
Simulation: 3
Achievement Tests: 2
Classification: 2
Difficulty Level: 2
Guessing (Tests): 2
Identification: 2
International Assessment: 2
Source
Applied Measurement in…: 6
Author
Meijer, Rob R.: 2
Rios, Joseph A.: 2
Abulela, Mohammed A. A.: 1
Bolt, Daniel M.: 1
Bova, Joe: 1
Ferris, Heather: 1
Guo, Hongwen: 1
Haberman, Shelby: 1
Lee, HyeSun: 1
Liu, Ou Lydia: 1
Martinez, Angel: 1
Publication Type
Journal Articles: 6
Reports - Research: 4
Reports - Evaluative: 2
Education Level
Secondary Education: 2
High Schools: 1
Higher Education: 1
Postsecondary Education: 1
Location
California: 1
Assessments and Surveys
Program for International…: 2
Lee, HyeSun; Smith, Weldon; Martinez, Angel; Ferris, Heather; Bova, Joe – Applied Measurement in Education, 2021
The aim of the current research was to provide recommendations to facilitate the development and use of anchoring vignettes (AVs) for cross-cultural comparisons in education. Study 1 identified six factors leading to order violations and ties in AV responses based on cognitive interviews with 15-year-old students. The factors were categorized into…
Descriptors: Vignettes, Test Items, Equated Scores, Nonparametric Statistics

Abulela, Mohammed A. A.; Rios, Joseph A. – Applied Measurement in Education, 2022
When test performance carries no personal consequences for examinees, rapid guessing (RG) is a concern, and its prevalence can differ between subgroups. To date, the impact of differential RG on item-level measurement invariance has received minimal attention. To that end, a simulation study was conducted to examine the robustness of the…
Descriptors: Comparative Analysis, Robustness (Statistics), Nonparametric Statistics, Item Analysis

Guo, Hongwen; Rios, Joseph A.; Haberman, Shelby; Liu, Ou Lydia; Wang, Jing; Paek, Insu – Applied Measurement in Education, 2016
Unmotivated test takers who guess rapidly on items can negatively affect validity studies and evaluations of teacher and institution performance, making it critical to identify these test takers. The authors propose a new nonparametric method for finding response-time thresholds for flagging item responses that result from rapid-guessing…
Descriptors: Guessing (Tests), Reaction Time, Nonparametric Statistics, Models
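As a rough illustration of the general idea behind response-time flagging (not the specific nonparametric method Guo et al. propose), a threshold rule compares each examinee's time on an item against a per-item cutoff and flags sub-threshold responses as likely rapid guesses. The fraction-of-median cutoff below is an assumption chosen for the sketch:

```python
# Illustrative sketch only: flag item responses whose response time falls
# below a per-item threshold. The threshold rule here (a fixed fraction of
# the median item time) is an assumed stand-in, not the nonparametric
# method proposed by Guo et al. (2016).
from statistics import median

def flag_rapid_guesses(response_times, fraction=0.10):
    """response_times: per-examinee times (seconds) on a single item.
    Returns a parallel list of booleans; True means flagged as a rapid guess."""
    threshold = fraction * median(response_times)
    return [t < threshold for t in response_times]

times = [45.0, 38.2, 2.1, 51.7, 1.4, 40.3]
print(flag_rapid_guesses(times))  # → [False, False, True, False, True, False]
```

In practice a threshold would be set per item from the full response-time distribution, which is where a nonparametric approach such as the one in this article comes in.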

Wells, Craig S.; Bolt, Daniel M. – Applied Measurement in Education, 2008
Tests of model misfit are often performed to validate the use of a particular model in item response theory. Douglas and Cohen (2001) introduced a general nonparametric approach for detecting misfit under the two-parameter logistic model. However, the statistical properties of their approach, and empirical comparisons to other methods, have not…
Descriptors: Test Length, Test Items, Monte Carlo Methods, Nonparametric Statistics

Meijer, Rob R.; Sijtsma, Klaas – Applied Measurement in Education, 1995
The authors discuss methods for detecting item score patterns that are unlikely, either under a parametric item response theory model that adequately describes the data or relative to the responses of other persons in the group. The use of person-fit statistics in empirical data analysis is also briefly addressed. (SLD)
Descriptors: Identification, Item Response Theory, Nonparametric Statistics, Patterns in Mathematics
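One classic nonparametric person-fit quantity from this literature is the count of Guttman errors: with items ordered from easiest to hardest, every pair in which a harder item is answered correctly while an easier item is answered incorrectly counts as an error. This is a textbook example, not necessarily one of the specific statistics Meijer and Sijtsma evaluate:

```python
# Illustrative sketch of a classic nonparametric person-fit statistic:
# the number of Guttman errors G for one examinee's 0/1 score vector,
# with items ordered from easiest to hardest. Each pair where the
# easier item is wrong (0) but the harder item is right (1) is an error.
def guttman_errors(scores_easy_to_hard):
    """scores_easy_to_hard: list of 0/1 item scores, easiest item first."""
    g = 0
    n = len(scores_easy_to_hard)
    for i in range(n):
        for j in range(i + 1, n):
            if scores_easy_to_hard[i] == 0 and scores_easy_to_hard[j] == 1:
                g += 1
    return g

print(guttman_errors([1, 1, 0, 1, 0]))  # → 1 (item 3 wrong, harder item 4 right)
```

A perfectly Guttman-consistent pattern (all successes on the easiest items) yields G = 0; larger values signal increasingly aberrant response patterns.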

Meijer, Rob R.; And Others – Applied Measurement in Education, 1996
Several existing group-based statistics for detecting improbable item score patterns are discussed, along with the cut scores proposed in the literature for classifying an item score pattern as aberrant. A simulation study and an empirical study are used to compare the statistics and to investigate the practical use of the cut scores. (SLD)
Descriptors: Achievement Tests, Classification, Cutting Scores, Identification