Publication Date
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 2 |
| Since 2017 (last 10 years) | 8 |
| Since 2007 (last 20 years) | 9 |
Source
| Educational Measurement: Issues and Practice | 10 |
Author
| Levy, Roy | 2 |
| An, Chen | 1 |
| Baldwin, Peter | 1 |
| Bolt, Daniel M. | 1 |
| Bradshaw, Laine | 1 |
| Braun, Henry | 1 |
| Clauser, Brian E. | 1 |
| Domingue, Benjamin W. | 1 |
| Huynh, Huynh | 1 |
| Kanopka, Klint | 1 |
| Kapoor, Radhika | 1 |
Publication Type
| Journal Articles | 10 |
| Reports - Research | 6 |
| Reports - Descriptive | 4 |
Education Level
| Middle Schools | 1 |
| Secondary Education | 1 |
Audience
| Researchers | 1 |
| Students | 1 |
Assessments and Surveys
| Program for International… | 1 |
Liao, Xiangyi; Bolt, Daniel M. – Educational Measurement: Issues and Practice, 2024
Traditional approaches to the modeling of multiple-choice item response data (e.g., 3PL, 4PL models) emphasize slips and guesses as random events. In this paper, an item response model is presented that characterizes both disjunctively interacting guessing and conjunctively interacting slipping processes as proficiency-related phenomena. We show…
Descriptors: Item Response Theory, Test Items, Error Correction, Guessing (Tests)
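As a point of reference for the models the Liao and Bolt abstract contrasts against, here is a minimal sketch (parameter values are invented for illustration) of the standard 3PL/4PL item response function, in which guessing (lower asymptote c) and slipping (upper asymptote d) enter as fixed constants rather than proficiency-related quantities:

```python
import math

def irf_4pl(theta, a, b, c=0.0, d=1.0):
    """Probability of a correct response under the 4PL model.

    a: discrimination, b: difficulty,
    c: lower asymptote (random guessing),
    d: upper asymptote (1 minus the slipping rate).
    Setting d = 1 gives the 3PL; c = 0 and d = 1 gives the 2PL.
    """
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))

# A proficient examinee can still "slip" because d < 1, and a weak
# examinee can still "guess" because c > 0 -- in both cases the
# asymptotes are constants that do not depend on theta.
print(irf_4pl(theta=2.0, a=1.2, b=0.0, c=0.2, d=0.95))
print(irf_4pl(theta=-2.0, a=1.2, b=0.0, c=0.2, d=0.95))
```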
Zhan, Peida – Educational Measurement: Issues and Practice, 2021
Refined tracking allows students and teachers to more accurately understand students' learning growth. To provide refined learning tracking with longitudinal diagnostic assessment, this article proposes a new model by incorporating probabilistic logic into longitudinal diagnostic modeling. Specifically, probabilistic attributes were used instead…
Descriptors: Educational Diagnosis, Learning Processes, Models, Student Evaluation
Ulitzsch, Esther; Domingue, Benjamin W.; Kapoor, Radhika; Kanopka, Klint; Rios, Joseph A. – Educational Measurement: Issues and Practice, 2023
Common response-time-based approaches for non-effortful response behavior (NRB) in educational achievement tests filter responses that are associated with response times below some threshold. These approaches are, however, limited in that they require a binary decision on whether a response is classified as stemming from NRB, thus ignoring…
Descriptors: Reaction Time, Responses, Behavior, Achievement Tests
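A rough sketch of the threshold-based filtering that the Ulitzsch et al. abstract describes as the common approach; the 3-second cutoff and the data below are made up for illustration, and operational studies set thresholds in various ways (fixed, normative, or visually derived):

```python
import numpy as np

def flag_noneffortful(response_times, threshold=3.0):
    """Binary flagging of responses as non-effortful (rapid guessing)
    when the response time falls below a fixed threshold in seconds.

    The threshold value is illustrative only; the binary decision it
    forces is exactly the limitation the abstract points to.
    """
    rt = np.asarray(response_times, dtype=float)
    return rt < threshold

rt = [1.2, 14.8, 2.7, 35.1, 0.9]
print(flag_noneffortful(rt))         # [ True False  True False  True]
print(flag_noneffortful(rt).mean())  # proportion of flagged responses
```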
Baldwin, Peter; Margolis, Melissa J.; Clauser, Brian E.; Mee, Janet; Winward, Marcia – Educational Measurement: Issues and Practice, 2020
Evidence of the internal consistency of standard-setting judgments is a critical part of the validity argument for tests used to make classification decisions. The bookmark standard-setting procedure is a popular approach to establishing performance standards, but there is relatively little research that reflects on the internal consistency of the…
Descriptors: Standard Setting (Scoring), Probability, Cutting Scores, Evaluation Methods
Levy, Roy – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Roy Levy describes Bayesian approaches to psychometric modeling. He discusses how Bayesian inference is a mechanism for reasoning in a probability-modeling framework and is well-suited to core problems in educational measurement: reasoning from student performances on an assessment to make inferences about their…
Descriptors: Bayesian Statistics, Psychometrics, Item Response Theory, Statistical Inference
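As a generic illustration of the kind of reasoning the Levy module covers (not material from the module itself), the following is a grid-approximation sketch of a posterior over ability under a Rasch likelihood and a standard-normal prior; the item difficulties and responses are invented:

```python
import numpy as np

# Hypothetical item difficulties and one examinee's scored responses.
difficulties = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])
responses = np.array([1, 1, 1, 0, 0])

# Grid approximation of the posterior over ability (theta).
theta = np.linspace(-4, 4, 401)
prior = np.exp(-0.5 * theta**2)          # standard-normal prior (unnormalized)

# Rasch probability of a correct response at each grid point, each item.
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - difficulties[None, :])))
likelihood = np.prod(p**responses * (1 - p)**(1 - responses), axis=1)

posterior = prior * likelihood
posterior /= posterior.sum()             # normalize over the grid

print("Posterior mean ability:", (theta * posterior).sum())
```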
Bradshaw, Laine; Levy, Roy – Educational Measurement: Issues and Practice, 2019
Although much research has been conducted on the psychometric properties of cognitive diagnostic models, they are only recently being used in operational settings to provide results to examinees and other stakeholders. Using this newer class of models in practice comes with a fresh challenge for diagnostic assessment developers: effectively…
Descriptors: Data Interpretation, Probability, Classification, Diagnostic Tests
Wind, Stefanie A. – Educational Measurement: Issues and Practice, 2017
Mokken scale analysis (MSA) is a probabilistic-nonparametric approach to item response theory (IRT) that can be used to evaluate fundamental measurement properties with less strict assumptions than parametric IRT models. This instructional module provides an introduction to MSA as a probabilistic-nonparametric framework in which to explore…
Descriptors: Probability, Nonparametric Statistics, Item Response Theory, Scaling
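A minimal sketch of the central MSA quantity, Loevinger's scalability coefficient H, computed from a dichotomous item-score matrix; this simplified version ignores tied item proportions and standard errors, and the simulated data are only for illustration:

```python
import numpy as np

def scalability_H(X):
    """Loevinger's scalability coefficient H for dichotomous items.

    X: (persons x items) 0/1 matrix. For each item pair, a Guttman
    error is a correct response to the harder item together with an
    incorrect response to the easier item. H = 1 - (observed errors /
    errors expected under marginal independence), summed over pairs.
    """
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    p = X.mean(axis=0)                    # item proportions correct
    obs, exp = 0.0, 0.0
    for i in range(k):
        for j in range(i + 1, k):
            easy, hard = (i, j) if p[i] >= p[j] else (j, i)
            obs += np.sum((X[:, hard] == 1) & (X[:, easy] == 0))
            exp += n * p[hard] * (1 - p[easy])
    return 1.0 - obs / exp

# Simulated Rasch-like data, purely to exercise the function.
rng = np.random.default_rng(0)
theta = rng.normal(size=500)
b = np.array([-1.0, -0.3, 0.4, 1.1])
X = (rng.random((500, 4)) < 1 / (1 + np.exp(-(theta[:, None] - b)))).astype(int)
print(round(scalability_H(X), 3))  # H >= .3 is a common rule of thumb for a (weak) scale
```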
Wyse, Adam E. – Educational Measurement: Issues and Practice, 2015
This article uses data from a large-scale assessment program to illustrate the potential issue of range restriction with the Bookmark method in the context of trying to set cut scores to closely align with a set of college and career readiness benchmarks. Analyses indicated that range restriction issues existed across different response…
Descriptors: Cutting Scores, Alignment (Education), College Readiness, Career Readiness
An, Chen; Braun, Henry; Walsh, Mary E. – Educational Measurement: Issues and Practice, 2018
Making causal inferences from a quasi-experiment is difficult. Sensitivity analysis approaches to address hidden selection bias thus have gained popularity. This study serves as an introduction to a simple but practical form of sensitivity analysis using Monte Carlo simulation procedures. We examine estimated treatment effects for a school-based…
Descriptors: Statistical Inference, Intervention, Program Effectiveness, Quasiexperimental Design
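A hedged sketch of the general idea behind Monte Carlo sensitivity analysis for hidden selection bias (not the authors' specific procedure): simulate an unobserved confounder of increasing strength and track how large a spurious "treatment effect" it would produce when the true effect is zero.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

def simulated_bias(gamma, n_reps=200):
    """Average bias in a naive treatment-effect estimate induced by an
    unobserved confounder U whose strength (on both treatment uptake
    and outcome) is controlled by gamma. Purely illustrative.
    """
    biases = []
    for _ in range(n_reps):
        u = rng.normal(size=n)                               # hidden confounder
        treat = (rng.random(n) < 1 / (1 + np.exp(-gamma * u))).astype(float)
        y = 0.0 * treat + gamma * u + rng.normal(size=n)     # true effect = 0
        naive = y[treat == 1].mean() - y[treat == 0].mean()
        biases.append(naive)
    return float(np.mean(biases))

# How far does the naive estimate drift as confounder strength grows?
for gamma in (0.0, 0.25, 0.5, 1.0):
    print(f"gamma = {gamma:>4}: apparent 'effect' = {simulated_bias(gamma):.3f}")
```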
Huynh, Huynh – Educational Measurement: Issues and Practice, 2006
By analyzing the Fisher information allotted to the correct response of a Rasch binary item, Huynh (1994) established the response probability criterion 0.67 (RP67) for standard settings based on bookmarks and item mapping. The purpose of this note is to help clarify the conceptual and psychometric framework of the RP criterion.
Descriptors: Probability, Standard Setting, Item Response Theory, Psychometrics
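One way to recover the 0.67 value the Huynh abstract refers to, sketched only loosely after Huynh's framing: decompose the Rasch item information into correct- and incorrect-response terms and maximize the correct-response share.

```latex
% Rasch item response function and its Fisher information, written as the
% probability-weighted sum of correct- and incorrect-response contributions.
P(\theta) = \frac{1}{1 + e^{-(\theta - b)}}, \qquad
I(\theta) = \underbrace{P \cdot P(1-P)}_{\text{correct response}}
          + \underbrace{(1-P) \cdot P(1-P)}_{\text{incorrect response}}
          = P(1-P).

% Maximizing the correct-response share g(P) = P^2 (1 - P):
\frac{dg}{dP} = 2P(1-P) - P^2 = P(2 - 3P) = 0
\quad\Longrightarrow\quad P = \tfrac{2}{3} \approx 0.67 \;(\text{RP67}).
```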