Publication Date
In 2025 | 0 |
Since 2024 | 4 |
Since 2021 (last 5 years) | 11 |
Descriptor
Item Response Theory | 7 |
Test Items | 6 |
Evaluation Methods | 5 |
Error of Measurement | 3 |
Item Analysis | 3 |
Mathematics Tests | 3 |
Monte Carlo Methods | 3 |
Difficulty Level | 2 |
Identification | 2 |
Language Arts | 2 |
Methods | 2 |
Source
Applied Measurement in… | 11 |
Author
Bauer, Malcolm I. | 1 |
Benjamin Lugu | 1 |
Brian F. French | 1 |
Chalmers, R. Philip | 1 |
Evans, Carla | 1 |
Grabovsky, Irina | 1 |
Han, Yuting | 1 |
James S. Kim | 1 |
Joshua B. Gilbert | 1 |
Jurich, Daniel | 1 |
Kuhfeld, Megan | 1 |
Publication Type
Journal Articles | 11 |
Reports - Research | 10 |
Reports - Descriptive | 1 |
Education Level
Secondary Education | 3 |
Elementary Education | 2 |
Junior High Schools | 2 |
Middle Schools | 2 |
Early Childhood Education | 1 |
Elementary Secondary Education | 1 |
Grade 1 | 1 |
Grade 11 | 1 |
Grade 2 | 1 |
Grade 3 | 1 |
Grade 8 | 1 |
Location
New Hampshire | 1 |
Assessments and Surveys
Measures of Academic Progress | 1 |
Stefanie A. Wind; Benjamin Lugu – Applied Measurement in Education, 2024
Researchers who use measurement models for evaluation purposes often select models with stringent requirements, such as Rasch models, which are parametric. Mokken Scale Analysis (MSA) offers a theory-driven nonparametric modeling approach that may be more appropriate for some measurement applications. Researchers have discussed using MSA as a…
Descriptors: Item Response Theory, Data Analysis, Simulation, Nonparametric Statistics
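The Rasch model this abstract contrasts with MSA is, in its dichotomous form, a one-parameter logistic in the difference between person ability and item difficulty. A minimal sketch (the function name and signature are illustrative, not from the article):

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """P(correct response) under the dichotomous Rasch model:
    a logistic function of ability (theta) minus item difficulty (b)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))
```

When ability equals difficulty the probability is exactly 0.5; Mokken Scale Analysis, by contrast, assumes only monotonicity of this curve rather than its parametric form.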
Wise, Steven; Kuhfeld, Megan – Applied Measurement in Education, 2021
Effort-moderated (E-M) scoring is intended to estimate how well a disengaged test taker would have performed had they been fully engaged. It accomplishes this adjustment by excluding disengaged responses from scoring and estimating performance from the remaining responses. The scoring method, however, assumes that the remaining responses are not…
Descriptors: Scoring, Achievement Tests, Identification, Validity
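The exclusion step the abstract describes, dropping disengaged responses and scoring only the rest, can be sketched in a few lines. This is an assumed proportion-correct illustration, not the authors' implementation; the `None`-on-empty convention is hypothetical:

```python
def effort_moderated_score(scores, engaged):
    """Proportion correct computed only over responses flagged as engaged.
    Returns None when no response was engaged (hypothetical convention)."""
    used = [s for s, e in zip(scores, engaged) if e]
    return sum(used) / len(used) if used else None
```

The E-M assumption the abstract flags is visible here: the score is trustworthy only if the retained (engaged) responses are representative of the examinee's true performance.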
Chalmers, R. Philip; Zheng, Guoguo – Applied Measurement in Education, 2023
This article presents generalizations of SIBTEST and crossing-SIBTEST statistics for differential item functioning (DIF) investigations involving more than two groups. After reviewing the original two-group setup for these statistics, a set of multigroup generalizations that support contrast matrices for joint tests of DIF are presented. To…
Descriptors: Test Bias, Test Items, Item Response Theory, Error of Measurement
O'Dwyer, Eowyn P.; Sparks, Jesse R.; Nabors Oláh, Leslie – Applied Measurement in Education, 2023
A critical aspect of the development of culturally relevant classroom assessments is the design of tasks that affirm students' racial and ethnic identities and community cultural practices. This paper describes the process we followed to build a shared understanding of what culturally relevant assessments are, to pursue ways of bringing more…
Descriptors: Evaluation Methods, Culturally Relevant Education, Test Construction, Educational Research
Shaojie Wang; Won-Chan Lee; Minqiang Zhang; Lixin Yuan – Applied Measurement in Education, 2024
To reduce the impact of parameter estimation errors on IRT linking results, recent work introduced two information-weighted characteristic curve methods for dichotomous items. These two methods showed outstanding performance in both simulation and pseudo-form pseudo-group analysis. The current study expands upon the concept of information…
Descriptors: Item Response Theory, Test Format, Test Length, Error of Measurement
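The "information" used to weight the characteristic curve methods is Fisher item information, which for a two-parameter logistic item peaks where the response probability is 0.5. A minimal sketch under that 2PL assumption (not the weighting scheme from the article itself):

```python
import math

def item_information(theta: float, a: float, b: float) -> float:
    """Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p),
    where a is discrimination and b is difficulty."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)
```

Weighting linking criteria by this quantity downweights items (and ability regions) where parameters are poorly estimated, which is the motivation the abstract cites for the information-weighted methods.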
Han, Yuting; Wilson, Mark – Applied Measurement in Education, 2022
A technology-based problem-solving test can automatically capture all the actions students take while completing tasks and save them as process data. Response sequences are the external manifestations of students' latent intellectual activities, and they contain rich information about students' abilities and different problem-solving…
Descriptors: Technology Uses in Education, Problem Solving, 21st Century Skills, Evaluation Methods
Traditional vs Intersectional DIF Analysis: Considerations and a Comparison Using State Testing Data
Tony Albano; Brian F. French; Thao Thu Vo – Applied Measurement in Education, 2024
Recent research has demonstrated an intersectional approach to the study of differential item functioning (DIF). This approach expands DIF to account for the interactions between what have traditionally been treated as separate grouping variables. In this paper, we compare traditional and intersectional DIF analyses using data from a state testing…
Descriptors: Test Items, Item Analysis, Data Use, Standardized Tests
Perez, Alexandra Lane; Evans, Carla – Applied Measurement in Education, 2023
New Hampshire's Performance Assessment of Competency Education (PACE) innovative assessment system uses student scores from classroom performance assessments as well as other classroom tests for school accountability purposes. One concern is that not having annual state testing may incentivize schools and teachers away from teaching the breadth of…
Descriptors: Grade 8, Competency Based Education, Evaluation Methods, Educational Innovation
Liu, Chunyan; Jurich, Daniel; Morrison, Carol; Grabovsky, Irina – Applied Measurement in Education, 2021
The existence of outliers among the anchor items can be detrimental to the estimation of examinee ability and undermine the validity of score interpretation across forms. In practice, however, anchor item performance can become distorted for a variety of reasons. This study compares the performance of modified "INFIT" and "OUTFIT"…
Descriptors: Equated Scores, Test Items, Item Response Theory, Difficulty Level
Joshua B. Gilbert; James S. Kim; Luke W. Miratrix – Applied Measurement in Education, 2024
Longitudinal models typically emphasize between-person predictors of change but ignore how growth varies "within" persons because each person contributes only one data point at each time. In contrast, modeling growth with multi-item assessments allows evaluation of how relative item performance may shift over time. While traditionally…
Descriptors: Vocabulary Development, Item Response Theory, Test Items, Student Development
Pham, Duy N.; Wells, Craig S.; Bauer, Malcolm I.; Wylie, E. Caroline; Monroe, Scott – Applied Measurement in Education, 2021
Assessments built on a theory of learning progressions are promising formative tools to support learning and teaching. The quality and usefulness of those assessments depend, in large part, on the validity of the theory-informed inferences about student learning made from the assessment results. In this study, we introduced an approach to address…
Descriptors: Formative Evaluation, Mathematics Instruction, Mathematics Achievement, Middle School Students