Publication Date
In 2025 | 0
Since 2024 | 2
Since 2021 (last 5 years) | 2
Since 2016 (last 10 years) | 8
Since 2006 (last 20 years) | 9
Source
Journal of Educational Measurement | 10
Author
Wilson, Mark | 3
Choi, In-Hee | 1
De Boeck, Paul | 1
Feuerstahler, Leah | 1
Gochyyev, Perman | 1
Goodwin, Amanda | 1
Hanna, Gila | 1
Lee, Soo | 1
Liu, Chen-Wei | 1
Naveiras, Matthew | 1
Sorrel, Miguel A. | 1
Publication Type
Journal Articles | 10
Reports - Research | 10
Education Level
Middle Schools | 2
Junior High Schools | 1
Secondary Education | 1
Audience
Researchers | 1
Wenchao Ma; Miguel A. Sorrel; Xiaoming Zhai; Yuan Ge – Journal of Educational Measurement, 2024
Most existing diagnostic models are developed to detect whether students have mastered a set of skills of interest, but few have focused on identifying what scientific misconceptions students possess. This article developed a general dual-purpose model for simultaneously estimating students' overall ability and the presence and absence of…
Descriptors: Models, Misconceptions, Diagnostic Tests, Ability
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Paul De Boeck – Journal of Educational Measurement, 2024
Explanatory item response models (EIRMs) have been applied to investigate the effects of person covariates, item covariates, and their interactions in the fields of reading education and psycholinguistics. In practice, it is often assumed that the relationships between the covariates and the logit transformation of item response probability are…
Descriptors: Item Response Theory, Test Items, Models, Maximum Likelihood Statistics
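As a rough illustration of the kind of model this entry describes (not the authors' estimation approach), the sketch below fits a fixed-effects approximation to an explanatory item response model in Python: person and item covariates and their interaction enter a logistic model for the probability of a correct response. The covariate names, sample sizes, and coefficient values are hypothetical, and random person effects are omitted for simplicity.

```python
# Illustrative sketch only: a fixed-effects approximation to an explanatory
# item response model (EIRM). Random person effects are omitted; all data
# below are simulated for demonstration.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical long-format response data: one row per person-item pair.
n_persons, n_items = 200, 20
person_cov = rng.normal(size=n_persons)      # e.g., a standardized reading score
item_cov = rng.integers(0, 2, size=n_items)  # e.g., a word-frequency indicator

rows = []
for p in range(n_persons):
    for i in range(n_items):
        eta = 0.8 * person_cov[p] - 0.5 * item_cov[i] + 0.3 * person_cov[p] * item_cov[i]
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
        rows.append((y, person_cov[p], item_cov[i]))
data = pd.DataFrame(rows, columns=["y", "z_person", "x_item"])

# Logit link: covariate main effects plus their interaction.
X = sm.add_constant(data[["z_person", "x_item"]].assign(
    interaction=data["z_person"] * data["x_item"]))
fit = sm.GLM(data["y"], X, family=sm.families.Binomial()).fit()
print(fit.summary())
```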
Wind, Stefanie A.; Sebok-Syer, Stefanie S. – Journal of Educational Measurement, 2019
When practitioners use modern measurement models to evaluate rating quality, they commonly examine rater fit statistics that summarize how well each rater's ratings fit the expectations of the measurement model. Essentially, this approach involves examining the unexpected ratings that each misfitting rater assigned (i.e., carrying out analyses of…
Descriptors: Measurement, Models, Evaluators, Simulation
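For readers unfamiliar with rater fit statistics, the sketch below computes generic infit and outfit mean-square statistics for a single rater, assuming expected ratings and model variances are already available from some fitted measurement model. It is not the analysis carried out in the paper, and the example values are invented.

```python
# Illustrative sketch only: infit/outfit mean squares for one rater, given
# model-based expected values and variances for the ratings that rater assigned.
import numpy as np

def rater_fit(observed, expected, variance):
    """Return (infit_ms, outfit_ms) for one rater's ratings."""
    resid = observed - expected
    z2 = resid**2 / variance                      # squared standardized residuals
    outfit_ms = z2.mean()                         # unweighted mean square
    infit_ms = (resid**2).sum() / variance.sum()  # information-weighted mean square
    return infit_ms, outfit_ms

# Hypothetical example: 0/1 ratings with model probabilities p, so variance = p(1 - p).
p = np.array([0.8, 0.6, 0.3, 0.9, 0.5])
x = np.array([1, 1, 1, 0, 0])
print(rater_fit(x, p, p * (1 - p)))
```

Values near 1 suggest ratings consistent with the model; values well above 1 flag the kind of unexpected ratings a misfitting rater produces.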
Feuerstahler, Leah; Wilson, Mark – Journal of Educational Measurement, 2019
Scores estimated from multidimensional item response theory (IRT) models are not necessarily comparable across dimensions. In this article, the concept of aligned dimensions is formalized in the context of Rasch models, and two methods are described--delta dimensional alignment (DDA) and logistic regression alignment (LRA)--to transform estimated…
Descriptors: Item Response Theory, Models, Scores, Comparative Analysis
Liu, Chen-Wei; Wang, Wen-Chung – Journal of Educational Measurement, 2017
The examinee-selected-item (ESI) design, in which examinees are required to respond to a fixed number of items in a given set of items (e.g., choose one item to respond from a pair of items), always yields incomplete data (i.e., only the selected items are answered and the others have missing data) that are likely nonignorable. Therefore, using…
Descriptors: Item Response Theory, Models, Maximum Likelihood Statistics, Data Analysis
Lee, Soo; Suh, Youngsuk – Journal of Educational Measurement, 2018
Lord's Wald test for differential item functioning (DIF) has not been studied extensively in the context of the multidimensional item response theory (MIRT) framework. In this article, Lord's Wald test was implemented using two estimation approaches, marginal maximum likelihood estimation and Bayesian Markov chain Monte Carlo estimation, to detect…
Descriptors: Item Response Theory, Sample Size, Models, Error of Measurement
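To make the statistic concrete, the sketch below shows the generic form of Lord's Wald test for one item, given linked item-parameter estimates and their covariance matrices from a reference and a focal group. The 2PL parameter values are hypothetical, and the estimation approaches compared in the paper (marginal maximum likelihood and Bayesian MCMC) are not reproduced here.

```python
# Illustrative sketch only: Lord's Wald statistic for one item,
# W = (b_R - b_F)' [Cov_R + Cov_F]^{-1} (b_R - b_F), df = number of parameters.
import numpy as np
from scipy import stats

def lord_wald(params_ref, params_foc, cov_ref, cov_foc):
    """Wald chi-square and p-value for equality of one item's parameters across groups."""
    diff = np.asarray(params_ref) - np.asarray(params_foc)
    cov = np.asarray(cov_ref) + np.asarray(cov_foc)
    w = diff @ np.linalg.solve(cov, diff)   # diff' * inv(cov) * diff
    return w, stats.chi2.sf(w, diff.size)

# Hypothetical 2PL example: parameters are (discrimination a, difficulty b).
w, p = lord_wald([1.2, 0.3], [1.0, 0.7], np.diag([0.02, 0.03]), np.diag([0.02, 0.04]))
print(f"Wald chi-square = {w:.2f}, p = {p:.4f}")
```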
Shin, Hyo Jeong; Wilson, Mark; Choi, In-Hee – Journal of Educational Measurement, 2017
This study proposes a structured constructs model (SCM) to examine measurement in the context of a multidimensional learning progression (LP). The LP is assumed to have features that go beyond a typical multidimensional IRT model, in that there are hypothesized to be certain cross-dimensional linkages that correspond to requirements between the…
Descriptors: Middle School Students, Student Evaluation, Measurement Techniques, Learning Processes
Wilson, Mark; Gochyyev, Perman; Scalise, Kathleen – Journal of Educational Measurement, 2017
This article summarizes assessment of cognitive skills through collaborative tasks, using field test results from the Assessment and Teaching of 21st Century Skills (ATC21S) project. This project, sponsored by Cisco, Intel, and Microsoft, aims to help educators around the world enable students with the skills to succeed in future career and…
Descriptors: Cognitive Ability, Thinking Skills, Evaluation Methods, Educational Assessment
Petridou, Alexandra; Williams, Julian – Journal of Educational Measurement, 2007
Hypotheses about aberrant test-response behavior and hence invalid person-measurement have hitherto included factors like ability, gender, language, test-anxiety, and motivation, but these have not previously been collectively investigated with real data, or with multilevel models. This study analyzes the effect of these factors on person…
Descriptors: Data Analysis, Models, Students, English (Second Language)

Hanna, Gila – Journal of Educational Measurement, 1984
The validity of a comparison of mean test scores for two groups and of a longitudinal comparison of means within each group is assessed. Using LISREL, factor analyses are used to test the hypotheses of similar factor patterns, equal units of measurement, and equal measurement accuracy between groups and across time. (Author/DWH)
Descriptors: Achievement Tests, Comparative Analysis, Data Analysis, Factor Analysis