Showing 1 to 15 of 52 results
Peer reviewed
Bronson Hui; Zhiyi Wu – Studies in Second Language Acquisition, 2024
A slowdown or a speedup in response times across experimental conditions can be taken as evidence of online deployment of knowledge. However, response-time difference measures are rarely evaluated for their reliability, and there is no standard practice for estimating it. In this article, we used three open data sets to explore an approach to…
Descriptors: Reliability, Reaction Time, Psychometrics, Criticism
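A minimal sketch of one common way to estimate the reliability of a response-time difference score, offered only as context for the abstract above and not as the authors' exact procedure: random split halves of trials with a Spearman-Brown correction. The data layout and function name are assumptions for illustration.

```python
import numpy as np

def splithalf_reliability(rt_a, rt_b, n_splits=1000, seed=0):
    """rt_a, rt_b: (participants x trials) response times in two conditions."""
    rng = np.random.default_rng(seed)
    n_trials = rt_a.shape[1]
    estimates = []
    for _ in range(n_splits):
        idx = rng.permutation(n_trials)
        h1, h2 = idx[: n_trials // 2], idx[n_trials // 2:]
        d1 = rt_a[:, h1].mean(axis=1) - rt_b[:, h1].mean(axis=1)  # difference score, half 1
        d2 = rt_a[:, h2].mean(axis=1) - rt_b[:, h2].mean(axis=1)  # difference score, half 2
        r = np.corrcoef(d1, d2)[0, 1]
        estimates.append(2 * r / (1 + r))   # Spearman-Brown step-up to full length
    return float(np.mean(estimates))
```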
Peer reviewed
PDF on ERIC
Aiman Mohammad Freihat; Omar Saleh Bani Yassin – Educational Process: International Journal, 2025
Background/purpose: This study aimed to examine how accurately multiple-choice test item parameters are estimated under item response theory models. Materials/methods: The researchers relied on measurement accuracy indicators, which express the absolute difference between the estimated and actual values of the…
Descriptors: Accuracy, Computation, Multiple Choice Tests, Test Items
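The accuracy indicator described above is easy to illustrate with a small parameter-recovery check. The sketch below is a deliberate simplification (a Rasch model with abilities treated as known), not the study's procedure: simulate responses from known item difficulties, re-estimate them, and report the mean absolute difference.

```python
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items = 2000, 20
theta = rng.normal(size=n_persons)           # latent abilities (treated as known here)
b_true = rng.uniform(-2, 2, size=n_items)    # true item difficulties

p = 1 / (1 + np.exp(-(theta[:, None] - b_true[None, :])))
x = rng.binomial(1, p)                       # simulated 0/1 response matrix

# Newton-Raphson on each item's log-likelihood, conditioning on theta.
b_hat = np.zeros(n_items)
for _ in range(25):
    p_hat = 1 / (1 + np.exp(-(theta[:, None] - b_hat[None, :])))
    score = (p_hat - x).sum(axis=0)           # d(loglik)/db
    info = (p_hat * (1 - p_hat)).sum(axis=0)  # -d2(loglik)/db2
    b_hat += score / info

print("mean |b_hat - b_true| =", np.abs(b_hat - b_true).mean())
```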
Peer reviewed
Boris Forthmann; Benjamin Goecke; Roger E. Beaty – Creativity Research Journal, 2025
Human ratings are ubiquitous in creativity research. Yet, the process of rating responses to creativity tasks -- typically several hundred or even thousands of responses per rater -- is often time-consuming and expensive. Planned missing data designs, where raters only rate a subset of the total number of responses, have recently been proposed as one…
Descriptors: Creativity, Research, Researchers, Research Methodology
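As a hedged illustration of the planned-missingness idea (the assignment rule and names are hypothetical, not the designs evaluated in the article), each response can be scored by only k of the R raters while keeping workloads balanced and neighbouring raters overlapping:

```python
def assign_raters(n_responses: int, n_raters: int, k: int):
    """Response i is scored by raters i, i+1, ..., i+k-1 (mod n_raters), so every
    rater scores roughly n_responses * k / n_raters responses and adjacent raters
    share responses, which keeps the incomplete design linked."""
    return [[(i + j) % n_raters for j in range(k)] for i in range(n_responses)]

design = assign_raters(n_responses=10, n_raters=5, k=2)
print(design)   # [[0, 1], [1, 2], [2, 3], [3, 4], [4, 0], [0, 1], ...]
```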
Peer reviewed
Fujimoto, Ken A. – Educational and Psychological Measurement, 2019
Advancements in item response theory (IRT) have led to models for dual dependence, which control for cluster and method effects during a psychometric analysis. Currently, however, this class of models does not include one that controls for when the method effects stem from two method sources in which one source functions differently across the…
Descriptors: Bayesian Statistics, Item Response Theory, Psychometrics, Models
Peer reviewed
Chengyu Cui; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Multidimensional item response theory (MIRT) models have generated increasing interest in the psychometrics literature. Efficient approaches for estimating MIRT models with dichotomous responses have been developed, but constructing an equally efficient and robust algorithm for polytomous models has received limited attention. To address this gap,…
Descriptors: Item Response Theory, Accuracy, Simulation, Psychometrics
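For readers unfamiliar with MIRT, the compensatory multidimensional two-parameter logistic model for dichotomous responses (the well-handled case the abstract contrasts with the polytomous one) fits in a few lines; the parameter values below are illustrative assumptions:

```python
import numpy as np

def m2pl_prob(theta, a, d):
    """P(X = 1 | theta) = 1 / (1 + exp(-(a . theta + d))) for a D-dimensional ability."""
    return 1 / (1 + np.exp(-(theta @ a + d)))

theta = np.array([0.5, -1.0])   # two latent dimensions
a = np.array([1.2, 0.8])        # item discriminations (slopes)
print(m2pl_prob(theta, a, d=-0.3))
```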
Peer reviewed
Madison, Matthew J. – Educational Measurement: Issues and Practice, 2019
Recent advances have enabled diagnostic classification models (DCMs) to accommodate longitudinal data. These longitudinal DCMs were developed to study how examinees change, or transition, between different attribute mastery statuses over time. This study examines using longitudinal DCMs as an approach to assessing growth and serves three purposes:…
Descriptors: Longitudinal Studies, Item Response Theory, Psychometrics, Criterion Referenced Tests
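A toy illustration of the "transition between mastery statuses" idea with made-up classifications, one attribute, and two time points; longitudinal DCMs estimate these transition probabilities within the measurement model rather than tabulating fixed classifications as done here:

```python
import numpy as np

status_t1 = np.array([0, 0, 1, 0, 1, 1, 0, 0])   # mastery (1) / non-mastery (0) at time 1
status_t2 = np.array([1, 0, 1, 1, 1, 1, 0, 1])   # status at time 2

counts = np.zeros((2, 2))
for s1, s2 in zip(status_t1, status_t2):
    counts[s1, s2] += 1
transitions = counts / counts.sum(axis=1, keepdims=True)
print(transitions)   # rows: time-1 status, columns: time-2 status
```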
Peer reviewed
Bradshaw, Laine P.; Madison, Matthew J. – International Journal of Testing, 2016
In item response theory (IRT), the invariance property states that item parameter estimates are independent of the examinee sample, and examinee ability estimates are independent of the test items. While this property has long been established and understood by the measurement community for IRT models, the same cannot be said for diagnostic…
Descriptors: Classification, Models, Simulation, Psychometrics
Leventhal, Brian – ProQuest LLC, 2017
More robust and rigorous psychometric models, such as multidimensional Item Response Theory models, have been advocated for survey applications. However, item responses may be influenced by construct-irrelevant variance factors such as preferences for extreme response options. Through empirical and simulation methods, this study evaluates the use…
Descriptors: Psychometrics, Item Response Theory, Simulation, Models
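To make the "preference for extreme response options" concrete, here is a purely hypothetical data-generating sketch (not the study's simulation design) in which some respondents override their trait-driven category with an endpoint of a 5-point scale:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1000
trait = rng.normal(size=n)                  # substantive latent trait
ers = rng.uniform(0.0, 0.6, size=n)         # construct-irrelevant extreme-response tendency

base = np.clip(np.round(3 + trait), 1, 5).astype(int)   # trait-driven category (1-5)
endpoint = np.where(base >= 3, 5, 1)                    # nearest extreme category
responses = np.where(rng.uniform(size=n) < ers, endpoint, base)

print(np.bincount(responses, minlength=6)[1:])  # counts for categories 1-5, piled up at 1 and 5
```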
Peer reviewed
Feinberg, Richard A.; Rubright, Jonathan D. – Educational Measurement: Issues and Practice, 2016
Simulation studies are fundamental to psychometric discourse and play a crucial role in operational and academic research. Yet, resources for psychometricians interested in conducting simulations are scarce. This Instructional Topics in Educational Measurement Series (ITEMS) module is meant to address this deficiency by providing a comprehensive…
Descriptors: Simulation, Psychometrics, Vocabulary, Research Design
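In the spirit of that module, a single replication of a basic IRT simulation (assumed 2PL generating model; variable names are illustrative) looks like this, with the analyze-and-aggregate steps repeated across many replications in a full study:

```python
import numpy as np

rng = np.random.default_rng(2024)
n_persons, n_items = 1000, 30

a = rng.lognormal(mean=0.0, sigma=0.3, size=n_items)   # true discriminations
b = rng.normal(size=n_items)                           # true difficulties
theta = rng.normal(size=n_persons)                     # simulee abilities

p = 1 / (1 + np.exp(-a * (theta[:, None] - b)))        # 2PL response probabilities
responses = rng.binomial(1, p)                         # one simulated data set

# A full simulation study would now fit the model to `responses`, record
# outcomes such as bias, RMSE, and coverage, and repeat across replications.
```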
Peer reviewed
PDF on ERIC
Bergner, Yoav; Andrews, Jessica J.; Zhu, Mengxiao; Gonzales, Joseph E. – ETS Research Report Series, 2016
Collaborative problem solving (CPS) is a critical competency in a variety of contexts, including the workplace, school, and home. However, only recently have assessment and curriculum reformers begun to focus to a greater extent on the acquisition and development of CPS skill. One of the major challenges in psychometric modeling of CPS is…
Descriptors: Problem Solving, Cooperative Learning, Evaluation Methods, Models
Peer reviewed
Magnus, Brooke E.; Thissen, David – Journal of Educational and Behavioral Statistics, 2017
Questionnaires that include items eliciting count responses are becoming increasingly common in psychology. This study proposes methodological techniques to overcome some of the challenges associated with analyzing multivariate item response data that exhibit zero inflation, maximum inflation, and heaping at preferred digits. The modeling…
Descriptors: Item Response Theory, Models, Multivariate Analysis, Questionnaires
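As background for the zero-inflation piece, a sketch of the standard zero-inflated Poisson mass function; the article's multivariate IRT formulation goes further and also handles maximum inflation and heaping at preferred digits:

```python
from math import exp, factorial

def zip_pmf(k: int, lam: float, pi_zero: float) -> float:
    """P(X = k) under a mixture of a point mass at zero (probability pi_zero)
    and a Poisson(lam) count process (probability 1 - pi_zero)."""
    poisson = exp(-lam) * lam ** k / factorial(k)
    return (pi_zero if k == 0 else 0.0) + (1 - pi_zero) * poisson

print(zip_pmf(0, lam=2.0, pi_zero=0.3))   # excess-zero probability
print(zip_pmf(3, lam=2.0, pi_zero=0.3))
```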
Peer reviewed
PDF on ERIC
Martin-Fernandez, Manuel; Revuelta, Javier – Psicologica: International Journal of Methodology and Experimental Psychology, 2017
This study compares the performance of two recently introduced estimation algorithms, the Metropolis-Hastings Robbins-Monro (MHRM) and the Hamiltonian MCMC (HMC), with two well-established algorithms in the psychometric literature, marginal maximum likelihood via the EM algorithm (MML-EM) and Markov chain Monte Carlo (MCMC), in the estimation of multidimensional…
Descriptors: Bayesian Statistics, Item Response Theory, Models, Comparative Analysis
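As a reference point for the MCMC-type algorithms being compared, here is a generic random-walk Metropolis-Hastings sampler in its simplest scalar form; this is an illustrative sketch with a toy target, whereas the study's comparisons are carried out on full multidimensional models:

```python
import numpy as np

def rw_metropolis(log_post, start, n_draws=5000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings for a scalar parameter with
    unnormalized log-posterior `log_post`."""
    rng = np.random.default_rng(seed)
    draws, current = [start], start
    for _ in range(n_draws):
        proposal = current + rng.normal(scale=step)
        if np.log(rng.uniform()) < log_post(proposal) - log_post(current):
            current = proposal            # accept the proposal
        draws.append(current)
    return np.array(draws)

samples = rw_metropolis(lambda x: -0.5 * x ** 2, start=0.0)   # toy standard-normal target
print(samples.mean(), samples.std())
```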
Peer reviewed
Levy, Roy – Educational Assessment, 2013
This article characterizes the advances, opportunities, and challenges for psychometrics of simulation-based assessments through a lens that views assessment as evidentiary reasoning. Simulation-based tasks offer the prospect for student experiences that differ from traditional assessment. Such tasks may be used to support evidentiary arguments…
Descriptors: Simulation, Student Evaluation, Psychometrics, Evidence
Peer reviewed
PDF on ERIC
Falk, Carl F.; Cai, Li – Grantee Submission, 2014
We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest…
Descriptors: Maximum Likelihood Statistics, Item Response Theory, Computation, Simulation
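To give a feel for the idea, the sketch below swaps the linear predictor of a two-parameter logistic-type IRF for a simple odd-degree polynomial kept monotone by non-negative coefficients; this is an illustrative stand-in, not the paper's monotonic-polynomial parameterization inside the generalized partial credit model:

```python
import numpy as np

def monotone_poly_irf(theta, c0=0.0, c1=1.0, c3=0.4):
    """Dichotomous IRF with predictor m(theta) = c0 + c1*theta + c3*theta**3;
    with c1, c3 >= 0 the derivative c1 + 3*c3*theta**2 is non-negative, so the
    curve stays monotone while bending away from the usual logistic shape."""
    m = c0 + c1 * theta + c3 * theta ** 3
    return 1 / (1 + np.exp(-m))

theta = np.linspace(-3, 3, 7)
print(monotone_poly_irf(theta))
```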
Peer reviewed
Almond, Russell G.; Kim, Yoon Jeon; Velasquez, Gertrudes; Shute, Valerie J. – Measurement: Interdisciplinary Research and Perspectives, 2014
One of the key ideas of evidence-centered assessment design (ECD) is that task features can be deliberately manipulated to change the psychometric properties of items. ECD identifies a number of roles that task-feature variables can play, including determining the focus of evidence, guiding form creation, determining item difficulty and…
Descriptors: Educational Games, Simulation, Psychometrics, Educational Assessment