Showing 1 to 15 of 26 results
Peer reviewed
Hung, Su-Pin; Huang, Hung-Yu – Journal of Educational and Behavioral Statistics, 2022
To address response styles or bias in rating scales, forced-choice items are often used, asking respondents to rank their attitudes or preferences among a limited set of options. The rating scales that raters use to judge ratees' performance also contribute to rater bias or errors; consequently, forced-choice items have recently…
Descriptors: Evaluation Methods, Rating Scales, Item Analysis, Preferences
Peer reviewed
Smith, Trevor I.; Bendjilali, Nasrine – Physical Review Physics Education Research, 2022
Several recent studies have employed item response theory (IRT) to rank incorrect responses to commonly used research-based multiple-choice assessments. These studies use Bock's nominal response model (NRM) for applying IRT to categorical (nondichotomous) data, but the response rankings utilize only half of the parameters estimated by the model.…
Descriptors: Item Response Theory, Test Items, Multiple Choice Tests, Science Tests
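For context, Bock's NRM assigns each response category its own slope and intercept, with probabilities given by a softmax over categories; a minimal sketch with purely illustrative parameter values (the paper's ranking procedure itself is not reproduced here):

    import numpy as np

    def nrm_probabilities(theta, slopes, intercepts):
        """Category probabilities under Bock's nominal response model:
        P(k | theta) = exp(a_k * theta + c_k) / sum_j exp(a_j * theta + c_j).
        (The usual sum-to-zero identification constraint is ignored here.)"""
        z = np.asarray(slopes) * theta + np.asarray(intercepts)
        z -= z.max()                      # subtract max for numerical stability
        expz = np.exp(z)
        return expz / expz.sum()

    # Illustrative parameters for a 4-option item (one key, three distractors).
    slopes = [1.2, -0.3, -0.4, -0.5]      # a_k: slope per category
    intercepts = [0.5, 0.2, -0.1, -0.6]   # c_k: intercept per category

    for theta in (-2.0, 0.0, 2.0):
        print(theta, nrm_probabilities(theta, slopes, intercepts).round(3))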
Peer reviewed
Lúcio, Patrícia Silva; Vandekerckhove, Joachim; Polanczyk, Guilherme V.; Cogo-Moreira, Hugo – Journal of Psychoeducational Assessment, 2021
The present study compares the fit of two- and three-parameter logistic (2PL and 3PL) item response theory models to the performance of preschool children on Raven's Colored Progressive Matrices. Raven's test is widely used to evaluate nonverbal intelligence and the general factor g. Studies comparing the models on real data are scarce on the…
Descriptors: Guessing (Tests), Item Response Theory, Test Validity, Preschool Children
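For reference, the 3PL model differs from the 2PL only by a lower asymptote that allows for guessing; a minimal sketch with illustrative parameters:

    import numpy as np

    def p_2pl(theta, a, b):
        """2PL item response function: P = 1 / (1 + exp(-a * (theta - b)))."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def p_3pl(theta, a, b, c):
        """3PL adds a pseudo-guessing lower asymptote c to the 2PL curve."""
        return c + (1.0 - c) * p_2pl(theta, a, b)

    theta = np.linspace(-3, 3, 7)
    print(p_2pl(theta, a=1.0, b=0.0))          # floor approaches 0
    print(p_3pl(theta, a=1.0, b=0.0, c=0.2))   # floor approaches c = 0.2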
Peer reviewed
Pan, Tianshu; Yin, Yue – Applied Measurement in Education, 2017
In this article, we propose using Bayes factors (BF) to evaluate person fit in item response theory models under the framework of Bayesian evaluation of an informative diagnostic hypothesis. We first discuss the theoretical foundation for this application and how to analyze person fit using BF. To demonstrate the feasibility of this approach,…
Descriptors: Bayesian Statistics, Goodness of Fit, Item Response Theory, Monte Carlo Methods
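The abstract does not detail the informative-hypothesis setup, but the general idea of a Bayes factor for person fit can be sketched as the ratio of the marginal likelihood of one response pattern under a fitted 2PL model to its likelihood under a simple misfit model such as random responding. Everything below (item parameters, the random-responding alternative) is an illustrative assumption, not the paper's exact method:

    import numpy as np
    from scipy.stats import norm

    def p_2pl(theta, a, b):
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def marginal_lik_2pl(x, a, b, n_nodes=201):
        """P(x | 2PL), integrating ability over a standard normal prior
        on an evenly spaced grid (simple numerical quadrature)."""
        nodes = np.linspace(-6.0, 6.0, n_nodes)
        weights = norm.pdf(nodes)
        weights /= weights.sum()
        lik = np.ones_like(nodes)
        for xi, ai, bi in zip(x, a, b):
            p = p_2pl(nodes, ai, bi)
            lik *= np.where(xi == 1, p, 1.0 - p)
        return float(np.dot(lik, weights))

    # Illustrative item parameters and one response pattern.
    a = np.array([1.0, 1.2, 0.8, 1.5, 1.1])
    b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
    x = np.array([1, 1, 1, 0, 0])     # consistent with a moderate ability

    bf = marginal_lik_2pl(x, a, b) / 0.5 ** len(x)   # vs. random responding
    print(f"BF (2PL vs. random responding) = {bf:.2f}")  # BF > 1 favors fit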
Peer reviewed
Wang, Chao; Lu, Hong – Educational Technology & Society, 2018
This study focused on the effect of examinees' ability levels on the relationship between Reflective-Impulsive (RI) cognitive style and item response time in computerized adaptive testing (CAT). A total of 56 students majoring in Educational Technology at Shandong Normal University participated in this study, and their RI cognitive styles were…
Descriptors: Item Response Theory, Computer Assisted Testing, Cognitive Style, Correlation
Peer reviewed
Liu, Ren; Huggins-Manley, Anne Corinne; Bulut, Okan – Educational and Psychological Measurement, 2018
Developing a diagnostic tool within the diagnostic measurement framework is the optimal approach for obtaining multidimensional, classification-based feedback on examinees. However, end users may seek to obtain diagnostic feedback from existing item responses to assessments that have been designed under either the classical test theory or item…
Descriptors: Models, Item Response Theory, Psychometrics, Test Construction
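The abstract does not name a specific diagnostic model; as background, the DINA model is one common diagnostic classification model, sketched here with illustrative slip and guess values:

    def dina_prob(alpha, q, slip, guess):
        """DINA model: P(X=1 | alpha) = (1-s)^eta * g^(1-eta),
        where eta = 1 iff the examinee masters every attribute
        the item's Q-matrix row requires."""
        eta = int(all(a >= qk for a, qk in zip(alpha, q)))
        return (1 - slip) ** eta * guess ** (1 - eta)

    q = [1, 1, 0]    # item requires attributes 1 and 2, not attribute 3
    print(dina_prob([1, 1, 0], q, slip=0.1, guess=0.2))   # master: 0.9
    print(dina_prob([1, 0, 1], q, slip=0.1, guess=0.2))   # non-master: 0.2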
Peer reviewed
DeMars, Christine E. – Educational and Psychological Measurement, 2016
Partially compensatory models may capture the cognitive skills needed to answer test items more realistically than compensatory models, but estimating the model parameters may be a challenge. Data were simulated to follow two different partially compensatory models, a model with an interaction term and a product model. The model parameters were…
Descriptors: Item Response Theory, Models, Thinking Skills, Test Items
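In common notation (DeMars's exact parameterizations may differ), the product model multiplies per-dimension 2PL curves, so strength on one dimension cannot fully offset weakness on the other, while the interaction-term variant adds a cross-product to a compensatory predictor; a minimal two-dimensional sketch with illustrative parameters:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def compensatory(th1, th2, a1, a2, d):
        """Fully compensatory: a high theta_1 can offset a low theta_2."""
        return sigmoid(a1 * th1 + a2 * th2 + d)

    def product_model(th1, th2, a1, b1, a2, b2):
        """Partially compensatory product model: the probability is the
        product of per-dimension 2PL curves, so both skills must be adequate."""
        return sigmoid(a1 * (th1 - b1)) * sigmoid(a2 * (th2 - b2))

    def interaction_model(th1, th2, a1, a2, a12, d):
        """Interaction-term variant: a12 * theta_1 * theta_2 moderates
        how much the two dimensions can compensate for each other."""
        return sigmoid(a1 * th1 + a2 * th2 + a12 * th1 * th2 + d)

    # High theta_1 cannot fully compensate for low theta_2 in the product model.
    print(compensatory(2.0, -2.0, 1.0, 1.0, 0.0))         # ~0.50
    print(product_model(2.0, -2.0, 1.0, 0.0, 1.0, 0.0))   # ~0.10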
Peer reviewed
Çokluk, Ömay; Gül, Emrah; Dogan-Gül, Çilem – Educational Sciences: Theory and Practice, 2016
The study aims to examine whether differential item functioning appears across three test forms whose items are ordered randomly or sequentially (easy-to-hard and hard-to-easy), using Classical Test Theory (CTT) and Item Response Theory (IRT) methods and taking item difficulty levels into account. In the correlational research, the…
Descriptors: Test Bias, Test Items, Difficulty Level, Test Theory
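The abstract does not name the specific DIF statistics used; on the CTT side, one standard check is the Mantel-Haenszel common odds ratio computed over matched score strata, sketched here with invented counts:

    def mantel_haenszel_or(correct_ref, total_ref, correct_foc, total_foc):
        """Mantel-Haenszel common odds ratio for one item across score strata.
        Each list holds, per stratum (e.g., matched total-score level),
        the number of correct responses and the group size."""
        num = den = 0.0
        for rc, rt, fc, ft in zip(correct_ref, total_ref, correct_foc, total_foc):
            n = rt + ft
            num += rc * (ft - fc) / n      # ref correct * focal incorrect
            den += fc * (rt - rc) / n      # focal correct * ref incorrect
        return num / den                   # ~1.0 indicates no DIF

    # Illustrative counts for three score strata (low, middle, high).
    or_mh = mantel_haenszel_or(
        correct_ref=[20, 45, 70], total_ref=[50, 60, 80],
        correct_foc=[15, 40, 68], total_foc=[50, 60, 80])
    print(f"MH common odds ratio: {or_mh:.2f}")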
Peer reviewed
Nydick, Steven W. – Journal of Educational and Behavioral Statistics, 2014
The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…
Descriptors: Probability, Item Response Theory, Models, Classification
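Wald's SPRT accumulates the log-likelihood ratio of the responses at two ability points straddling the classification bound and stops as soon as it crosses a critical value; a minimal 2PL-based sketch with illustrative parameters:

    import numpy as np

    def p_2pl(theta, a, b):
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def sprt_classify(responses, a, b, theta0, theta1, alpha=0.05, beta=0.05):
        """Wald's SPRT for an IRT-based classification test. theta0 and theta1
        straddle the classification bound; classify once the cumulative
        log-likelihood ratio crosses a critical bound."""
        lower = np.log(beta / (1 - alpha))       # decide theta0 (below cut)
        upper = np.log((1 - beta) / alpha)       # decide theta1 (above cut)
        llr = 0.0
        for x, ai, bi in zip(responses, a, b):
            p0, p1 = p_2pl(theta0, ai, bi), p_2pl(theta1, ai, bi)
            llr += np.log(p1 / p0) if x else np.log((1 - p1) / (1 - p0))
            if llr >= upper:
                return "above cut"
            if llr <= lower:
                return "below cut"
        return "undecided"                       # item pool exhausted

    a = [1.2, 1.0, 1.4, 0.9, 1.1, 1.3]
    b = [0.0, -0.2, 0.1, 0.3, -0.1, 0.2]
    # A run of correct answers drives the LLR past the upper bound.
    print(sprt_classify([1, 1, 1, 1, 1, 1], a, b, theta0=-0.5, theta1=0.5))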
Peer reviewed
Ferrando, Pere J. – Psicologica: International Journal of Methodology and Experimental Psychology, 2015
Test-retest studies for assessing stability and change are widely used in different domains and allow improved or additional individual estimates of interest to be obtained. However, if these estimates are to be validly interpreted, the responses given at Time 2 must be free of retest effects, and the fulfilment of this assumption must be…
Descriptors: Item Response Theory, Evaluation Methods, Responses, Testing
Peer reviewed
Timmons, Kristy; Pelletier, Janette – Early Child Development and Care, 2016
In this study, we explored the influence of kindergarten children's perspectives on school on their literacy and self-regulation outcomes. Children's early perspectives were captured in a three-question, finger-puppet interview. Responses to the interview questions were coded thematically as academic and/or social in nature, and were…
Descriptors: Childhood Attitudes, Kindergarten, Longitudinal Studies, Puppetry
Bulut, Okan – ProQuest LLC, 2013
The importance of subscores in educational and psychological assessments is undeniable. Subscores yield diagnostic information that can be used to determine how each examinee's abilities/skills vary over different content domains. One of the most common criticisms of reporting and using subscores is their insufficient reliability.…
Descriptors: Item Response Theory, Simulation, Correlation, Reliability
Gao, Song – ProQuest LLC, 2011
This study explored the relationship between successful guessing and latent ability in IRT models. A new IRT model was developed with a guessing function that integrates the probability of guessing an item correctly with the examinee's ability and the item parameters. The conventional 3PL IRT model was compared with the new 2PL-Guessing model on…
Descriptors: Correlation, Guessing (Tests), Item Response Theory, Models
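The abstract does not give the new model's guessing function, so the form below is purely hypothetical: the lower asymptote shrinks as ability rises relative to item difficulty, in contrast to the constant c of the conventional 3PL:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def p_3pl(theta, a, b, c=0.2):
        """Conventional 3PL: guessing is a constant c for all examinees."""
        return c + (1 - c) * sigmoid(a * (theta - b))

    def p_2pl_guessing(theta, a, b, c_max=0.25):
        """Hypothetical 2PL-with-guessing: the guessing probability shrinks
        as ability rises past item difficulty (an assumed functional form;
        the dissertation's actual model is not given in the abstract)."""
        g = c_max * sigmoid(-a * (theta - b))    # ability-dependent guessing
        return g + (1 - g) * sigmoid(a * (theta - b))

    for theta in (-2.0, 0.0, 2.0):
        print(theta, round(p_3pl(theta, 1.0, 0.0), 3),
              round(p_2pl_guessing(theta, 1.0, 0.0), 3))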
Goldin, Ilya M.; Koedinger, Kenneth R.; Aleven, Vincent – International Educational Data Mining Society, 2012
Although intelligent tutoring systems (ITSs) are supposed to adapt to differences among learners, little attention has so far been paid to how they might adapt to differences in how students learn from help. When students study with an ITS, they may receive multiple types of help, but they may not comprehend and make use of this help in the same way. To…
Descriptors: Performance Factors, Intelligent Tutoring Systems, Individual Differences, Prediction
Peer reviewed
Agus, Mirian; Penna, Maria Pietronilla; Peró-Cebollero, Maribel; Guàrdia-Olmos, Joan – EURASIA Journal of Mathematics, Science & Technology Education, 2016
Research on the graphical facilitation of probabilistic reasoning has been characterised by the effort expended to identify valid assessment tools. The authors developed an assessment instrument to compare reasoning performances when problems were presented in verbal-numerical and graphical-pictorial formats. A sample of undergraduate psychology…
Descriptors: Probability, Abstract Reasoning, Thinking Skills, Educational Assessment