Showing all 15 results
Peer reviewed | PDF full text available on ERIC
Torre, Jimmy de la; Akbay, Lokman – Eurasian Journal of Educational Research, 2019
Purpose: Well-designed assessment methodologies and various cognitive diagnosis models (CDMs) have been developed to extract diagnostic information about examinees' individual strengths and weaknesses. Because these methods are relatively new and many educational specialists are unfamiliar with CDMs, their applications are not yet widespread. This article aims at…
Descriptors: Cognitive Measurement, Models, Computer Software, Testing
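The entry above discusses CDMs in general terms. For orientation, a minimal sketch of one widely used CDM, the DINA model, is given below; it is an illustration only and not necessarily the specific model applied in the article. For examinee i with attribute pattern \alpha_i = (\alpha_{i1}, \dots, \alpha_{iK}) and item j with Q-matrix row q_j,

    \eta_{ij} = \prod_{k=1}^{K} \alpha_{ik}^{\,q_{jk}}, \qquad
    P(X_{ij} = 1 \mid \alpha_i) = (1 - s_j)^{\eta_{ij}} \, g_j^{\,1 - \eta_{ij}},

where s_j is the item's slip probability and g_j its guessing probability; the estimated attribute pattern \alpha_i is the diagnostic profile of an examinee's strengths and weaknesses.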
Peer reviewed | Direct link
Ames, Allison; Myers, Aaron – Educational Measurement: Issues and Practice, 2019
Drawing valid inferences from modern measurement models is contingent upon a good fit of the data to the model. Violations of model-data fit have numerous consequences, limiting the usefulness and applicability of the model. As Bayesian estimation is becoming more common, understanding the Bayesian approaches for evaluating model-data fit models…
Descriptors: Bayesian Statistics, Psychometrics, Models, Predictive Measurement
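One Bayesian approach to model-data fit commonly discussed in this literature is the posterior predictive check. The Python sketch below is a hypothetical illustration, not code from the article: the posterior draws (theta_draws, b_draws) are random placeholders standing in for output from an MCMC sampler, and the Rasch model and the per-item proportion-correct discrepancy are arbitrary choices made for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy observed binary responses: 200 persons x 10 items (illustrative only).
    y_obs = rng.integers(0, 2, size=(200, 10))

    # Placeholder posterior draws for a Rasch model; in practice these would
    # come from an MCMC sampler fitted to y_obs.
    S = 500
    theta_draws = rng.normal(size=(S, 200))  # person abilities, one row per draw
    b_draws = rng.normal(size=(S, 10))       # item difficulties, one row per draw

    def discrepancy(y):
        # Test statistic: per-item proportion of correct responses.
        return y.mean(axis=0)

    T_obs = discrepancy(y_obs)
    ppp = np.zeros(10)
    for s in range(S):
        # Simulate a replicated data set from the s-th posterior draw.
        p = 1.0 / (1.0 + np.exp(-(theta_draws[s][:, None] - b_draws[s][None, :])))
        y_rep = rng.binomial(1, p)
        ppp += discrepancy(y_rep) >= T_obs
    ppp /= S
    # Posterior predictive p-values near 0 or 1 flag model-data misfit.
    print(ppp)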
Peer reviewed | PDF full text available on ERIC
Falk, Carl F.; Cai, Li – Grantee Submission, 2014
We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest…
Descriptors: Maximum Likelihood Statistics, Item Response Theory, Computation, Simulation
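A rough sketch of the idea in the entry above, in slope-intercept notation that may differ from the article's: the generalized partial credit model writes the category probabilities for an item with categories c = 0, \dots, m_j as

    P(X_j = c \mid \theta) = \frac{\exp\!\big(c\, a_j \theta + d_{jc}\big)}{\sum_{h=0}^{m_j} \exp\!\big(h\, a_j \theta + d_{jh}\big)},

and the semi-parametric version replaces the linear term a_j \theta with a polynomial m_j^{*}(\theta) constrained so that its first derivative is nonnegative for all \theta (hence monotonic). With a degree-one polynomial the term reduces to a_j \theta, consistent with the abstract's statement that the regular generalized partial credit model is included at the lowest order.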
Schoen, Robert C.; LaVenia, Mark; Bauduin, Charity; Farina, Kristy – Grantee Submission, 2016
The subject of this report is a pair of written, group-administered tests designed to measure the performance of grade 1 and grade 2 students at the beginning of the school year in the domain of number and operations. These tests build on previous versions field-tested in fall 2013 (Schoen, LaVenia, Bauduin, & Farina, 2016). Because the tests…
Descriptors: Elementary School Mathematics, Grade 1, Grade 2, Mathematics Tests
Schoen, Robert C.; LaVenia, Mark; Champagne, Zachary M.; Farina, Kristy; Tazaz, Amanda M. – Grantee Submission, 2017
The following report describes an assessment instrument called the Mathematics Performance and Cognition (MPAC) interview. The MPAC interview was designed to measure two outcomes of interest: first and second graders' mathematics achievement in number, operations, and equality, and it was also designed to gather…
Descriptors: Interviews, Test Construction, Psychometrics, Elementary School Mathematics
Schoen, Robert C.; LaVenia, Mark; Champagne, Zachary M.; Farina, Kristy – Grantee Submission, 2017
This report provides an overview of the development, implementation, and psychometric properties of a student mathematics interview designed to assess first- and second-grade student achievement and thinking processes. The student interview was conducted with 622 first- or second-grade students in 22 schools located in two public school districts…
Descriptors: Interviews, Test Construction, Psychometrics, Elementary School Mathematics
Peer reviewed | PDF full text available on ERIC
Ferrando, Pere J.; Pallero, Rafael; Anguiano-Carrasco, Cristina – Psicologica: International Journal of Methodology and Experimental Psychology, 2013
The present study has two main aims. First, some open questions about the psychometric properties of the CTAC (an anxiety questionnaire for blind and visually impaired people) are addressed using item response theory (IRT). Second, the linear model is compared to the graded response model (GRM) in terms of measurement precision, sensitivity…
Descriptors: Anxiety, Visual Impairments, Comparative Analysis, Questionnaires
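For orientation, the two item models compared in the entry above can be sketched generically (the article's exact parameterization may differ). The linear model treats the item score as a linear function of the trait,

    X_{ij} = \mu_j + \lambda_j \theta_i + \varepsilon_{ij},

whereas the graded response model handles the ordered categories through cumulative response probabilities,

    P(X_{ij} \ge c \mid \theta_i) = \frac{1}{1 + \exp\!\big(-a_j(\theta_i - b_{jc})\big)}, \qquad
    P(X_{ij} = c \mid \theta_i) = P(X_{ij} \ge c \mid \theta_i) - P(X_{ij} \ge c + 1 \mid \theta_i).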
Peer reviewed | Direct link
Bagley, Anita M.; Gorton, George E.; Bjornson, Kristie; Bevans, Katherine; Stout, Jean L.; Narayanan, Unni; Tucker, Carole A. – Developmental Medicine & Child Neurology, 2011
Aim: Children and adolescents highly value their ability to participate in relevant daily life and recreational activities. The Activities Scale for Kids-performance (ASKp) instrument measures the frequency of performance of 30 common childhood activities, and has been shown to be valid and reliable. A revised and expanded 38-item ASKp (ASKp38)…
Descriptors: Recreational Activities, Play, Physical Disabilities, Cerebral Palsy
Peer reviewed | Direct link
Bauer, Daniel J. – Psychometrika, 2009
When using linear models for cluster-correlated or longitudinal data, a common modeling practice is to begin by fitting a relatively simple model and then to increase the model complexity in steps. New predictors might be added to the model, or a more complex covariance structure might be specified for the observations. When fitting models for…
Descriptors: Goodness of Fit, Computation, Models, Predictor Variables
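The stepwise practice described in the abstract can be illustrated with a small, hypothetical example. The variable names, the statsmodels-based workflow, and the likelihood ratio comparison below are assumptions made for illustration, not material from the article.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy import stats

    # Toy longitudinal data: 50 clusters, 8 observations each (names are hypothetical).
    rng = np.random.default_rng(1)
    n_clusters, n_obs = 50, 8
    df = pd.DataFrame({
        "cluster": np.repeat(np.arange(n_clusters), n_obs),
        "time": np.tile(np.arange(n_obs), n_clusters),
    })
    u = rng.normal(scale=1.0, size=n_clusters)
    df["score"] = 2.0 + 0.3 * df["time"] + u[df["cluster"]] + rng.normal(size=len(df))

    # Step 1: a relatively simple model (random intercept only), fit by ML.
    m1 = smf.mixedlm("score ~ time", df, groups=df["cluster"]).fit(reml=False)

    # Step 2: increase the model complexity (add a random slope for time).
    m2 = smf.mixedlm("score ~ time", df, groups=df["cluster"],
                     re_formula="~time").fit(reml=False)

    # Compare the nested models with a likelihood ratio test (2 extra covariance
    # parameters). The chi-square reference is only approximate here because
    # variance parameters lie on the boundary under the null.
    lr = 2 * (m2.llf - m1.llf)
    print("LR =", lr, "p ~", stats.chi2.sf(lr, df=2))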
Peer reviewed | Direct link
Ferrando, Pere J.; Lorenzo-Seva, Urbano – Applied Psychological Measurement, 2007
This article describes a general item response theory model for personality items that allows the information provided by the item response times to be used to estimate the individual trait levels. The submodel describing the item response times is a modification of Thissen's log-linear model and is based on the distance-difficulty hypothesis in…
Descriptors: Reaction Time, Personality Assessment, Goodness of Fit, Grants
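A rough sketch of the distance-difficulty idea mentioned above, under one possible parameterization (the article's actual model may differ): expected response time increases as the item's location approaches the person's trait level, for example

    \ln t_{ij} = \mu + \tau_i + \beta_j - \varphi \, \lvert \theta_i - b_j \rvert + \varepsilon_{ij},

where \tau_i is a person slowness parameter, \beta_j an item time-intensity parameter, and \varphi \ge 0 governs how strongly the person-item distance \lvert \theta_i - b_j \rvert shortens responses to items located far from the person's trait level.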
Peer reviewed | Direct link
Ferrando, Pere J.; Lorenzo-Seva, Urbano – Multivariate Behavioral Research, 2007
This article describes a model for response times that is proposed as a supplement to the usual factor-analytic model for responses to graded or more continuous typical-response items. The use of the proposed model together with the factor model provides additional information about the respondent and can potentially increase the accuracy of the…
Descriptors: Reaction Time, Item Response Theory, Computation, Likert Scales
Peer reviewed | Direct link
Hernandez, Jose M.; Rubio, Victor J.; Revuelta, Javier; Santacreu, Jose – Educational and Psychological Measurement, 2006
Trait psychology implicitly assumes consistency of personal traits. Mischel, however, argued against the idea of a general consistency of human beings. The present article aims to design a statistical procedure, based on an adaptation of the π* statistic, to measure the degree of intraindividual consistency independently of the measure used…
Descriptors: Personality Traits, Reliability, Test Items, Item Response Theory
Peer reviewed | Direct link
Wise, Steven L.; DeMars, Christine E. – Journal of Educational Measurement, 2006
The validity of inferences based on achievement test scores is dependent on the amount of effort that examinees put forth while taking the test. With low-stakes tests, for which this problem is particularly prevalent, there is a consequent need for psychometric models that can take into account differing levels of examinee effort. This article…
Descriptors: Guessing (Tests), Psychometrics, Inferences, Reaction Time
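The effort-moderated idea can be sketched as a mixture of solution behavior and rapid guessing classified from response time; the notation below is a simplified illustration and may not match the article exactly:

    P(X_{ij} = 1 \mid \theta_i) = SB_{ij} \, P_j(\theta_i) + (1 - SB_{ij}) \, \frac{1}{k_j},

where SB_{ij} equals 1 when examinee i's response time on item j indicates solution behavior (it exceeds an item-specific threshold) and 0 for a rapid guess, P_j(\theta_i) is the usual IRT item response function, and k_j is the number of response options.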
New Mexico Public Education Department, 2007
The purpose of the NMSBA technical report is to provide users and other interested parties with a general overview of the 2007 NMSBA and its technical characteristics. The 2007 technical report contains the following information: (1) Test development; (2) Scoring procedures; (3) Summary of student performance; (4) Statistical analyses of item and…
Descriptors: Interrater Reliability, Standard Setting, Measures (Individuals), Scoring
Griph, Gerald W. – New Mexico Public Education Department, 2006
The purpose of the NMSBA technical report is to provide users and other interested parties with a general overview of the 2006 NMSBA and its technical characteristics. The 2006 technical report contains the following information: (1) Test development; (2) Scoring procedures; (3) Calibration, scaling, and equating procedures; (4) Standard setting;…
Descriptors: Interrater Reliability, Standard Setting, Measures (Individuals), Scoring