Briana Hennessy – ProQuest LLC, 2021
State-wide tests are designed to measure students' overall ability on grade-level standards. School leaders want fine-grained information on student performance to inform curriculum and instruction. One target scoring method, which compares student scores to expected values, is currently used to give this feedback to schools, but there…
Descriptors: Standardized Tests, Academic Standards, Academic Ability, Scoring
Coronado, Semei; Sandoval-Bravo, Salvador; Celso-Arellano, Pedro Luis; Torres-Mata, Ana – European Journal of Contemporary Education, 2018
The purpose of this paper is to analyze the test applied at the eighth Statistics II tournament to students from the University Center for Economic and Administrative Sciences of the University of Guadalajara, in order to determine whether it promotes competitive learning among students. To achieve this, Item Response Theory (IRT) is…
Descriptors: Models, Competition, Item Response Theory, Student Motivation
Wang, Chao; Lu, Hong – Educational Technology & Society, 2018
This study focused on the effect of examinees' ability levels on the relationship between Reflective-Impulsive (RI) cognitive style and item response time in computerized adaptive testing (CAT). A total of 56 students majoring in Educational Technology from Shandong Normal University participated in this study, and their RI cognitive styles were…
Descriptors: Item Response Theory, Computer Assisted Testing, Cognitive Style, Correlation
Maeda, Hotaka; Zhang, Bo – International Journal of Testing, 2017
The omega (ω) statistic is reputed to be one of the best indices for detecting answer copying on multiple choice tests, but its performance relies on the accurate estimation of copier ability, which is challenging because responses from the copiers may have been contaminated. We propose an algorithm that aims to identify and delete the suspected…
Descriptors: Cheating, Test Items, Mathematics, Statistics
Nydick, Steven W. – Journal of Educational and Behavioral Statistics, 2014
The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…
Descriptors: Probability, Item Response Theory, Models, Classification
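The SPRT stopping rule summarized in the abstract above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the 2PL item model, the two ability points bracketing the classification bound, and the error rates are all assumed here for demonstration.

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response (illustrative item model)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def sprt_classify(responses, items, theta_lo, theta_hi, alpha=0.05, beta=0.05):
    """Sequential probability ratio test around a classification bound.

    theta_lo and theta_hi bracket the cutoff; responses are 0/1 item scores;
    items are (a, b) parameter pairs. The cumulative log-likelihood ratio is
    compared to prespecified critical values after each item.
    Returns 'upper', 'lower', or 'continue' (administer more items).
    """
    upper = math.log((1 - beta) / alpha)  # classify above the bound
    lower = math.log(beta / (1 - alpha))  # classify below the bound
    llr = 0.0
    for u, (a, b) in zip(responses, items):
        p_hi, p_lo = p_2pl(theta_hi, a, b), p_2pl(theta_lo, a, b)
        llr += math.log(p_hi / p_lo) if u else math.log((1 - p_hi) / (1 - p_lo))
        if llr >= upper:
            return "upper"
        if llr <= lower:
            return "lower"
    return "continue"
```

With a consistent response string the log-likelihood ratio drifts toward one critical value and the test terminates early; a mixed string keeps it between the bounds.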
Warrens, Matthijs J. – Psychometrika, 2010
The paper presents inequalities between four descriptive statistics that can be expressed in the form [P-E(P)]/[1-E(P)], where P is the observed proportion of agreement of a k × k table with identical categories, and E(P) is a function of the marginal probabilities. Scott's "pi" is an upper bound of Goodman and Kruskal's "lambda" and a…
Descriptors: Probability, Item Response Theory, Statistics, Evaluation
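The common form [P-E(P)]/[1-E(P)] in the abstract above can be illustrated with two of the statistics it concerns, Cohen's kappa and Scott's pi, which share the same observed agreement P and differ only in the chance term E(P). The 2 × 2 toy table below is an assumption for illustration, not data from the paper.

```python
def chance_corrected(table):
    """Return (kappa, pi) for a square agreement table of raw counts.

    Both statistics have the form (P - E) / (1 - E); they differ only in
    the chance-agreement term E(P) computed from the marginals.
    """
    n = sum(sum(row) for row in table)
    k = len(table)
    p_obs = sum(table[i][i] for i in range(k)) / n
    row = [sum(table[i]) / n for i in range(k)]
    col = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    # Cohen's kappa: E(P) is the product of the two raters' marginals.
    e_kappa = sum(row[i] * col[i] for i in range(k))
    # Scott's pi: E(P) uses the squared mean of the marginals.
    e_pi = sum(((row[i] + col[i]) / 2) ** 2 for i in range(k))
    return ((p_obs - e_kappa) / (1 - e_kappa),
            (p_obs - e_pi) / (1 - e_pi))
```

For the table [[20, 5], [10, 15]] this gives kappa = 0.40 and a slightly smaller pi, consistent with the ordering results the paper studies.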
Hauser, Carl; Thum, Yeow Meng; He, Wei; Ma, Lingling – Educational and Psychological Measurement, 2015
When conducting item reviews, analysts evaluate an array of statistical and graphical information to assess the fit of a field test (FT) item to an item response theory model. The process can be tedious, particularly when the number of human reviews (HR) to be completed is large. Furthermore, such a process leads to decisions that are susceptible…
Descriptors: Test Items, Item Response Theory, Research Methodology, Decision Making
Cai, Li; Monroe, Scott – National Center for Research on Evaluation, Standards, and Student Testing (CRESST), 2014
We propose a new limited-information goodness of fit test statistic C₂ for ordinal IRT models. The construction of the new statistic lies formally between the M₂ statistic of Maydeu-Olivares and Joe (2006), which utilizes first and second order marginal probabilities, and the M*₂ statistic of Cai and Hansen…
Descriptors: Item Response Theory, Models, Goodness of Fit, Probability
Ferrando, Pere J. – Psicologica: International Journal of Methodology and Experimental Psychology, 2010
This article proposes a procedure, based on a global statistic, for assessing intra-individual consistency in a test-retest design with a short-term retest interval. The procedure is developed within the framework of parametric item response theory, and the statistic is a likelihood-based measure that can be considered as an extension of the…
Descriptors: Item Response Theory, Intervals, Psychometrics, Testing
Fidalgo, Angel M.; Scalon, Joao D. – Journal of Psychoeducational Assessment, 2010
In spite of the growing interest in cross-cultural research and assessment, there is little research on statistical procedures that can be used to simultaneously assess the differential item functioning (DIF) across multiple groups. The chief objective of this work is to show a unified framework for the analysis of DIF in multiple groups using one…
Descriptors: Test Bias, Statistics, Evaluation, Item Response Theory
Rock, Donald A. – ETS Research Report Series, 2012
This paper provides a history of ETS's role in developing assessment instruments and psychometric procedures for measuring change in large-scale national assessments funded by the Longitudinal Studies branch of the National Center for Education Statistics. It documents the innovations developed during more than 30 years of working with…
Descriptors: Models, Educational Change, Longitudinal Studies, Educational Development
Watson, Jane M.; Callingham, Rosemary A.; Kelly, Ben A. – Mathematical Thinking and Learning: An International Journal, 2007
This study presents the results of a partial credit Rasch analysis of in-depth interview data exploring statistical understanding of 73 school students in 6 contextual settings. The use of Rasch analysis allowed the exploration of a single underlying variable across contexts, which included probability sampling, representation of temperature…
Descriptors: Statistics, Comprehension, Concept Formation, Probability
Hendrawan, Irene; Glas, Cees A. W.; Meijer, Rob R. – Applied Psychological Measurement, 2005
The effect of person misfit to an item response theory model on a mastery/nonmastery decision was investigated. Furthermore, it was investigated whether the classification precision can be improved by identifying misfitting respondents using person-fit statistics. A simulation study was conducted to investigate the probability of a correct…
Descriptors: Probability, Statistics, Test Length, Simulation
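A standard example of the person-fit statistics the abstract above refers to is the standardized log-likelihood l_z. The sketch below is illustrative and assumes the model-implied response probabilities for one examinee are already available; it is not the specific statistic used in the study.

```python
import math

def lz_person_fit(responses, probs):
    """Standardized log-likelihood person-fit statistic l_z.

    responses: 0/1 item scores for one person; probs: model-implied
    probabilities of a correct response on each item for that person.
    Large negative values flag a misfitting (aberrant) response pattern.
    """
    l0 = sum(u * math.log(p) + (1 - u) * math.log(1 - p)
             for u, p in zip(responses, probs))
    mean = sum(p * math.log(p) + (1 - p) * math.log(1 - p) for p in probs)
    var = sum(p * (1 - p) * math.log(p / (1 - p)) ** 2 for p in probs)
    return (l0 - mean) / math.sqrt(var)
```

A pattern that matches the model (correct on easy items, incorrect on hard ones) yields a positive l_z, while the reversed pattern yields a strongly negative value.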