Olasunkanmi James Kehinde – ProQuest LLC, 2024
The Q-matrix plays a key role in implementations of diagnostic classification models (DCMs), also known as cognitive diagnostic models (CDMs), a family of psychometric models that is gaining attention for providing diagnostic information on students' mastery of cognitive attributes or skills. Using two Monte Carlo simulation studies, this dissertation…
Descriptors: Diagnostic Tests, Q Methodology, Learning Trajectories, Sample Size

W. Jake Thompson – Grantee Submission, 2024
Diagnostic classification models (DCMs) are psychometric models that can be used to estimate the presence or absence of psychological traits, or proficiency on fine-grained skills. Critical to the use of any psychometric model in practice, including DCMs, is an evaluation of model fit. Traditionally, DCMs have been estimated with maximum…
Descriptors: Bayesian Statistics, Classification, Psychometrics, Goodness of Fit
Lee, Won-Chan; Kim, Stella Y.; Choi, Jiwon; Kang, Yujin – Journal of Educational Measurement, 2020
This article considers psychometric properties of composite raw scores and transformed scale scores on mixed-format tests that consist of a mixture of multiple-choice and free-response items. Test scores on several mixed-format tests are evaluated with respect to conditional and overall standard errors of measurement, score reliability, and…
Descriptors: Raw Scores, Item Response Theory, Test Format, Multiple Choice Tests
Paulsen, Justin; Valdivia, Dubravka Svetina – Journal of Experimental Education, 2022
Cognitive diagnostic models (CDMs) are a family of psychometric models designed to provide categorical classifications for multiple latent attributes. CDMs provide more granular evidence than other psychometric models and have potential for guiding teaching and learning decisions in the classroom. However, CDMs have primarily been conducted using…
Descriptors: Psychometrics, Classification, Teaching Methods, Learning Processes
Grabovsky, Irina; Wainer, Howard – Journal of Educational and Behavioral Statistics, 2017
In this article, we extend the methodology of the Cut-Score Operating Function that we introduced previously and apply it to a testing scenario with multiple independent components and different testing policies. We derive analytically the overall classification error rate for a test battery under the policy when several retakes are allowed for…
Descriptors: Cutting Scores, Weighted Scores, Classification, Testing
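The effect the abstract alludes to, a retake policy changing the overall classification error of a test battery, can be illustrated with a toy calculation. This is a hedged sketch under a simple independence assumption, not the Cut-Score Operating Function methodology the article derives; the function name and numbers are illustrative only.

```python
# Toy illustration: if a true non-master passes a single test with
# probability p (a false positive), and attempts are independent,
# allowing k total attempts raises that error to 1 - (1 - p)**k.
def false_positive_rate(p_single: float, attempts: int) -> float:
    """Probability a non-master passes at least once in `attempts` tries."""
    return 1 - (1 - p_single) ** attempts

# With a 5% per-attempt false-positive rate, allowing four attempts
# raises the overall false-positive rate to about 18.5%.
overall = false_positive_rate(0.05, 4)
```

The same algebra runs in reverse for false negatives: retakes shrink the chance that a true master never passes, which is why policy and error rate have to be analyzed together, as the article does.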
Molenaar, Dylan; Dolan, Conor V.; de Boeck, Paul – Psychometrika, 2012
The Graded Response Model (GRM; Samejima, "Estimation of ability using a response pattern of graded scores," Psychometric Monograph No. 17, Richmond, VA: The Psychometric Society, 1969) can be derived by assuming a linear regression of a continuous variable, Z, on the trait, [theta], to underlie the ordinal item scores (Takane & de Leeuw in…
Descriptors: Simulation, Regression (Statistics), Psychometrics, Item Response Theory
Jiao, Hong; Kamata, Akihito; Wang, Shudong; Jin, Ying – Journal of Educational Measurement, 2012
The applications of item response theory (IRT) models assume local item independence and that examinees are independent of each other. When a representative sample for psychometric analysis is selected using a cluster sampling method in a testlet-based assessment, both local item dependence and local person dependence are likely to be induced.…
Descriptors: Item Response Theory, Test Items, Markov Processes, Monte Carlo Methods
Yang, Manshu; Chow, Sy-Miin – Psychometrika, 2010
Facial electromyography (EMG) is a useful physiological measure for detecting subtle affective changes in real time. A time series of EMG data contains bursts of electrical activity that increase in magnitude when the pertinent facial muscles are activated. Whereas previous methods for detecting EMG activation are often based on deterministic or…
Descriptors: Test Bias, Error of Measurement, Human Body, Diagnostic Tests
Heyrman, Lieve; Molenaers, Guy; Desloovere, Kaat; Verheyden, Geert; De Cat, Jos; Monbaliu, Elegast; Feys, Hilde – Research in Developmental Disabilities: A Multidisciplinary Journal, 2011
In this study the psychometric properties of the Trunk Control Measurement Scale (TCMS) in children with cerebral palsy (CP) were examined. Twenty-six children with spastic CP (mean age 11 years 3 months, range 8-15 years; Gross Motor Function Classification System level I n = 11, level II n = 5, level III n = 10) were included in this study. To…
Descriptors: Construct Validity, Cerebral Palsy, Test Validity, Interrater Reliability
Bramley, Tom – Educational Research, 2010
Background: A recent article published in "Educational Research" on the reliability of results in National Curriculum testing in England (Newton, "The reliability of results from national curriculum testing in England," "Educational Research" 51, no. 2: 181-212, 2009) suggested that: (1) classification accuracy can be…
Descriptors: National Curriculum, Educational Research, Testing, Measurement
Kupermintz, Haggai – Journal of Educational Measurement, 2004
A decision-theoretic approach to the question of reliability in categorically scored examinations is explored. The concepts of true scores and errors are discussed as they deviate from conventional psychometric definitions and measurement error in categorical scores is cast in terms of misclassifications. A reliability measure based on…
Descriptors: Test Reliability, Error of Measurement, Psychometrics, Test Theory

Wang, Tianyou; Kolen, Michael J.; Harris, Deborah J. – Journal of Educational Measurement, 2000
Describes procedures for calculating the conditional standard error of measurement (CSEM) and reliability of scale scores and the classification consistency of performance levels. Applied these procedures to data from the American College Testing Program's Work Keys Writing Assessment with sample sizes of 7,097, 1,035, and 1,793. Results show that the…
Descriptors: Adults, Classification, Error of Measurement, Item Response Theory
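The notion of a conditional standard error of measurement, an error estimate that varies with the score level rather than a single test-wide value, can be sketched with the classical binomial error model. This is only an illustrative approximation; it is not the scale-score procedure Wang, Kolen, and Harris apply to the Work Keys data.

```python
import math

def binomial_csem(x: int, n: int) -> float:
    """Lord's binomial-error CSEM for raw score x on an n-item test:
    sqrt(x * (n - x) / (n - 1)). Largest for mid-range scores,
    zero at the score extremes."""
    return math.sqrt(x * (n - x) / (n - 1))

# On a 40-item test, a mid-range score carries more measurement
# error than a near-perfect one.
mid = binomial_csem(20, 40)
low = binomial_csem(38, 40)
```

Because CSEM shrinks toward the extremes, classification decisions near a cut score sit exactly where error is typically largest, which motivates the conditional analysis the article performs.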

Adams, G. R. – Journal of Adolescence, 1994
It has been proposed that the Objective Measure of Ego Identity Status and its scoring criteria should be adjusted to utilize a half standard deviation cutoff. Evidence is provided to support the claim that less stringent criteria will lead to higher numbers classified while arriving at the same research results. This proposal is considered and,…
Descriptors: Classification, Data Interpretation, Error of Measurement, Identification (Psychology)

Jones, R. M.; And Others – Journal of Adolescence, 1994
Results from this study indicate that a cutoff consisting of the mean plus a half standard deviation is more desirable than the original mean plus one standard deviation strategy for categorizing respondents into a "pure" identity status. Status-specific comparisons indicated groups were not significantly different on measures of…
Descriptors: Adolescents, Classification, Data Interpretation, Error of Measurement
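The cutoff rule debated in the two entries above is simple enough to state in code: a respondent is assigned a "pure" identity status when their subscale score exceeds the sample mean plus some multiple k of the standard deviation, with k = 0.5 (the proposed, less stringent rule) versus k = 1.0 (the original rule). This sketch is a generic illustration of that arithmetic, not the scoring procedure of the Objective Measure of Ego Identity Status itself.

```python
import statistics

def classify(scores: list[float], k: float = 0.5) -> list[bool]:
    """Flag each score that exceeds mean + k * sample SD."""
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    cutoff = mean + k * sd
    return [s > cutoff for s in scores]

scores = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# Lowering k from 1.0 to 0.5 lowers the cutoff, so the set of
# respondents classified can only grow, never shrink.
n_half_sd = sum(classify(scores, k=0.5))
n_full_sd = sum(classify(scores, k=1.0))
```

This is the crux of the disagreement: the half-SD rule classifies more respondents (fewer left "unclassified"), and the question both articles address is whether the extra classifications change substantive research conclusions.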
Abedi, Jamal – Teachers College Record, 2006
Assessments in English that are constructed for native English speakers may not provide valid inferences about the achievement of English language learners (ELLs). The linguistic complexity of the test items that are not related to the content of the assessment may increase the measurement error, thus reducing the reliability of the assessment.…
Descriptors: Second Language Learning, Test Items, Psychometrics, Inferences