Showing all 8 results
Peer reviewed
Finch, W. Holmes; Cassady, Jerrell C.; Helsper, C. Addison – International Journal of Testing, 2024
The Academic Anxiety Scale (AAS; Cassady, 2022; Cassady et al., 2019) is a measure of the construct academic anxiety, which is a generalized representation of anxieties experienced by learners in educational settings. Academic anxiety has been identified as a preclinical indicator of anxiety that provides important predictive utility to clinical…
Descriptors: Validity, Anxiety, Academic Achievement, Behavior Rating Scales
Peer reviewed
Ravand, Hamdollah; Baghaei, Purya – International Journal of Testing, 2020
More than three decades after their introduction, diagnostic classification models (DCMs) do not seem to have been implemented in educational systems for the purposes for which they were devised. Most DCM research is either methodological, aimed at model development and refinement, or involves retrofitting DCMs to existing nondiagnostic tests and, in the latter case, basically…
Descriptors: Classification, Models, Diagnostic Tests, Test Construction
Peer reviewed
Aksu Dunya, Beyza – International Journal of Testing, 2018
This study was conducted to analyze potential item parameter drift (IPD) impact on person ability estimates and classification accuracy when drift affects an examinee subgroup. Using a series of simulations, three factors were manipulated: (a) percentage of IPD items in the CAT exam, (b) percentage of examinees affected by IPD, and (c) item pool…
Descriptors: Adaptive Testing, Classification, Accuracy, Computer Assisted Testing
Peer reviewed
Bradshaw, Laine P.; Madison, Matthew J. – International Journal of Testing, 2016
In item response theory (IRT), the invariance property states that item parameter estimates are independent of the examinee sample, and examinee ability estimates are independent of the test items. While this property has long been established and understood by the measurement community for IRT models, the same cannot be said for diagnostic…
Descriptors: Classification, Models, Simulation, Psychometrics
Peer reviewed
Briggs, Derek C.; Circi, Ruhan – International Journal of Testing, 2017
Artificial Neural Networks (ANNs) have been proposed as a promising approach for the classification of students into different levels of a psychological attribute hierarchy. Unfortunately, because such classifications typically rely upon internally produced item response patterns that have not been externally validated, the instability of ANN…
Descriptors: Artificial Intelligence, Classification, Student Evaluation, Tests
Peer reviewed
Kunina-Habenicht, Olga; Rupp, André A.; Wilhelm, Oliver – International Journal of Testing, 2017
Diagnostic classification models (DCMs) hold great potential for applications in summative and formative assessment by providing discrete multivariate proficiency scores that yield statistically driven classifications of students. Using data from a newly developed diagnostic arithmetic assessment that was administered to 2032 fourth-grade students…
Descriptors: Grade 4, Foreign Countries, Classification, Mathematics Tests
Peer reviewed
Chiu, Chia-Yi; Köhn, Hans-Friedrich; Wu, Huey-Min – International Journal of Testing, 2016
The Reduced Reparameterized Unified Model (Reduced RUM) is a diagnostic classification model for educational assessment that has received considerable attention among psychometricians. However, the computational options for researchers and practitioners who wish to use the Reduced RUM in their work, but do not feel comfortable writing their own…
Descriptors: Educational Diagnosis, Classification, Models, Educational Assessment
Peer reviewed
Finch, W. Holmes; Hernández Finch, Maria E.; French, Brian F. – International Journal of Testing, 2016
Differential item functioning (DIF) assessment is key in score validation. When DIF is present, scores may not accurately reflect the construct of interest for some groups of examinees, leading to incorrect conclusions from the scores. Given rising immigration, and the increased reliance of educational policymakers on cross-national assessments…
Descriptors: Test Bias, Scores, Native Language, Language Usage