Showing all 6 results
Peer reviewed
Stemler, Steven E.; Naples, Adam – Practical Assessment, Research & Evaluation, 2021
When students receive the same score on a test, does that mean they know the same amount about the topic? The answer to this question is more complex than it may first appear. This paper compares classical and modern test theories in terms of how they estimate student ability. Crucial distinctions between the aims of Rasch Measurement and IRT are…
Descriptors: Item Response Theory, Test Theory, Ability, Computation
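A rough illustration of the Rasch/IRT distinction this abstract raises (the notation is assumed here, not taken from the paper): the Rasch model fixes every item's discrimination at 1, whereas the two-parameter IRT model lets it vary,

    P(X_{ij}=1 \mid \theta_i) = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)}   (Rasch)
    P(X_{ij}=1 \mid \theta_i) = \frac{\exp(a_j(\theta_i - b_j))}{1 + \exp(a_j(\theta_i - b_j))}   (2PL IRT)

Under the Rasch model the raw sum score is a sufficient statistic for theta, so two examinees with the same number correct receive the same ability estimate; once the a_j differ, the particular pattern of items answered correctly also matters, which is one reason identical scores need not imply identical estimated ability.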
Peer reviewed
Huebner, Alan; Skar, Gustaf B. – Practical Assessment, Research & Evaluation, 2021
Writing assessments often consist of students responding to multiple prompts, which are judged by more than one rater. To establish the reliability of these assessments, several methods exist for disentangling the variation due to prompts and raters, including classical test theory, Many Facet Rasch Measurement (MFRM), and Generalizability Theory…
Descriptors: Error of Measurement, Test Theory, Generalizability Theory, Item Response Theory
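For context on how Generalizability Theory handles this (a textbook sketch, not necessarily the design analyzed in the article): in a fully crossed person x prompt x rater design, observed-score variance is partitioned into components for persons (p), prompts (i), raters (r), and their interactions,

    \sigma^2(X_{pir}) = \sigma^2_p + \sigma^2_i + \sigma^2_r + \sigma^2_{pi} + \sigma^2_{pr} + \sigma^2_{ir} + \sigma^2_{pir,e}

and a generalizability coefficient over n_i prompts and n_r raters is

    E\rho^2 = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{pi}/n_i + \sigma^2_{pr}/n_r + \sigma^2_{pir,e}/(n_i n_r)}

so a decision study can show whether adding prompts or adding raters buys more reliability.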
Peer reviewed
Wolkowitz, Amanda; Davis-Becker, Susan – Practical Assessment, Research & Evaluation, 2015
This study evaluates the impact of common item characteristics on the outcome of equating in credentialing examinations when traditionally recommended representation is not possible. This research used real data sets from several credentialing exams to test the impact of content representation, item statistics, and number of common items on…
Descriptors: Test Items, Equated Scores, Licensing Examinations (Professions), Test Content
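As one concrete example of common-item (anchor) equating of the kind studied here (offered as an illustration, not necessarily the method used in the paper): under IRT mean-sigma linking, the common items' difficulty estimates from the two calibrations define a linear transformation

    A = \frac{\sigma(\hat{b}^{Y}_{c})}{\sigma(\hat{b}^{X}_{c})}, \qquad B = \mu(\hat{b}^{Y}_{c}) - A\,\mu(\hat{b}^{X}_{c})

with \theta^{*} = A\theta + B placing Form X abilities on the Form Y scale. Because the entire link runs through the common items, their content representativeness, statistical quality, and number directly constrain how well the transformation holds, which is the question the study examines.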
Peer reviewed
Ravand, Hamdollah – Practical Assessment, Research & Evaluation, 2015
Cognitive diagnostic models (CDMs) have been around for more than a decade, but their application is far from widespread, mainly for two reasons: (1) CDMs are novel compared with traditional IRT models, so many researchers lack familiarity with them and their properties, and (2) software programs that fit CDMs have been expensive and not…
Descriptors: Test Theory, Models, Computer Software, Open Source Technology
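For orientation, one of the simplest CDMs is the DINA model (given here as a generic illustration, not as the particular model discussed in the article): with Q-matrix entry q_{jk} indicating whether item j requires attribute k, and \alpha_{ik} examinee i's mastery of attribute k,

    \eta_{ij} = \prod_{k} \alpha_{ik}^{\,q_{jk}}, \qquad P(X_{ij}=1 \mid \boldsymbol{\alpha}_i) = (1 - s_j)^{\eta_{ij}}\, g_j^{\,1 - \eta_{ij}}

where s_j is the slip probability and g_j the guessing probability, so the model returns a mastery profile \boldsymbol{\alpha}_i rather than a single ability score.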
Peer reviewed
Lyrén, Per-Erik – Practical Assessment, Research & Evaluation, 2009
The added value of reporting subscores on a college admission test (SweSAT) was examined in this study. Using a CTT-derived objective method for determining the value of reporting subscores, the study concluded that there is added value in reporting section scores (Verbal/Quantitative) as well as subtest scores. These results differ from a study of…
Descriptors: College Entrance Examinations, Scores, Test Theory, Foreign Countries
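A widely used criterion of the CTT-derived kind this abstract mentions is Haberman's proportional reduction in mean squared error (PRMSE); whether it is the exact procedure used here is an assumption. A subscore S is worth reporting only if it predicts its true subscore T_s better than the total score X does:

    \text{PRMSE}(S) = \rho^2(S, T_s) > \rho^2(X, T_s) = \text{PRMSE}(X)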
Peer reviewed
Wiberg, Marie; Sundström, Anna – Practical Assessment, Research & Evaluation, 2009
A common problem in predictive validity studies in the educational and psychological fields, e.g. in educational and employment selection, is restriction in range of the predictor variables. There are several methods for correcting correlations for restriction of range. The aim of this paper was to examine the usefulness of two approaches to…
Descriptors: Predictive Validity, Predictor Variables, Correlation, Mathematics
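A standard correction of the kind compared in such studies is Thorndike's Case 2 formula (shown as an illustration; the two approaches examined in the paper are not specified in this excerpt):

    r_c = \frac{r\,(S_X/s_X)}{\sqrt{1 - r^2 + r^2 (S_X^2/s_X^2)}}

where r is the predictor-criterion correlation in the range-restricted (selected) sample, s_X the restricted and S_X the unrestricted standard deviation of the predictor, and r_c the corrected estimate.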