Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 0
Since 2016 (last 10 years) | 0
Since 2006 (last 20 years) | 5
Descriptor
Scores | 7
Test Bias | 7
Computer Assisted Testing | 2
Factor Analysis | 2
Item Response Theory | 2
Measurement | 2
Psychometrics | 2
Scoring | 2
Test Construction | 2
Test Interpretation | 2
Test Items | 2
Source
Educational Measurement: Issues and Practice | 7
Author
Armstrong, Anne-Marie | 1
Childs, Ruth A. | 1
Dolan, Conor V. | 1
Gattamorta, Karina | 1
Guion, Robert M. | 1
Kaira, Leah | 1
Li, Xueming | 1
Mislevy, Robert J. | 1
Penfield, Randall D. | 1
Pommerich, Mary | 1
Randall, Jennifer | 1
Publication Type
Journal Articles | 7
Reports - Research | 4
Opinion Papers | 3
Education Level
Grade 10 | 1
High Schools | 1
Secondary Education | 1
Assessments and Surveys
National Assessment of… | 1
Evaluating the Comparability of Paper- and Computer-Based Science Tests across Sex and SES Subgroups
Randall, Jennifer; Sireci, Stephen; Li, Xueming; Kaira, Leah – Educational Measurement: Issues and Practice, 2012
As access to and reliance on technology continue to increase, so does the use of computerized testing for admissions, licensure/certification, and accountability exams. Nonetheless, full computer-based test (CBT) implementation can be difficult due to limited resources. As a result, some testing programs offer both CBT and paper-based test (PBT)…
Descriptors: Science Tests, Computer Assisted Testing, Scores, Test Bias
Mislevy, Robert J. – Educational Measurement: Issues and Practice, 2012
This article presents the author's observations on Neil Dorans's NCME Career Award Address: "The Contestant Perspective on Taking Tests: Emanations from the Statue within." He calls attention to some points that Dr. Dorans made in his address, and offers his thoughts in response.
Descriptors: Testing, Test Reliability, Psychometrics, Scores
Pommerich, Mary – Educational Measurement: Issues and Practice, 2012
Neil Dorans has made a career of advocating for the examinee. He continues to do so in his NCME career award address, providing a thought-provoking commentary on some current trends in educational measurement that could potentially affect the integrity of test scores. Concerns expressed in the address call attention to a conundrum that faces…
Descriptors: Testing, Scores, Measurement, Test Construction
Penfield, Randall D.; Gattamorta, Karina; Childs, Ruth A. – Educational Measurement: Issues and Practice, 2009
Traditional methods for examining differential item functioning (DIF) in polytomously scored test items yield a single item-level index of DIF and thus provide no information concerning which score levels are implicated in the DIF effect. To address this limitation of DIF methodology, the framework of differential step functioning (DSF) has…
Descriptors: Test Bias, Test Items, Evaluation Methods, Scores
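To make the step-level idea concrete, here is a sketch in standard notation (mine, not necessarily the authors' exact formulation). For an item scored 0, ..., J, define at each step j = 1, ..., J a between-group log-odds ratio conditional on the matched trait level:

\lambda_j(\theta) = \ln \left[ \frac{P(Y \ge j \mid \theta, R) \,/\, P(Y < j \mid \theta, R)}{P(Y \ge j \mid \theta, F) \,/\, P(Y < j \mid \theta, F)} \right],

where R and F denote the reference and focal groups. The item exhibits DSF at step j when \lambda_j(\theta) \ne 0. A single item-level DIF index is, loosely, an aggregate of the \lambda_j across steps, which is why step effects of opposite sign can cancel and leave bias at particular score levels undetected.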
Wicherts, Jelte M.; Dolan, Conor V. – Educational Measurement: Issues and Practice, 2010
Measurement invariance with respect to groups is an essential aspect of the fair use of scores of intelligence tests and other psychological measurements. It is widely believed that equal factor loadings are sufficient to establish measurement invariance in confirmatory factor analysis. Here, it is shown why establishing measurement invariance…
Descriptors: Factor Structure, Intelligence Tests, Intelligence Quotient, Factor Analysis
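As a worked sketch of why equal loadings alone fall short (standard multi-group CFA notation, assumed here rather than taken from the article): in group g the common factor model is

x_g = \tau_g + \Lambda_g \eta_g + \varepsilon_g, \qquad E(x \mid g) = \tau_g + \Lambda_g \kappa_g,

where \tau_g are the intercepts, \Lambda_g the loadings, and \kappa_g the factor means. Equal loadings (\Lambda_1 = \Lambda_2, metric invariance) still permit \tau_1 \ne \tau_2, so an observed mean difference E(x \mid 1) - E(x \mid 2) = (\tau_1 - \tau_2) + \Lambda(\kappa_1 - \kappa_2) confounds intercept bias with a true factor-mean difference; fair between-group comparison of observed scores additionally requires equal intercepts (scalar invariance).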

Guion, Robert M. – Educational Measurement: Issues and Practice, 1995
This commentary discusses three essential themes in performance assessment and its scoring. First, scores should mean something. Second, performance scores should permit fair and meaningful comparisons. Third, validity-reducing errors should be minimal. Increased attention to performance assessment may overcome these problems. (SLD)
Descriptors: Educational Assessment, Performance Based Assessment, Scores, Scoring

Armstrong, Anne-Marie – Educational Measurement: Issues and Practice, 1993
The effects of differentially written multiple-choice tests and test takers' cognitive style on test performance were studied for 47 graduate students and 35 public school and college teachers. Adhering to item-writing guidelines resulted in essentially the same mean scores for the two groups of differing cognitive style. (SLD)
Descriptors: Cognitive Style, College Faculty, Comparative Testing, Graduate Students