Mrazik, Martin; Janzen, Troy M.; Dombrowski, Stefan C.; Barford, Sean W.; Krawchuk, Lindsey L. – Canadian Journal of School Psychology, 2012
A total of 19 graduate students enrolled in a graduate course conducted 6 consecutive administrations of the Wechsler Intelligence Scale for Children, 4th edition (WISC-IV, Canadian version). Test protocols were examined to obtain data describing the frequency of examiner errors, including administration and scoring errors. Results identified 511…
Descriptors: Intelligence Tests, Intelligence, Statistical Analysis, Scoring

Puhan, Gautam – Applied Measurement in Education, 2009
The purpose of this study is to determine the extent of scale drift on a test that employs cut scores. Examining scale drift was essential for this testing program because new forms are often put on scale through a series of intermediate equatings (known as equating chains). This process may cause equating error to…
Descriptors: Testing Programs, Testing, Measurement Techniques, Item Response Theory

Wyse, Adam E.; Mapuranga, Raymond – International Journal of Testing, 2009
Differential item functioning (DIF) analysis is a statistical technique used for ensuring the equity and fairness of educational assessments. This study formulates a new DIF analysis method using the information similarity index (ISI). ISI compares item information functions when data fits the Rasch model. Through simulations and an international…
Descriptors: Test Bias, Evaluation Methods, Test Items, Educational Assessment

Cohen, Jon; Chan, Tsze; Jiang, Tao; Seburn, Mary – Applied Psychological Measurement, 2008
U.S. state educational testing programs administer tests to track student progress and hold schools accountable for educational outcomes. Methods from item response theory, especially Rasch models, are usually used to equate different forms of a test. The most popular method for estimating Rasch models yields inconsistent estimates and relies on…
Descriptors: Testing Programs, Educational Testing, Item Response Theory, Computation

Huynh, Huynh – Journal of Educational Statistics, 1990
False positive and false negative error rates were studied for competency testing when failing examinees are permitted to retake the test. Formulas are provided for the beta-binomial and Rasch models. Estimates based on these models are compared for six data sets from the South Carolina Basic Skills Assessment Program. (SLD)
Descriptors: Elementary Secondary Education, Equations (Mathematics), Error Patterns, Estimation (Mathematics)