Davis-Becker, Susan L.; Buckendahl, Chad W. – International Journal of Testing, 2013
A critical component of the standard setting process is collecting evidence to evaluate the recommended cut scores and their use for making decisions and classifying students based on test performance. Kane (1994, 2001) proposed a framework by which practitioners can identify and evaluate evidence of the results of the standard setting from (1)…
Descriptors: Standard Setting (Scoring), Evidence, Validity, Cutting Scores

Teachers' Perceptions of Large-Scale Assessment Programs within Low-Stakes Accountability Frameworks
Klinger, Don A.; Rogers, W. Todd – International Journal of Testing, 2011
The intent of this study was to examine the views of teachers regarding the appropriateness of the purposes and uses of the provincial assessments in Alberta and Ontario and the seriousness of the concerns raised about these assessments. These provinces represent educational jurisdictions that use large-scale assessments within a low-stakes…
Descriptors: Testing Programs, Educational Improvement, Measures (Individuals), Foreign Countries

Carlson, Janet F.; Geisinger, Kurt F. – International Journal of Testing, 2012
The test review process used by the Buros Center for Testing is described as a series of 11 steps: (1) identifying tests to be reviewed, (2) obtaining tests and preparing test descriptions, (3) determining whether tests meet review criteria, (4) identifying appropriate reviewers, (5) selecting reviewers, (6) sending instructions and materials to…
Descriptors: Testing, Test Reviews, Evaluation Methods, Evaluation Criteria

Geisinger, Kurt F. – International Journal of Testing, 2012
This article sets the stage for the description of a variety of approaches to test reviewing worldwide. It describes the importance of test reviewing as a protection of the public and of society and also the benefits of this activity for test users, who must choose measures to use in particular situations with particular clients at a particular…
Descriptors: Test Reviews, Evaluation Methods, Evaluation Criteria, Global Approach

Wyse, Adam E.; Mapuranga, Raymond – International Journal of Testing, 2009
Differential item functioning (DIF) analysis is a statistical technique used for ensuring the equity and fairness of educational assessments. This study formulates a new DIF analysis method using the information similarity index (ISI). ISI compares item information functions when data fits the Rasch model. Through simulations and an international…
Descriptors: Test Bias, Evaluation Methods, Test Items, Educational Assessment
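The Wyse and Mapuranga abstract compares item information functions under the Rasch model. The abstract does not define the information similarity index itself, but the Rasch item information it builds on is standard: for an item of difficulty b, the probability of a correct response at ability theta is P = exp(theta - b) / (1 + exp(theta - b)), and the item information is P(1 - P). A minimal sketch (function name `rasch_info` is illustrative, not from the article):

```python
import math

def rasch_info(theta: float, b: float) -> float:
    """Fisher information of a Rasch (1PL) item at ability theta.

    P(theta) = exp(theta - b) / (1 + exp(theta - b)); information = P * (1 - P).
    """
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

# Information peaks when ability matches item difficulty (P = 0.5).
print(rasch_info(0.0, 0.0))  # 0.25
```

Under the Rasch model the information curve peaks at theta = b with a maximum of 0.25, which is why comparing these curves across groups can reveal items that measure differently for different examinee populations.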
Childs, Ruth A.; Jaciw, Andrew P.; Saunders, Kelsey – International Journal of Testing, 2007
Many approaches to standard-setting use item calibration and student score estimation results to structure panelists' tasks. However, this requires collecting standard-setting judgments after the item analysis results are available. The Scoring Guide Alignment approach collects standard-setting judgments during the scoring sessions from teachers…
Descriptors: Testing Programs, Scoring, Item Analysis, Test Items