Davis-Becker, Susan L.; Buckendahl, Chad W. – International Journal of Testing, 2013
A critical component of the standard setting process is collecting evidence to evaluate the recommended cut scores and their use for making decisions and classifying students based on test performance. Kane (1994, 2001) proposed a framework by which practitioners can identify and evaluate evidence of the results of the standard setting from (1)…
Descriptors: Standard Setting (Scoring), Evidence, Validity, Cutting Scores
Teachers' Perceptions of Large-Scale Assessment Programs within Low-Stakes Accountability Frameworks
Klinger, Don A.; Rogers, W. Todd – International Journal of Testing, 2011
The intent of this study was to examine the views of teachers regarding the appropriateness of the purposes and uses of the provincial assessments in Alberta and Ontario and the seriousness of the concerns raised about these assessments. These provinces represent educational jurisdictions that use large-scale assessments within a low-stakes…
Descriptors: Testing Programs, Educational Improvement, Measures (Individuals), Foreign Countries
Carlson, Janet F.; Geisinger, Kurt F. – International Journal of Testing, 2012
The test review process used by the Buros Center for Testing is described as a series of 11 steps: (1) identifying tests to be reviewed, (2) obtaining tests and preparing test descriptions, (3) determining whether tests meet review criteria, (4) identifying appropriate reviewers, (5) selecting reviewers, (6) sending instructions and materials to…
Descriptors: Testing, Test Reviews, Evaluation Methods, Evaluation Criteria
Geisinger, Kurt F. – International Journal of Testing, 2012
This article sets the stage for the description of a variety of approaches to test reviewing worldwide. It describes the importance of test reviewing as a protection of the public and of society and also the benefits of this activity for test users, who must choose measures to use in particular situations with particular clients at a particular…
Descriptors: Test Reviews, Evaluation Methods, Evaluation Criteria, Global Approach
Wyse, Adam E.; Mapuranga, Raymond – International Journal of Testing, 2009
Differential item functioning (DIF) analysis is a statistical technique used for ensuring the equity and fairness of educational assessments. This study formulates a new DIF analysis method using the information similarity index (ISI). ISI compares item information functions when the data fit the Rasch model. Through simulations and an international…
Descriptors: Test Bias, Evaluation Methods, Test Items, Educational Assessment
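The abstract above refers to comparing item information functions under the Rasch model. The ISI formula itself is not given here, so the following is only a minimal sketch: the Rasch probability and information functions are standard, but the overlap-style similarity measure and the group-specific difficulty values are hypothetical stand-ins, not the authors' ISI.

```python
# Sketch of comparing Rasch item information functions between two groups.
# The actual information similarity index (ISI) is not specified in the
# abstract; the overlap ratio below is a hypothetical stand-in.
import numpy as np

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def item_information(theta, b):
    """Rasch item information: I(theta) = P(theta) * (1 - P(theta))."""
    p = rasch_prob(theta, b)
    return p * (1.0 - p)

theta = np.linspace(-4, 4, 401)        # ability grid
b_reference, b_focal = 0.20, 0.85      # hypothetical item difficulties per group

info_ref = item_information(theta, b_reference)
info_foc = item_information(theta, b_focal)

# Hypothetical similarity measure: shared area under the two information
# curves divided by the area under the reference curve (1.0 = identical).
overlap = np.trapz(np.minimum(info_ref, info_foc), theta)
similarity = overlap / np.trapz(info_ref, theta)
print(f"information-curve similarity: {similarity:.3f}")
```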
Childs, Ruth A.; Jaciw, Andrew P.; Saunders, Kelsey – International Journal of Testing, 2007
Many approaches to standard-setting use item calibration and student score estimation results to structure panelists' tasks. However, this requires collecting standard-setting judgments after the item analysis results are available. The Scoring Guide Alignment approach collects standard-setting judgments during the scoring sessions from teachers…
Descriptors: Testing Programs, Scoring, Item Analysis, Test Items
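The Scoring Guide Alignment procedure itself is only outlined in the abstract above. As a generic illustration of how panelist item judgments are aggregated into a recommended cut score, the sketch below uses a simple Angoff-style mean; this is a different and much simpler method than the one the article describes, and the ratings are invented.

```python
# Generic illustration only: an Angoff-style aggregation of panelist item
# judgments into a recommended cut score. This is NOT the Scoring Guide
# Alignment procedure described in the article; all values are invented.
import statistics

# ratings[panelist][item] = judged probability that a minimally competent
# student answers the item correctly
ratings = [
    [0.70, 0.55, 0.80, 0.60, 0.45],
    [0.65, 0.60, 0.75, 0.55, 0.50],
    [0.75, 0.50, 0.85, 0.65, 0.40],
]

# Each panelist's implied cut score is the sum of their item judgments;
# the panel recommendation is typically the mean (or median) across panelists.
panelist_cuts = [sum(r) for r in ratings]
recommended_cut = statistics.mean(panelist_cuts)
print(f"panelist cut scores: {panelist_cuts}")
print(f"recommended cut score: {recommended_cut:.2f} of {len(ratings[0])} points")
```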
Breithaupt, Krista; Ariel, Adelaide; Veldkamp, Bernard P. – International Journal of Testing, 2005
This article offers some solutions used in the assembly of the computerized Uniform Certified Public Accountancy (CPA) licensing examination as practical alternatives for operational programs producing large numbers of forms. The Uniform CPA examination was offered as an adaptive multistage test (MST) beginning in April of 2004. Examples of…
Descriptors: Foreign Countries, Testing Programs, Programming, Mathematical Applications
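For readers unfamiliar with the multistage format mentioned above, the sketch below shows only the basic routing idea: the number-correct score on a first-stage module determines which second-stage module is administered. The module names, panel sizes, and routing thresholds are invented for illustration and do not describe the actual CPA examination assembly.

```python
# Minimal sketch of adaptive multistage test (MST) routing. Module names and
# routing thresholds are invented; they do not describe the CPA examination.
from typing import List

STAGE2_MODULES = {"easy": "module E", "medium": "module M", "hard": "module H"}

def route_to_stage2(stage1_responses: List[int]) -> str:
    """Pick a second-stage module from the number-correct score on stage 1."""
    score = sum(stage1_responses)
    if score <= 3:
        return STAGE2_MODULES["easy"]
    if score <= 7:
        return STAGE2_MODULES["medium"]
    return STAGE2_MODULES["hard"]

# Example: an examinee who answered 6 of 10 stage-1 items correctly
print(route_to_stage2([1, 1, 0, 1, 0, 1, 1, 0, 1, 0]))  # -> module M
```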

Glas, Cees A. W. – International Journal of Testing, 2002
"Test Scoring" provides insight into psychometric procedures as used by a professional testing company or in large-scale projects. The book contains an overview of standard test theory, a discussion of factor analytic theory, and an exploration of special applications and problems. (SLD)
Descriptors: Educational Testing, Factor Analysis, Measurement Techniques, Psychometrics
Li, Yuan H.; Tompkins, Leroy J. – International Journal of Testing, 2004
The primary objective of this study was to examine the construct validity of two multiple-content testing programs, the multiple-choice Comprehensive Tests of Basic Skills (CTBS/5) and the performance-based Maryland School Performance Assessment Program (MSPAP), by evaluating the true-score longitudinal associations among…
Descriptors: Testing Programs, Structural Equation Models, Performance Based Assessment, Multitrait Multimethod Techniques
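The study's own analysis uses structural equation models of true-score longitudinal associations. As a much simpler illustration of the multitrait-multimethod idea named in the descriptors, the sketch below builds a correlation matrix for two traits each measured by two methods, using simulated scores rather than anything from the article.

```python
# Simplified multitrait-multimethod (MTMM) illustration: two traits (math,
# reading) each measured by two methods (multiple-choice, performance-based).
# Scores are simulated; the article's analysis used structural equation models.
import numpy as np

rng = np.random.default_rng(0)
n = 200
math_true = rng.normal(size=n)
read_true = 0.6 * math_true + 0.8 * rng.normal(size=n)   # correlated traits

scores = np.column_stack([
    math_true + 0.5 * rng.normal(size=n),   # math, multiple-choice
    math_true + 0.7 * rng.normal(size=n),   # math, performance-based
    read_true + 0.5 * rng.normal(size=n),   # reading, multiple-choice
    read_true + 0.7 * rng.normal(size=n),   # reading, performance-based
])

labels = ["math-MC", "math-PB", "read-MC", "read-PB"]
corr = np.corrcoef(scores, rowvar=False)

# Convergent validity: same trait, different method (e.g., math-MC vs math-PB)
# should correlate more highly than different traits measured by one method.
for i in range(4):
    print(labels[i], np.round(corr[i], 2))
```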

Yin, Ping; Brennan, Robert L. – International Journal of Testing, 2002
Studied longitudinal changes in performance at both the student and school district level in major content areas of a widely used norm-referenced grade-level testing program. Used data from grades 3 to 4 and from 7 to 8 of the Iowa Tests of Basic Skills (in Iowa). Reports descriptive statistics and empirical norms and reliability estimates for…
Descriptors: Achievement Tests, Elementary Education, Elementary School Students, Longitudinal Studies
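The abstract above reports reliability estimates for the ITBS data without naming the statistic. As a generic illustration of one common internal-consistency estimate, coefficient (Cronbach's) alpha, the sketch below computes it from a simulated item-score matrix; whether this is the estimate the authors used is not stated.

```python
# Generic illustration of coefficient (Cronbach's) alpha from an item-score
# matrix. The data are simulated; the article's actual reliability statistic
# is not specified in the abstract.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: rows = examinees, columns = items (0/1 or point values)."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

rng = np.random.default_rng(1)
ability = rng.normal(size=300)
difficulty = rng.normal(size=20)
# Simulate 20 dichotomous items whose success probability rises with ability
probs = 1.0 / (1.0 + np.exp(-(ability[:, None] - difficulty[None, :])))
responses = (rng.random((300, 20)) < probs).astype(float)

print(f"coefficient alpha: {cronbach_alpha(responses):.3f}")
```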